rajuniit/udacity | image_classification/dlnd_image_classification.ipynb | mit |
import tarfile
from tqdm import tqdm as progress_bar_lib
from urllib.request import urlretrieve
from os.path import isfile, isdir
import problem_unittests as tests
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DownloadImageData(progress_bar_lib):
    last_batch = 0

    def start(self, batch_num=1, batch_size=1, total_size=None):
        self.total = total_size
        self.update((batch_num - self.last_batch) * batch_size)
        self.last_batch = batch_num

if not isfile('cifar-10-python.tar.gz'):
    with DownloadImageData(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset Downloading') as progress_bar_obj:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            'cifar-10-python.tar.gz',
            progress_bar_obj.start)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open('cifar-10-python.tar.gz') as tar:
        tar.extractall()

tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 4
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    return (x - np.min(x)) / (np.max(x) - np.min(x))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()

tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    n_classes = 10
    return np.eye(n_classes)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, (None, *image_shape), name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, (None, n_classes), name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    input_depth = int(x_tensor.shape[3])
    weight_shape = [*conv_ksize, input_depth, conv_num_outputs]
    weight = tf.Variable(tf.random_normal(weight_shape, stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs))

    conv_net = tf.nn.conv2d(x_tensor, weight, [1, *conv_strides, 1], padding="SAME")
    conv_net = tf.nn.bias_add(conv_net, bias)
    conv_net = tf.nn.relu(conv_net)

    # Max pooling must use pool_ksize, not the convolution kernel size
    mp_ksize = [1, *pool_ksize, 1]
    mp_strides = [1, *pool_strides, 1]
    conv_net = tf.nn.max_pool(conv_net, mp_ksize, mp_strides, padding="SAME")
    return conv_net
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weight_shape = (int(x_tensor.shape[1]), num_outputs)
    weight = tf.Variable(tf.random_normal(weight_shape, stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    layer = tf.add(tf.matmul(x_tensor, weight), bias)
    return tf.nn.relu(layer)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weight_shape = (int(x_tensor.shape[1]), num_outputs)
    weight = tf.Variable(tf.random_normal(weight_shape, stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    # No activation here: the loss and tests expect raw logits
    return tf.add(tf.matmul(x_tensor, weight), bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # Convolution / max-pool layer settings
    conv_num_outputs_layer_1 = 64
    conv_ksize_layer_1 = (3, 3)
    conv_strides_layer_1 = (1, 1)
    pool_ksize_layer_1 = (2, 2)
    pool_strides_layer_1 = (2, 2)

    conv_num_outputs_layer_2 = 64
    conv_ksize_layer_2 = (3, 3)
    conv_strides_layer_2 = (2, 2)
    pool_ksize_layer_2 = (2, 2)
    pool_strides_layer_2 = (2, 2)

    conv_num_outputs_layer_3 = 64
    conv_ksize_layer_3 = (3, 3)
    conv_strides_layer_3 = (1, 1)
    pool_ksize_layer_3 = (2, 2)
    pool_strides_layer_3 = (2, 2)

    # Fully connected / output layer settings
    fc_layer_1_num_outputs = 512
    fc_layer_2_num_outputs = 256
    output_num_outputs = 10

    # Apply three Convolution and Max Pool layers
    conv_layer_1 = conv2d_maxpool(x,
                                  conv_num_outputs_layer_1,
                                  conv_ksize_layer_1,
                                  conv_strides_layer_1,
                                  pool_ksize_layer_1,
                                  pool_strides_layer_1)
    conv_layer_2 = conv2d_maxpool(conv_layer_1,
                                  conv_num_outputs_layer_2,
                                  conv_ksize_layer_2,
                                  conv_strides_layer_2,
                                  pool_ksize_layer_2,
                                  pool_strides_layer_2)
    conv_layer_3 = conv2d_maxpool(conv_layer_2,
                                  conv_num_outputs_layer_3,
                                  conv_ksize_layer_3,
                                  conv_strides_layer_3,
                                  pool_ksize_layer_3,
                                  pool_strides_layer_3)

    # Flatten, then two fully connected layers with dropout
    flatten_layer = flatten(conv_layer_3)
    fc_layer_1 = fully_conn(flatten_layer, fc_layer_1_num_outputs)
    fc_layer_1 = tf.nn.dropout(fc_layer_1, keep_prob)
    fc_layer_2 = fully_conn(fc_layer_1, fc_layer_2_num_outputs)
    fc_layer_2 = tf.nn.dropout(fc_layer_2, keep_prob)

    # Output layer returns logits, one per class
    return output(fc_layer_2, output_num_outputs)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability}
    session.run(optimizer, feed_dict=feed_dict)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss: {:.4f} Validation Accuracy: {:.4f}'.format(loss, valid_accuracy))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = 0.7
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1
        print('Testing Accuracy: {}\n'.format(test_batch_acc_total / test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)

test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
dereneaton/ipyrad | testdocs/analysis/cookbook-distance.ipynb | gpl-3.0 |
# conda install ipyrad -c conda-forge -c bioconda
# conda install ipcoal -c conda-forge
import ipyrad.analysis as ipa
import ipcoal
import toyplot
import toytree
"""
Explanation: <h1><span style="color:gray">ipyrad-analysis toolkit:</span> distance</h1>
Genetic distance matrices are used in many contexts to study the evolutionary divergence of samples or populations. The ipa.distance module provides a simple and convenient framework to implement several distance based metrics.
Key features:
Filter SNPs to reduce missing data.
Impute missing data using population allele frequencies.
Calculate pairwise genetic distances between samples (e.g., p-dist, JC, HKY, Fst)
(coming soon) sliding window measurements along chromosomes.
required software
End of explanation
"""
# generate and draw an imbalanced 5 tip tree
tree = toytree.rtree.imbtree(ntips=5, treeheight=500000)
tree.draw(ts='p');
"""
Explanation: Species tree model
End of explanation
"""
# setup a model to simulate 8 haploid samples per species
model = ipcoal.Model(tree=tree, Ne=1e4, nsamples=8)
model.sim_loci(1000, 50)
model.write_snps_to_hdf5(name="test-dist", outdir="/tmp", diploid=True)
# the path to the HDF5 formatted snps file
SNPS = "/tmp/test-dist.snps.hdf5"
"""
Explanation: Coalescent simulations
The SNPs output is saved to an HDF5 database file.
End of explanation
"""
from itertools import groupby
# load sample names from SNPs file
tool = ipa.snps_extracter(SNPS)
# group names by prefix before '-'
groups = groupby(tool.names, key=lambda x: x.split("-")[0])
# arrange into a dictionary
IMAP = {i[0]: list(i[1]) for i in groups}
# show the dict
IMAP
"""
Explanation: [optional] Build an IMAP dictionary
A dictionary mapping of population names to sample names.
End of explanation
"""
dist = ipa.distance(
    data=SNPS,
    imap=IMAP,
    minmap={i: 1 for i in IMAP},
    mincov=0.5,
    impute_method=None,
)
# infer the distance matrix from sequence data
dist.run()
# show the first few data cells
dist.dists.iloc[:5, :12]
"""
Explanation: calculate distances with missing values filtered and/or imputed, and corrected
The correction applies a model of sequence substitution where more complex models can apply a greater penalty for unobserved changes (e.g., HKY or GTR). This allows you to use either SNPs or SEQUENCES as input. Here we are using SNPs. More on this later... (TODO).
End of explanation
"""
tool = ipa.neighbor_joining(matrix=dist.dists)
"""
Explanation: Infer a tree from distance matrix
End of explanation
"""
# the matrix to visualize; gtree is assumed to be the rooted tree
# inferred in the neighbor-joining step above
matrix = dist.dists

# create a canvas
canvas = toyplot.Canvas(width=500, height=450);

# add tree
axes = canvas.cartesian(bounds=("10%", "35%", "10%", "90%"))
gtree.draw(axes=axes, tip_labels=True, tip_labels_align=True)

# add matrix
table = canvas.table(
    rows=matrix.shape[0],
    columns=matrix.shape[1],
    margin=0,
    bounds=("40%", "95%", "9%", "91%"),
)
colormap = toyplot.color.brewer.map("BlueRed")

# apply a color to each cell in the table
for ridx in range(matrix.shape[0]):
    for cidx in range(matrix.shape[1]):
        cell = table.cells.cell[ridx, cidx]
        cell.style = {
            "fill": colormap.colors(matrix.iloc[ridx, cidx], 0, 1),
        }

# style the gaps between cells
table.body.gaps.columns[:] = 3
table.body.gaps.rows[:] = 3

# hide axes coordinates
axes.show = False
# load the snp data into the distance tool, this time imputing
# missing values by sampling from population allele frequencies
dist = ipa.distance(
    data=SNPS,
    imap=IMAP,
    minmap={i: 1 for i in IMAP},
    mincov=0.5,
    impute_method="sample",
    subsample_snps=False,
)
dist.run()
"""
Explanation: Draw tree and distance matrix
End of explanation
"""
# save to a CSV file
dist.dists.to_csv("distances.csv")
# show the upper corner
dist.dists.head()
"""
Explanation: save results
End of explanation
"""
toyplot.matrix(
    dist.dists,
    bshow=False,
    tshow=False,
    rlocator=toyplot.locator.Explicit(
        range(len(dist.names)),
        sorted(dist.names),
    ));
"""
Explanation: Draw the matrix
End of explanation
"""
# get list of concatenated names from each group
ordered_names = []
for group in dist.imap.values():
    ordered_names += group

# reorder matrix to match name order
ordered_matrix = dist.dists[ordered_names].T[ordered_names]

toyplot.matrix(
    ordered_matrix,
    bshow=False,
    tshow=False,
    rlocator=toyplot.locator.Explicit(
        range(len(ordered_names)),
        ordered_names,
    ));
"""
Explanation: Draw matrix reordered to match groups in imap
End of explanation
"""
MTG/sms-tools | notebooks/E2-Sinusoids-and-DFT.ipynb | agpl-3.0 |
import numpy as np
# E2 - 1.1: Complete function gen_sine()
def gen_sine(A, f, phi, fs, t):
"""Generate a real sinusoid given its amplitude, frequency, initial phase, sampling rate, and duration.
Args:
A (float): amplitude of the sinusoid
f (float): frequency of the sinusoid in Hz
phi (float): initial phase of the sinusoid in radians
fs (float): sampling frequency of the sinusoid in Hz
t (float): duration of the sinusoid (is second)
Returns:
np.array: array containing generated sinusoid
"""
### your code here
"""
Explanation: Exercise 2: Sinusoids and the DFT
Doing this exercise you will get a better understanding of the basic elements and operations that take place in the Discrete Fourier Transform (DFT). There are five parts: 1) Generate a sinusoid, 2) Generate a complex sinusoid, 3) Implement the DFT, 4) Implement the IDFT, and 5) Compute the magnitude spectrum of an input sequence.
Relevant Concepts
A real sinusoid in discrete time domain can be expressed by:
\begin{equation}
x[n] = A\cos(2 \pi fnT + \varphi)
\end{equation}
where, $x$ is the array of real values of the sinusoid, $n$ is an integer value expressing the time index, $A$ is the amplitude value of the sinusoid, $f$ is the frequency value of the sinusoid in Hz, $T$ is the sampling period equal to $1/fs$, fs is the sampling frequency in Hz, and $\varphi$ is the initial phase of the sinusoid in radians.
A complex sinusoid in discrete time domain can be expressed by:
\begin{equation}
\bar{x}[n] = Ae^{j(\omega nT + \varphi)} = A\cos(\omega nT + \varphi)+ j A\sin(\omega nT + \varphi)
\end{equation}
where, $\bar{x}$ is the array of complex values of the sinusoid, $n$ is an integer value expressing the time index, $A$ is the amplitude value of the sinusoid, $e$ is the complex exponential number, $\omega$ is the frequency of the sinusoid in radians per second (equal to $2 \pi f$), $T$ is the sampling period equal $1/fs$, fs is the sampling frequency in Hz and $\varphi$ is the initial phase of the sinusoid in radians.
The $N$ point DFT of a sequence of real values $x$ (a sound) can be expressed by:
\begin{equation}
X[k] = \sum_{n=0}^{N-1} x[n]e^{-j2 \pi kn/N} \hspace{1cm} k=0,...,N-1
\end{equation}
where $n$ is an integer value expressing the discrete time index, $k$ is an integer value expressing the discrete frequency index, and $N$ is the length of the DFT.
The IDFT of a spectrum $X$ of length $N$ can be expressed by:
\begin{equation}
x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]e^{j2 \pi kn/N} \hspace{1cm} n=0,...,N-1
\end{equation}
where, $n$ is an integer value expressing the discrete time index, $k$ is an integer value expressing the discrete frequency index, and $N$ is the length of the spectrum $X$.
The magnitude of a complex spectrum $X$ is obtained by taking its absolute value: $|X[k]| $
Part 1 - Generate a sinusoid
The function gen_sine() should generate a real sinusoid (use np.cos()) given its amplitude A, frequency f (Hz), initial phase phi (radians), sampling rate fs (Hz) and duration t (seconds).
All the input arguments to this function (A, f, phi, fs and t) are real numbers such that A, t and fs are positive, and fs > 2*f to avoid aliasing. The function should return a numpy array x of the generated sinusoid.
Use the function cos of the numpy package to compute the sinusoidal values.
End of explanation
"""
# E2 - 1.2: Call the function gen_sine() with the values proposed above, plot and play the output sinusoid
import IPython.display as ipd
### your code here
"""
Explanation: If you use A=1.0, f = 10.0, phi = 1.0, fs = 50 and t = 0.1 as input to the function gen_sine() the output numpy array should be:
array([ 0.54030231, -0.63332387, -0.93171798, 0.05749049, 0.96724906])
To generate a sinewave that you can hear, it should be longer and with a higher sampling rate. For example you can use A=1.0, f = 440.0, phi = 1.0, fs = 5000 and t = 0.5. To play it, import the IPython.display package and use ipd.display(ipd.Audio(data=x, rate=fs)).
End of explanation
"""
# E2 - 2.1: Complete the function gen_complex_sine()
def gen_complex_sine(k, N):
"""Generate one of the complex sinusoids used in the DFT from its frequency index and the DFT lenght.
Args:
k (integer): frequency index of the complex sinusoid of the DFT
        N (integer): length of complex sinusoid, DFT length, in samples
Returns:
np.array: array with generated complex sinusoid (length N)
"""
### your code here
"""
Explanation: Part 2 - Generate a complex sinusoid
The gen_complex_sine() function should generate the complex sinusoid that is used in DFT computation of length N (samples), corresponding to the frequency index k. [Note that in the DFT we use the conjugate of this complex sinusoid.]
The amplitude of such a complex sinusoid is 1, the length is N, and the frequency in radians is 2*pi*k/N.
The input arguments to the function are two positive integers, k and N, such that k < N-1. The function should return c_sine, a numpy array of the complex sinusoid. Use the function exp() of the numpy package to compute the complex sinusoidal values.
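As a sketch, the complex sinusoid described above (amplitude 1, frequency $2\pi k/N$ radians) can be generated in one line with np.exp:

```python
import numpy as np

def complex_sine(k, N):
    # e^{j*2*pi*k*n/N} for n = 0 .. N-1; amplitude 1, frequency index k
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * k * n / N)
```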
End of explanation
"""
# E2 - 2.2: Call gen_complex_sine() with the values suggested above and plot the real and imaginary parts of the
# output complex sinusoid
### your code here
"""
Explanation: If you run the function gen_complex_sine() using k=1 and N=5, it should return the following numpy array:
array([ 1. + 0.j, 0.30901699 + 0.95105652j, -0.80901699 + 0.58778525j, -0.80901699 - 0.58778525j, 0.30901699 - 0.95105652j])
End of explanation
"""
# E2 - 3.1: Complete the function dft()
def dft(x):
"""Compute the DFT of a signal.
Args:
x (numpy array): input sequence of length N
Returns:
np.array: N point DFT of the input sequence x
"""
## Your code here
"""
Explanation: Part 3 - Implement the discrete Fourier transform (DFT)
The function dft() should implement the discrete Fourier transform (DFT) equation given above. Given a sequence x of length N, the function should return its spectrum of length N with the frequency indexes ranging from 0 to N-1.
The input argument to the function is a numpy array x and the function should return a numpy array X, the DFT of x.
End of explanation
"""
# E2 - 3.2: Call dft() with the values suggested above and plot the real and imaginary parts of output spectrum
### your code here
"""
Explanation: If you run dft() using as input x = np.array([1, 2, 3, 4]), the function should return the following numpy array:
array([10.0 + 0.0j, -2.0 + 2.0j, -2.0 - 9.79717439e-16j, -2.0 - 2.0j])
Note that you might not get an exact 0 in the output because of the small numerical errors due to the limited precision of the data in your computer. Usually these errors are of the order 1e-15 depending on your machine.
End of explanation
"""
# E2 - 4.1: Complete the function idft()
def idft(X):
"""Compute the inverse-DFT of a spectrum.
Args:
X (np.array): frequency spectrum (length N)
Returns:
np.array: N point IDFT of the frequency spectrum X
"""
### Your code here
"""
Explanation: Part 4 - Implement the inverse discrete Fourier transform (IDFT)
The function idft() should implement the inverse discrete Fourier transform (IDFT) equation given above. Given a frequency spectrum X of length N, the function should return its IDFT x, also of length N. Assume that the frequency index of the input spectrum ranges from 0 to N-1.
The input argument to the function is a numpy array X of the frequency spectrum and the function should return a numpy array of the IDFT of X.
Remember to scale the output appropriately.
End of explanation
"""
# E2 - 4.2: Plot input spectrum (real and imaginary parts) suggested above, call idft(), and plot output signal
# (real and imaginary parts)
### Your code here
"""
Explanation: If you run idft() with the input X = np.array([1, 1, 1, 1]), the function should return the following numpy array:
array([ 1.00000000e+00 +0.00000000e+00j, -4.59242550e-17 +5.55111512e-17j, 0.00000000e+00 +6.12323400e-17j, 8.22616137e-17 +8.32667268e-17j])
Notice that the output numpy array is essentially [1, 0, 0, 0]. Instead of exact 0 we get very small numerical values of the order of 1e-15, which can be ignored. Also, these small numerical errors are machine dependent and might be different in your case.
In addition, an interesting test of the IDFT function can be done by providing the output of the DFT of a sequence as the input to the IDFT. See if you get back the original time domain sequence.
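For example, the round-trip check suggested above can be done with np.allclose (shown here with NumPy's built-in FFT routines standing in for your dft() and idft()):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.fft.fft(x)        # stand-in for dft(x)
x_back = np.fft.ifft(X)  # stand-in for idft(X)
print(np.allclose(x_back, x))  # True: the original sequence is recovered
```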
End of explanation
"""
# E2 - 5.1: Complete the function gen_mag_spec()
def gen_mag_spec(x):
"""Compute magnitude spectrum of a signal.
Args:
x (np.array): input sequence of length N
Returns:
np.array: magnitude spectrum of the input sequence x (length N)
"""
### your code here
"""
Explanation: Part 5 - Compute the magnitude spectrum
The function gen_mag_spec() should compute the magnitude spectrum of an input sequence x of length N. The function should return an N point magnitude spectrum with frequency index ranging from 0 to N-1.
The input argument to the function is a numpy array x and the function should return a numpy array of the magnitude spectrum of x.
End of explanation
"""
import IPython.display as ipd
import matplotlib.pyplot as plt
# E2 - 5.2: Plot input cosine signal suggested above, call gen_mag_spec(), and plot the output result
### Your code here
"""
Explanation: If you run gen_mag_spec() using as input x = np.array([1, 2, 3, 4]), it should return the following numpy array:
array([10.0, 2.82842712, 2.0, 2.82842712])
For a more realistic use of gen_mag_spec() use as input a longer signal, such as x = np.cos(2*np.pi*200.0*np.arange(512)/1000), and to get a visual representation of the input and output, import the matplotlib.pyplot package and use plt.plot(x) and plt.plot(X).
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment03/NumpyEx04.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Numpy Exercise 4
Imports
End of explanation
"""
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
"""
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
"""
def complete_deg(n):
    # n x n diagonal matrix with n-1 on the diagonal and zeros elsewhere
    return ((n - 1) * np.identity(n)).astype(int)
print(complete_deg(4))
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
"""
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
"""
def complete_adj(n):
ones = np.ones((n,n))
diag = np.identity(n)
adj = ones-diag
adj = adj.astype(dtype=np.int)
return adj
print(complete_adj(4))
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
"""
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
"""
print(np.linalg.eigvals(complete_deg(4)))
print(np.linalg.eigvals(complete_adj(4)))
L = (np.linalg.eigvals(complete_deg(5) - complete_adj(5)))
J = L.astype(dtype=np.int)
print(L)
print(J)
"""
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation
"""
|
AllenDowney/ThinkStats2 | workshop/sampling_soln.ipynb | gpl-3.0 | %matplotlib inline
import numpy
import scipy.stats
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# seed the random number generator so we all get the same results
numpy.random.seed(18)
"""
Explanation: Random Sampling
Copyright 2016 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
weight = scipy.stats.lognorm(0.23, 0, 70.8)
weight.mean(), weight.std()
"""
Explanation: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:
End of explanation
"""
xs = numpy.linspace(20, 160, 100)
ys = weight.pdf(xs)
plt.plot(xs, ys, linewidth=4, color='C0')
plt.xlabel('weight (kg)')
plt.ylabel('PDF');
"""
Explanation: Here's what that distribution looks like:
End of explanation
"""
def make_sample(n=100):
sample = weight.rvs(n)
return sample
"""
Explanation: make_sample draws a random sample from this distribution. The result is a NumPy array.
End of explanation
"""
sample = make_sample(n=100)
sample.mean(), sample.std()
"""
Explanation: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
End of explanation
"""
def sample_stat(sample):
return sample.mean()
"""
Explanation: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean:
End of explanation
"""
def compute_sampling_distribution(n=100, iters=1000):
stats = [sample_stat(make_sample(n)) for i in range(iters)]
return numpy.array(stats)
"""
Explanation: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
End of explanation
"""
sample_means = compute_sampling_distribution(n=100, iters=1000)
"""
Explanation: The next line runs the simulation 1000 times and puts the results in
sample_means:
End of explanation
"""
plt.hist(sample_means, color='C1', alpha=0.5)
plt.xlabel('sample mean (n=100)')
plt.ylabel('count');
"""
Explanation: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
End of explanation
"""
sample_means.mean()
"""
Explanation: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
End of explanation
"""
std_err = sample_means.std()
std_err
"""
Explanation: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
End of explanation
"""
conf_int = numpy.percentile(sample_means, [5, 95])
conf_int
"""
Explanation: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results:
End of explanation
"""
def plot_sampling_distribution(n, xlim=None):
"""Plot the sampling distribution.
n: sample size
xlim: [xmin, xmax] range for the x axis
"""
sample_stats = compute_sampling_distribution(n, iters=1000)
se = numpy.std(sample_stats)
ci = numpy.percentile(sample_stats, [5, 95])
plt.hist(sample_stats, color='C1', alpha=0.5)
plt.xlabel('sample statistic')
plt.xlim(xlim)
text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))
text(0.03, 0.85, 'SE %0.2f' % se)
plt.show()
def text(x, y, s):
"""Plot a string at a given location in axis coordinates.
x: coordinate
y: coordinate
s: string
"""
ax = plt.gca()
plt.text(x, y, s,
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes)
"""
Explanation: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
End of explanation
"""
plot_sampling_distribution(100)
"""
Explanation: Here's a test run with n=100:
End of explanation
"""
def sample_stat(sample):
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sampling_distribution, n=slider, xlim=fixed([55, 95]));
"""
Explanation: Now we can use interact to run plot_sampling_distribution with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
End of explanation
"""
def sample_stat(sample):
# TODO: replace the following line with another sample statistic
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sampling_distribution, n=slider, xlim=fixed([0, 100]));
"""
Explanation: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1: Fill in sample_stat below with any of these statistics:
Standard deviation of the sample.
Coefficient of variation, which is the sample standard deviation divided by the sample mean.
Min or Max
Median (which is the 50th percentile)
10th or 90th percentile.
Interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
NumPy array methods you might find useful include std, min, max, and percentile.
Depending on the results, you might want to adjust xlim.
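For instance, one way to fill in sample_stat is with the interquartile range, sketched here using np.percentile:

```python
import numpy as np

def sample_stat(sample):
    # interquartile range: 75th percentile minus 25th percentile
    q75, q25 = np.percentile(sample, [75, 25])
    return q75 - q25
```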
End of explanation
"""
class Resampler(object):
"""Represents a framework for computing sampling distributions."""
def __init__(self, sample, xlim=None):
"""Stores the actual sample."""
self.sample = sample
self.n = len(sample)
self.xlim = xlim
def resample(self):
"""Generates a new sample by choosing from the original
sample with replacement.
"""
new_sample = numpy.random.choice(self.sample, self.n, replace=True)
return new_sample
def sample_stat(self, sample):
"""Computes a sample statistic using the original sample or a
simulated sample.
"""
return sample.mean()
def compute_sampling_distribution(self, iters=1000):
"""Simulates many experiments and collects the resulting sample
statistics.
"""
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sampling_distribution(self):
"""Plots the sampling distribution."""
sample_stats = self.compute_sampling_distribution()
se = sample_stats.std()
ci = numpy.percentile(sample_stats, [5, 95])
plt.hist(sample_stats, color='C1', alpha=0.5)
plt.xlabel('sample statistic')
plt.xlim(self.xlim)
text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))
text(0.03, 0.85, 'SE %0.2f' % se)
plt.show()
"""
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't be doing statistical inference in the first place!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
End of explanation
"""
def interact_func(n, xlim):
sample = weight.rvs(n)
resampler = Resampler(sample, xlim=xlim)
resampler.plot_sampling_distribution()
"""
Explanation: The following function instantiates a Resampler and runs it.
End of explanation
"""
interact_func(n=100, xlim=[50, 100])
"""
Explanation: Here's a test run with n=100
End of explanation
"""
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(interact_func, n=slider, xlim=fixed([50, 100]));
"""
Explanation: Now we can use interact_func in an interaction:
End of explanation
"""
# Solution goes here
class StdResampler(Resampler):
"""Computes the sampling distribution of the standard deviation."""
def sample_stat(self, sample):
"""Computes a sample statistic using the original sample or a
simulated sample.
"""
return sample.std()
"""
Explanation: Exercise 2: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
End of explanation
"""
def interact_func2(n, xlim):
sample = weight.rvs(n)
resampler = StdResampler(sample, xlim=xlim)
resampler.plot_sampling_distribution()
interact_func2(n=100, xlim=[0, 100])
"""
Explanation: Test your code using the cell below:
End of explanation
"""
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(interact_func2, n=slider, xlim=fixed([0, 100]));
"""
Explanation: When your StdResampler is working, you should be able to interact with it:
End of explanation
"""
female_weight = scipy.stats.lognorm(0.23, 0, 70.8)
female_weight.mean(), female_weight.std()
"""
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):
End of explanation
"""
male_weight = scipy.stats.lognorm(0.20, 0, 87.3)
male_weight.mean(), male_weight.std()
"""
Explanation: And here's the men's distribution:
End of explanation
"""
female_sample = female_weight.rvs(100)
male_sample = male_weight.rvs(100)
"""
Explanation: I'll simulate a sample of 100 men and 100 women:
End of explanation
"""
male_sample.mean() - female_sample.mean()
"""
Explanation: The difference in means should be about 17 kg, but will vary from one random sample to the next:
End of explanation
"""
def CohenEffectSize(group1, group2):
"""Compute Cohen's d.
group1: Series or NumPy array
group2: Series or NumPy array
returns: float
"""
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
"""
Explanation: Here's the function that computes Cohen's effect size again:
End of explanation
"""
CohenEffectSize(male_sample, female_sample)
"""
Explanation: The difference in weight between men and women is about 1 standard deviation:
End of explanation
"""
class CohenResampler(Resampler):
def __init__(self, group1, group2, xlim=None):
self.group1 = group1
self.group2 = group2
self.xlim = xlim
def resample(self):
n, m = len(self.group1), len(self.group2)
group1 = numpy.random.choice(self.group1, n, replace=True)
group2 = numpy.random.choice(self.group2, m, replace=True)
return group1, group2
def sample_stat(self, groups):
group1, group2 = groups
return CohenEffectSize(group1, group2)
"""
Explanation: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
End of explanation
"""
resampler = CohenResampler(male_sample, female_sample)
resampler.plot_sampling_distribution()
"""
Explanation: Now we can instantiate a CohenResampler and plot the sampling distribution.
End of explanation
"""
|
jasonkitbaby/udacity-homework | boston_housing/boston_housing.ipynb | apache-2.0 | # 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
# 检查你的Python版本
from sys import version_info
if version_info.major != 2 and version_info.minor != 7:
raise Exception('请使用Python 2.7来完成此项目')
# 让结果在notebook中显示
%matplotlib inline
# 载入波士顿房屋的数据集
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# 完成
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
"""
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation and Validation
Project 1: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! Some example code has already been provided in this file, but you will need to implement additional functionality to make the project run successfully. Unless explicitly required, you do not need to modify any of the given code. Headings that begin with "Coding Exercise" indicate that the content that follows contains functionality you must implement. Each part comes with detailed instructions, and the parts you need to implement are marked with TODO in the comments. Please read all the hints carefully!
Besides implementing code, you must also answer some questions related to the project and your implementation. Each question you need to answer is titled 'Question X'. Please read each question carefully and write a complete answer in the 'Answer' text box that follows it. Your project will be graded based on your answers to the questions and the functionality of your code.
Tip: Code and Markdown cells can be run with the Shift + Enter shortcut. In addition, Markdown cells can be edited by double-clicking.
Step 1. Importing the Data
In this project, you will use data on houses in the suburbs of Boston, Massachusetts to train and test a model, and evaluate its performance and predictive power. A model trained well on this data can be used to make specific predictions about a house, in particular its value. Such a predictive model proves very valuable for the daily work of people such as real estate agents.
The dataset for this project comes from the UCI Machine Learning Repository (the dataset has since been taken offline). The Boston housing data was collected starting in 1978 and comprises 506 data points covering 14 features of houses in different suburbs of Boston. This project makes the following changes to the original dataset:
- 16 data points with a 'MEDV' value of 50.0 have been removed. They most likely contain missing or censored values.
- 1 data point with an 'RM' value of 8.78 has been removed as an outlier.
- For this project, only the 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' features are essential; the remaining irrelevant features have been removed.
- The 'MEDV' feature has undergone the necessary mathematical transformation to reflect the effect of 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with some Python libraries required for this project. If the size of the dataset is returned successfully, the dataset has been loaded.
"""
#TODO 1
# Goal: compute the minimum price
minimum_price = np.min(prices)
# Goal: compute the maximum price
maximum_price = np.max(prices)
# Goal: compute the mean price
mean_price = np.mean(prices)
# Goal: compute the median price
median_price = np.median(prices)
# Goal: compute the standard deviation of prices
std_price = np.std(prices)
# Goal: print the computed results
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
"""
Explanation: Step 2. Analyzing the Data
In the first part of the project, you will make initial observations on the Boston real-estate data and present your analysis. Exploring the data to become familiar with it will help you better understand and interpret your results.
Since the ultimate goal of this project is to build a model that predicts house values, we need to separate the dataset into features and the target variable.
- The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point.
- The target variable, 'MEDV', is the variable we want to predict.
They are stored in the variables features and prices, respectively.
Coding Exercise 1: Basic Statistical Calculations
Your first coding exercise is to compute descriptive statistics about the Boston housing prices. We have already imported numpy for you; you need to use this library to perform the necessary calculations. These statistics are very important for analyzing the model's predictions.
In the code below, you will:
- Compute the minimum, maximum, mean, median, and standard deviation of 'MEDV' in prices;
- Store the results in the corresponding variables.
"""
# Load matplotlib for plotting
import matplotlib.pyplot as plt
# Render the output figures at a higher resolution
%config InlineBackend.figure_format = 'retina'
# Adjust the figure width and height
plt.figure(figsize=(16, 4))
for i, key in enumerate(['RM', 'LSTAT', 'PTRATIO']):
plt.subplot(1, 3, i+1)
plt.xlabel(key)
plt.scatter(data[key], data['MEDV'], alpha=0.5)
"""
Explanation: Question 1 - Feature Observation
As stated earlier, we focus on three values in this project: 'RM', 'LSTAT', and 'PTRATIO'. For each data point:
- 'RM' is the average number of rooms per house in the area;
- 'LSTAT' is the percentage of homeowners in the area considered lower class (working but with meager income);
- 'PTRATIO' is the ratio of students to teachers in the area's primary and secondary schools (students/teacher).
Intuitively, for each of the three features above, do you think increasing its value would increase or decrease the value of 'MEDV'? Justify each answer.
Hint: Would you expect a house with an 'RM' value of 6 to be worth more or less than a house with an 'RM' value of 7?
Question 1 - Answer:
Increasing RM increases MEDV: house price is positively correlated with the number of rooms.
Increasing LSTAT decreases MEDV: the area's house prices are related to income level; if most homeowners in an area have low incomes, prices in that area are likely lower.
Increasing PTRATIO decreases MEDV: more students per teacher means scarcer educational resources in the area, which tends to lower house prices.
"""
# TODO 2
# Hint: import train_test_split
from sklearn.model_selection import train_test_split
def generate_train_and_test(X, y):
    """Shuffle and split the data into training and test sets"""
X_train,X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=0)
return (X_train, X_test, y_train, y_test)
X_train, X_test, y_train, y_test = generate_train_and_test(features, prices)
"""
Explanation: Coding Exercise 2: Data Splitting and Shuffling
Next, you need to split the Boston housing dataset into training and testing subsets. The data is usually shuffled during this process to remove any bias introduced by the ordering of the dataset.
In the code below, you will:
Use train_test_split from sklearn.model_selection to split both features and prices into training and testing subsets.
- Split ratio: 80% of the data for training, 20% for testing;
- Set the random_state of train_test_split to a fixed value to ensure reproducible results.
"""
# TODO 3
# Hint: import r2_score
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
    """Calculate and return the score of the predicted values against the true values"""
    score = r2_score(y_true, y_predict)
    return score
# TODO 3 (optional)
# Do not import any library that computes the coefficient of determination
def performance_metric2(y_true, y_predict):
    """Calculate and return the score of the predicted values against the true values"""
score = None
return score
"""
Explanation: Question 2 - Training and Testing
What are the benefits of splitting a dataset into training and testing subsets in a certain ratio for a learning algorithm?
What are the drawbacks of testing on data the model has already seen, for example part of the training set?
Hint: What problems arise if there is no data to test the model on?
Question 2 - Answer:
Part of the data is used for training, i.e., fitting parameters, and part is used for testing, to verify whether the model is accurate.
Testing on data the model has already seen yields deceptively accurate test results while real predictions remain inaccurate, because the seen data participated in training and the fitted parameters are tailored closely to it.
Step 3. Model Performance Metrics
In the third part of the project, you will learn the necessary tools and techniques to make your model produce predictions. Accurately measuring each model's performance with these tools and techniques will greatly strengthen your confidence in your predictions.
Coding Exercise 3: Defining a Performance Metric
It is hard to judge how good or bad a model is without quantitatively evaluating its performance on training and testing. We usually define metrics computed from some error or goodness-of-fit measure. In this project, you will quantify model performance by computing the coefficient of determination, R<sup>2</sup>. The coefficient of determination is a very common statistic in regression analysis and is often treated as a standard for measuring how good a model's predictions are.
R<sup>2</sup> ranges from 0 to 1 and represents the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than simply predicting the mean, while a model with an R<sup>2</sup> of 1 predicts the target variable perfectly. Values between 0 and 1 indicate what percentage of the target variable's variation can be explained by the features. A model can also have a negative R<sup>2</sup>, in which case its predictions are sometimes far worse than simply predicting the mean of the target variable.
In the performance_metric function below, you will:
- Use r2_score from sklearn.metrics to compute the R<sup>2</sup> of y_true and y_predict as the performance score.
- Store the score in the score variable.
or
(Optional) Without using any external library, compute R<sup>2</sup> from its definition. This can also help you better understand under what circumstances the coefficient of determination equals 0 or 1.
"""
# Compute the coefficient of determination of this model's predictions
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
"""
Explanation: Question 3 - Goodness of Fit
Suppose a dataset has five data points and a model makes the following predictions of the target variable:
| True value | Predicted value |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Do you think this model has successfully captured the variation of the target variable? If so, explain why; if not, give your reasons.
Hint: Run the code below and use the performance_metric function to compute the model's coefficient of determination.
"""
# Generate learning curves for different training set sizes and maximum depths
vs.ModelLearning(X_train, y_train)
"""
Explanation: Question 3 - Answer:
The model has successfully captured the variation of the target variable: the R<sup>2</sup> value is close to 1.
Step 4. Analyzing Model Performance
In the fourth part of the project, we will look at model performance on the training and validation sets under different parameters. Here we focus on one specific algorithm (a decision tree with pruning, though that is not the point of this project) and one of its parameters, 'max_depth'. We train on the full training set with different 'max_depth' values and observe how this parameter affects model performance. Plotting the model's performance is very helpful for analysis, as it reveals behavior that the results alone cannot show.
Learning Curves
The code cell below outputs four figures showing a decision tree model's performance at different maximum depths. Each curve visually shows how the model's training and validation scores change as the amount of training data increases, scored with the coefficient of determination R<sup>2</sup>. The shaded region of each curve represents its uncertainty (measured by the standard deviation).
Run the code below and use the resulting figures to answer the question that follows.
"""
# Generate complexity curves for different maximum-depth parameters
vs.ModelComplexity(X_train, y_train)
"""
Explanation: Question 4 - Learning Curves
Choose one of the figures above and state its maximum depth. As the amount of training data increases, how does the training score change? What about the validation score? Would more training data effectively improve the model's performance?
Hint: Do the learning-curve scores eventually converge to particular values?
Question 4 - Answer:
Figure 1 (maximum depth 1)
The training score stays low throughout, indicating that the model does not fit well. More training data will not improve the model's performance; its complexity needs to be increased instead.
Complexity Curves
The code cell below outputs a figure showing the performance of a trained and validated decision tree model at different maximum depths. The figure contains two curves, one for the training set and one for the validation set. As with the learning curves, the shaded regions represent uncertainty, and both the training and testing scores use the performance_metric function.
Run the code below and use the resulting figure to answer the two questions that follow.
"""
# TODO 4
# Hint: import 'KFold', 'DecisionTreeRegressor', 'make_scorer', and 'GridSearchCV'
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
    """Perform a grid search over the input data [X, y] to find the optimal decision tree model"""
    cross_validator = KFold(n_splits=2, random_state=None, shuffle=False)
    regressor = DecisionTreeRegressor()
    params = {"max_depth": [1,2,3,4,5,6,7,8,9,10]}
    scoring_fnc = make_scorer(performance_metric)
    grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cross_validator)
    # Perform the grid search over the input data [X, y]
    grid = grid.fit(X, y)
    # print pd.DataFrame(grid.cv_results_)
    # Return the best model found by the grid search
    return grid.best_estimator_
"""
Explanation: Question 5 - The Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does its prediction suffer from high bias or high variance? What about a maximum depth of 10? Which features of the figure support your conclusion?
Hint: How do you tell whether a model has a high-bias or a high-variance problem?
Question 5 - Answer:
At a maximum depth of 1 the model has a bias problem: it is not expressive enough and underfits, so its complexity should be increased.
At a maximum depth of 10 the model has a variance problem: it fits the training data very closely but scores poorly on validation, i.e., it overfits.
Question 6 - Guessing the Optimal Model
What maximum depth do you think would let the model best predict unseen data? What is your reasoning?
Question 6 - Answer:
A maximum depth of 4 predicts unseen data best.
Beyond 4, the validation score does not improve meaningfully while the training score keeps climbing: overfitting.
Before 4, both the validation and training scores are still improving: underfitting.
Step 5. Selecting Optimal Parameters
Question 7 - Grid Search
What is grid search? How can it be used to optimize a model?
Question 7 - Answer:
Grid search is a method that optimizes model performance by exhaustively trying a given set of parameter combinations.
The feasible range of each parameter variable is divided into a series of values (e.g., from small to large); the computer then evaluates the objective (error) value for each combination of parameter values in order and compares them to pick the best, yielding the minimum objective value within the range and its corresponding best parameter values.
This evaluation approach essentially guarantees that the solution found is close to the global optimum within the search grid and avoids major errors.
Taking decision trees as an example: to fit and predict better we need to tune their parameters, typically the maximum depth of the tree. We therefore supply a list of maximum-depth values, such as {'max_depth': [1,2,3,4,5]}, evaluate the validation score at each depth, and select the optimal value from that list.
Question 8 - Cross-Validation
What is k-fold cross-validation?
How does GridSearchCV combine cross-validation to select the best parameter combination?
What does the 'cv_results_' attribute of GridSearchCV tell us?
What problems arise if grid search is done without cross-validation? How does cross-validation solve them?
Hint: Adding print pd.DataFrame(grid.cv_results_) at the end of the fit_model function below can help you inspect more information.
Question 8 - Answer:
The dataset is split into a training set and a test set.
The training set is used to train the model.
The test set is used for the final evaluation of the model.
A typical split ratio is 8:2.
What is k-fold cross-validation?
K-fold cross-validation (k-CV) is an extension of double cross-validation.
The training set is split into k subsets; one subset is held out as the validation set while the remaining k-1 subsets are used for training, and the trained model is then scored on the validation set.
k-CV repeats this step k times, each time choosing a different subset as the validation set.
For example, with 10 subsets in total: the first round uses subset 1 for validation and the other 9 for training; the second round uses subset 2 for validation and the other 9 for training; and so on, for 10 rounds of training and validation in total, so that every subset serves as the validation set exactly once.
How does GridSearchCV combine cross-validation to select the best parameter combination?
First, list all candidate parameter values of the model, e.g., the polynomial degree D for a linear model, or the tree depth D for a decision tree.
For each candidate parameter, run k rounds of cross-validation and take the average score as the model's score.
Among these scores, select the parameter D that achieves the best score.
What does the 'cv_results_' attribute of GridSearchCV tell us?
The cv_results_ attribute reports the results and the various metrics of every cross-validation run.
What problems arise if grid search is done without cross-validation? How does cross-validation solve them?
Without cross-validation, grid search merely picks the best value within each range on a single split, so scores can be accidentally inflated or deflated (underfitting or overfitting issues caused by the particular split).
Cross-validation avoids the randomness introduced by a single dataset split: by training on different training/validation splits and averaging the k scores into a final result, it keeps the evaluation objective and accurate.
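A minimal sketch of the k-fold splitting idea described above (indices only; sklearn's KFold provides the same functionality, so this is purely illustrative):

```python
import numpy as np

def kfold_splits(n_samples, k):
    # Split indices into k folds; each fold serves once as the validation set
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        valid = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, valid
```

Each (train, valid) pair trains one model; averaging the k validation scores gives the cross-validation score for a parameter setting.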
Coding Exercise 4: Training the Optimal Model
In this exercise, you will put together what you have learned to train a model using the decision tree algorithm. To obtain an optimal model, you will train it with grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a supervised learning algorithm.
In the fit_model function below, you will:
1. Define the 'cross_validator' variable: use KFold from sklearn.model_selection to create a cross-validation generator object;
2. Define the 'regressor' variable: use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor;
3. Define the 'params' variable: create a dictionary for the 'max_depth' parameter whose value is an array from 1 to 10;
4. Define the 'scoring_fnc' variable: use make_scorer from sklearn.metrics to create a scoring function;
pass 'performance_metric' as an argument to this function;
5. Define the 'grid' variable: use GridSearchCV from sklearn.model_selection to create a grid search object; pass the variables 'regressor', 'params', 'scoring_fnc', and 'cross_validator' to this object's constructor.
If you are unfamiliar with how Python functions define and pass default parameters, you can refer to this MIT course video.
"""
# TODO 4 (optional)
'''
Do not use any sklearn library other than DecisionTreeRegressor
Hint: you may need to implement the cross_val_score function below
def cross_val_score(estimator, X, y, scoring = performance_metric, cv=3):
    """ Return an array of model scores for each cross-validation fold """
    scores = [0,0,0]
    return scores
'''
def fit_model2(X, y):
    """Perform a grid search over the input data [X, y] to find the optimal decision tree model"""
    # The optimal model corresponding to the best cross-validation score
best_estimator = None
return best_estimator
"""
Explanation: Coding Exercise 4: Training the Optimal Model (optional)
In this exercise, you will put together what you have learned to train a model using the decision tree algorithm. To obtain an optimal model, you will train it with grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a supervised learning algorithm.
In the fit_model2 function below, you will:
Iterate over the candidate values 1-10 of the 'max_depth' parameter and build the corresponding models
Compute the cross-validation score of the current model
Return the model corresponding to the best cross-validation score
"""
# 基于熟练数据,获得最优模型
optimal_reg = fit_model(X_train, y_train)
# Print the 'max_depth' parameter of the optimal model
print "Parameter 'max_depth' is {} for the optimal model.".format(optimal_reg.get_params()['max_depth'])
"""
Explanation: Question 9 - Optimal model
What is the maximum depth of the optimal model? Does this answer match the guess you made in Question 6?
Run the code in the cell below to fit the decision-tree regressor to the training data and obtain the optimized model.
End of explanation
"""
# Data for three clients
client_data = [[5, 17, 15], # Client 1
               [4, 32, 22], # Client 2
               [8, 3, 12]]  # Client 3
# Make predictions
predicted_price = optimal_reg.predict(client_data)
for i, price in enumerate(predicted_price):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
"""
Explanation: Question 9 - Answer:
The optimal answer is 4, consistent with the estimate.
Step 6. Making predictions
Once we have trained a model on data, it can be used to make predictions on new data. With the decision-tree regressor, the model has learned what questions to ask about new input data, and it returns a prediction for the target variable. You can use this prediction to obtain information about data whose target variable is unknown; this data must not be part of the training data.
Question 10 - Predicting selling prices
Imagine you are a real-estate agent in the Boston area, hoping to use this model to help your clients price the homes they wish to sell. You have collected the following information from three clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in the home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (% considered lower class) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |
What selling price would you recommend to each client? Do these prices seem reasonable given the feature values? Why?
Hint: use the statistics you computed in the data-exploration section to justify your answer.
Run the code cell below to use your optimized model to predict the value of each client's home.
End of explanation
"""
#TODO 5
# Hint: you may need X_test, y_test, optimal_reg, performance_metric
# Hint: you may refer to the code in Question 10 for the prediction
# Hint: you may refer to the code in Question 3 to compute the R^2 value
y_pre = optimal_reg.predict(X_test)
r2 = r2_score(y_test,y_pre)
print "Optimal model has R^2 score {:,.2f} on test data".format(r2)
"""
Explanation: Question 10 - Answer:
From the statistics we can see that:
Increasing RM increases MEDV: the house price is positively correlated with the number of rooms.
Increasing LSTAT decreases MEDV: an area's prices are linked to income level; if most residents have low income, prices in that area are likely lower.
Increasing PTRATIO decreases MEDV: more students per teacher means scarcer educational resources in the area, which drags prices down.
So:
- Client 1, predicted price $391,183.33: 5 rooms (medium RM), medium poverty level (medium LSTAT), average educational resources at 15 students per teacher (medium PTRATIO).
- Client 2, predicted price $189,123.53: 4 rooms (low RM), high poverty level (high LSTAT), scarce educational resources at 22 students per teacher (high PTRATIO).
- Client 3, predicted price $942,666.67: 8 rooms (high RM), low poverty level (low LSTAT), good educational resources at 12 students per teacher (low PTRATIO).
The predicted prices look reasonable.
Exercise 5
You have just predicted the selling prices of three clients' homes. In this exercise, you will use your optimal model to make predictions on the whole test set and compute the coefficient of determination R<sup>2</sup> with respect to the target variable.
End of explanation
"""
# First, comment out all print statements inside the fit_model function
vs.PredictTrials(features, prices, fit_model, client_data)
"""
Explanation: Question 11 - Analyzing the coefficient of determination
You just computed the optimal model's coefficient of determination on the test set. How would you evaluate this result?
Question 11 - Answer
There is no absolute threshold for the coefficient of determination; it depends on the model and the context.
Here the coefficient of determination R^2 is 0.77, fairly close to 1, so the result is decent.
This suggests the chosen features are good and explain most of the housing price; one could try adding other useful features to see whether the coefficient of determination improves further.
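As a sanity check on the metric itself, R^2 can be computed by hand (toy numbers below, unrelated to the housing data):

```python
def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

print(r_squared([3.0, 5.0, 7.0], [3.0, 5.0, 7.0]))  # 1.0 for a perfect fit
print(r_squared([3.0, 5.0, 7.0], [5.0, 5.0, 5.0]))  # 0.0 for always predicting the mean
```

A model scoring 0.77 therefore sits well between "no better than the mean" and "perfect".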
Model robustness
An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize to newly added data; sometimes the learning algorithm is not suited to the structure of the data; sometimes the sample itself has too much noise or too few observations for the model to predict the target variable accurately. In these situations the model fails to fit the data well.
Question 12 - Model robustness
Is the model robust enough to guarantee consistent predictions?
Hint: run the code in the cell below to execute the fit_model function 10 times with different training and test sets. Observe how the prediction for a specific client changes as the training data changes.
"""
# TODO 6
# Import the data
# Load the libraries needed for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
# Show the results inline in the notebook
%matplotlib inline
# 1. Import the data
data = pd.read_csv('bj_housing.csv')
area = data['Area']
Room = data['Room']
living = data['Living']
school = data['School']
year = data['Year']
Floor = data['Floor']
prices = data['Value']
features = data.drop('Value', axis = 1)
# Done
print "BJ housing dataset has {} data points with {} variables each.".format(*data.shape)
# 2. Explore the data
minimum_price = prices.min()
maximum_price = prices.max()
mean_price = prices.mean()
median_price = prices.median()
std_price = prices.std()
# Goal: print the computed statistics
print "Statistics for BJ housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
X_train, X_test, y_train, y_test = generate_train_and_test(features, prices)
# Learning curves
vs.ModelLearning(X_train, y_train)
# Generate complexity curves for different maximum-depth parameters
vs.ModelComplexity(X_train, y_train)
optimal_reg = fit_model(X_train, y_train)
# Print the 'max_depth' parameter of the optimal model
print "Parameter 'max_depth' is {} for the optimal model.".format(optimal_reg.get_params()['max_depth'])
# Model evaluation
y_pre = optimal_reg.predict(X_test)
r2 = r2_score(y_test,y_pre)
print "Optimal model has R^2 score {:,.2f} on test data".format(r2)
"""
Explanation: Question 12 - Answer:
Question 13 - Practicality
Briefly discuss whether the model you built could be used in the real world.
Hint: answer the following questions and give reasons for your conclusions:
- Is data collected in 1978 still applicable today, even after accounting for inflation?
- Are the features in the data sufficient to describe a home?
- Can data collected in a big city like Boston be applied to other rural towns?
- Do you think it is reasonable to judge a home's value solely by the environment of its neighborhood?
Question 13 - Answer:
Data collected in 1978, after accounting for inflation, still has some reference value today
The features in the data are not sufficient to describe a home
Data collected in a big city like Boston cannot be carried over to other rural towns
Judging house prices purely by the neighborhood environment is not reasonable
The model could be used in the real world, but it needs more useful features, such as the property developer or the floor of the unit, plus more data for training and validation to improve its accuracy.
Optional question - Predicting Beijing housing prices
(The result of this question does not affect whether the project passes.) Through the practice above, you should now have a good grasp of several common machine-learning concepts. But building a model on 1970s Boston housing data is admittedly not very meaningful for us today. You can now apply what you learned above to the Beijing housing dataset bj_housing.csv.
Disclaimer: since Beijing housing prices are directly affected by many factors such as the macro economy and policy adjustments, the predictions are for reference only.
The features in this dataset are:
- Area: floor area of the home, in square meters
- Room: number of bedrooms
- Living: number of living rooms
- School: whether it is in a school district, 0 or 1
- Year: year the home was built
- Floor: floor the home is on
Target variable:
- Value: selling price of the home, in units of 10,000 RMB
You can follow what you learned above with this dataset: practice splitting and shuffling the data, defining a performance metric, training a model, evaluating model performance, tuning parameters with grid search combined with cross-validation to pick the best ones, compare the differences, and finally obtain the best model's prediction score on the validation set.
End of explanation
"""
|
dennisproppe/fp_python | fp_lesson_3_monads.ipynb | apache-2.0 | class Company():
def __init__(self, name, address=None):
self.address = address
self.name = name
def get_name(self):
return self.name
def get_address(self):
return self.address
"""
Explanation: Monads
Monads are the most feared concept of FP, so I reserve a complete chapter for understanding this concept.
What is a monad?
Right now, my understanding is that monads are a very flexible concept that allows one to attach context to an otherwise stateless system. This means that, through a monad, the application of an otherwise pure function can be made dependent on context, so that the function is executed differently in different contexts.
An easy example: The maybe monad
We will start with an easy example: Let's assume we have the task of looking up a street name from a company record. If we'd do it the normal, non-functional way, we'd have to write functions that look up these records and check if the results are not NULL:
This example is heavily inspired by https://unpythonic.com/01_06_monads/
The following is a simple company class, where the address attribute is a simple dict containing the detailed address information.
End of explanation
"""
cp1 = Company(name="Meier GmbH", address={"street":"Herforstweg 4"})
cp1.get_name()
cp1.get_address()
cp1.get_address().get("street")
"""
Explanation: I now instantiate an instance of this class with a correctly set street attribute in the address dict. Then, everything works well when we want to query the street address from this company:
End of explanation
"""
cp2 = Company("Schultze AG")
cp2.get_name()
cp2.get_address().get("street")
"""
Explanation: However, when we want to get the street name when the company doesn't have a street attribute, this lookup will fail and throw an error:
End of explanation
"""
def get_street(company):
    address = company.get_address()
    if address:
        # 'in' works in both Python 2 and 3 (dict.has_key was removed in Python 3)
        if "street" in address:
            return address.get("street")
        return None
    return None
get_street(cp2)
cp3 = Company(name="Wifi GbR", address={"zipcode": 11476} )
get_street(cp3)
"""
Explanation: What we would normally do to alleviate this issue is to write a function that deals with null values:
End of explanation
"""
class Maybe():
def __init__(self, value):
self.value = value
def bind(self, fn):
if self.value is None:
return self
return fn(self.value)
def get_value(self):
return self.value
"""
Explanation: We now see that we are able to complete the request without an error, returning None, if there is no address given or if there is no dict entry for "street" in the address.
But wouldn't it be nice to have this handled once and for all?
Enter the "Maybe" monad!
End of explanation
"""
def get_address(company):
return Maybe(company.get_address())
def get_street(address):
return Maybe(address.get('street'))
def get_street_from_company(company):
return (Maybe(company)
.bind(get_address)
.bind(get_street)
.get_value())
get_street_from_company(cp1)
get_street_from_company(cp3)
"""
Explanation: Now, we can rewrite get_street as get_street_from_company, using two helper functions
End of explanation
"""
|
wutienyang/facebook_fanpage_analysis | Facebook粉絲頁分析三部曲-爬取篇(posts).ipynb | mit | # Load Python packages
import requests
import datetime
import time
import pandas as pd
"""
Explanation: How do we scrape Facebook fan-page data (posts)?
Essentially, we fetch the fan page's data through the Facebook Graph API, but using the Graph API also requires permissions, which can be obtained in two ways:
The first is to obtain an Access Token
The second is to create a Facebook App and use that application's ID and secret as the credentials
The difference is that the first way has a time limit: the Access Token must be refreshed periodically before it can be used
Access Token
This article uses the second method
We first need the application's ID and secret: app_id, app_secret
End of explanation
"""
# ID of the fan page to analyze
page_id = "appledaily.tw"
app_id = ""
app_secret = ""
access_token = app_id + "|" + app_secret
"""
Explanation: Step 1 - Obtain the application's ID and secret (app_id, app_secret)
Step 2 - Enter the id of the fan page to analyze (page_id)
[Tutorial] How to apply for and create a Facebook App ID
End of explanation
"""
# Check whether the response is OK (status 200); if not, retry after five seconds
def request_until_succeed(url):
success = False
while success is False:
try:
req = requests.get(url)
if req.status_code == 200:
success = True
except Exception as e:
print(e)
time.sleep(5)
print("Error for URL %s: %s" % (url, datetime.datetime.now()))
print("Retrying.")
return req
"""
Explanation: The basic idea of the scraper is to send requests through the Facebook Graph API to fetch the data.
Each request is just a URL, and that URL returns the data you need according to your settings (the fields you want to fetch).
When scraping a large fan page, errors can easily occur because you are sending too many requests.
The fix here is simple: a while loop that, on error, sleeps for 5 seconds and then resends the request.
The work is done by 5 functions:
request_until_succeed
ensures the fetch completes
getFacebookPageFeedData
produces each post's fields (message, link, created_time, type, name, id...)
getReactionsForStatus
obtains the counts of each reaction (like, angry, sad ...) for a post
processFacebookPageFeedStatus
takes the raw fields from getFacebookPageFeedData and structures them
scrapeFacebookPageFeedStatus
is the main routine
End of explanation
"""
# Fetch Facebook data
def getFacebookPageFeedData(page_id, access_token, num_statuses):
# Construct the URL string; see http://stackoverflow.com/a/37239851 for
# Reactions parameters
base = "https://graph.facebook.com/v2.6"
node = "/%s/posts" % page_id
fields = "/?fields=message,link,created_time,type,name,id," + \
"comments.limit(0).summary(true),shares,reactions" + \
".limit(0).summary(true)"
parameters = "&limit=%s&access_token=%s" % (num_statuses, access_token)
url = base + node + fields + parameters
    # Fetch the data
data = request_until_succeed(url).json()
return data
# Get the counts of reactions (like, love, wow, haha, sad, angry) for the post
def getReactionsForStatus(status_id, access_token):
# See http://stackoverflow.com/a/37239851 for Reactions parameters
# Reactions are only accessable at a single-post endpoint
base = "https://graph.facebook.com/v2.6"
node = "/%s" % status_id
reactions = "/?fields=" \
"reactions.type(LIKE).limit(0).summary(total_count).as(like)" \
",reactions.type(LOVE).limit(0).summary(total_count).as(love)" \
",reactions.type(WOW).limit(0).summary(total_count).as(wow)" \
",reactions.type(HAHA).limit(0).summary(total_count).as(haha)" \
",reactions.type(SAD).limit(0).summary(total_count).as(sad)" \
",reactions.type(ANGRY).limit(0).summary(total_count).as(angry)"
parameters = "&access_token=%s" % access_token
url = base + node + reactions + parameters
    # Fetch the data
data = request_until_succeed(url).json()
return data
"""
Explanation: url = base + node + fields + parameters
base: sets the Facebook Graph API version; here we use v2.6
node: which fan page's posts to analyze, set via page_id
fields: the kinds of data you want to fetch
parameters: the access credentials and how many posts to fetch per request (num_statuses)
End of explanation
"""
def processFacebookPageFeedStatus(status, access_token):
    # Check whether the fetched fields are empty
status_id = status['id']
status_type = status['type']
if 'message' not in status.keys():
status_message = ''
else:
status_message = status['message']
if 'name' not in status.keys():
link_name = ''
else:
link_name = status['name']
link = status_id.split('_')
# 此連結可以回到該臉書上的post
status_link = 'https://www.facebook.com/'+link[0]+'/posts/'+link[1]
status_published = datetime.datetime.strptime(status['created_time'],'%Y-%m-%dT%H:%M:%S+0000')
    # Adjust for the local time zone (TW +8)
status_published = status_published + datetime.timedelta(hours=8)
status_published = status_published.strftime('%Y-%m-%d %H:%M:%S')
    # Check whether the fetched fields are empty
if 'reactions' not in status:
num_reactions = 0
else:
num_reactions = status['reactions']['summary']['total_count']
if 'comments' not in status:
num_comments = 0
else:
num_comments = status['comments']['summary']['total_count']
if 'shares' not in status:
num_shares = 0
else:
num_shares = status['shares']['count']
def get_num_total_reactions(reaction_type, reactions):
if reaction_type not in reactions:
return 0
else:
return reactions[reaction_type]['summary']['total_count']
    # Get the counts of reactions (like, love, wow, haha, sad, angry) for the post
reactions = getReactionsForStatus(status_id, access_token)
num_loves = get_num_total_reactions('love', reactions)
num_wows = get_num_total_reactions('wow', reactions)
num_hahas = get_num_total_reactions('haha', reactions)
num_sads = get_num_total_reactions('sad', reactions)
num_angrys = get_num_total_reactions('angry', reactions)
num_likes = get_num_total_reactions('like', reactions)
    # Return the data as a tuple
return (status_id, status_message, link_name, status_type, status_link,
status_published, num_reactions, num_comments, num_shares,
num_likes, num_loves, num_wows, num_hahas, num_sads, num_angrys)
"""
Explanation: Build status_link; this link leads back to the post on Facebook
status_published = status_published + datetime.timedelta(hours=8) adjusts for the local time zone (TW +8)
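The timestamp handling can be checked in isolation; the format string matches the created_time field used in the code above, and the +8-hour shift is for Taiwan time (the sample timestamp below is made up):

```python
import datetime

raw = '2017-01-31T16:30:00+0000'  # hypothetical created_time value
utc = datetime.datetime.strptime(raw, '%Y-%m-%dT%H:%M:%S+0000')
local = utc + datetime.timedelta(hours=8)
print(local.strftime('%Y-%m-%d %H:%M:%S'))  # 2017-02-01 00:30:00
```

Note that the +8 shift can roll the date over to the next day, as it does here.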
End of explanation
"""
def scrapeFacebookPageFeedStatus(page_id, access_token):
    # all_statuses is the list used for storage; put the column names in first
all_statuses = [('status_id', 'status_message', 'link_name', 'status_type', 'status_link',
'status_published', 'num_reactions', 'num_comments', 'num_shares',
'num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys')]
has_next_page = True
    num_processed = 0 # counts how many posts have been processed
scrape_starttime = datetime.datetime.now()
print("Scraping %s Facebook Page: %s\n" % (page_id, scrape_starttime))
statuses = getFacebookPageFeedData(page_id, access_token, 100)
while has_next_page:
for status in statuses['data']:
            # Store the structured data in all_statuses only if the post has reactions
if 'reactions' in status:
all_statuses.append(processFacebookPageFeedStatus(status,access_token))
            # Track progress: print the time for every 100 posts processed
num_processed += 1
if num_processed % 100 == 0:
print("%s Statuses Processed: %s" % (num_processed, datetime.datetime.now()))
        # Every 100 posts there is a 'next' link for fetching the following 100, until no 'next' remains
if 'paging' in statuses.keys():
statuses = request_until_succeed(statuses['paging']['next']).json()
else:
has_next_page = False
print("\nDone!\n%s Statuses Processed in %s" % \
(num_processed, datetime.datetime.now() - scrape_starttime))
return all_statuses
all_statuses = scrapeFacebookPageFeedStatus(page_id, access_token)
"""
Explanation: Suppose a fan page has 250 posts.
First, getFacebookPageFeedData builds the URL, which is sent to request_until_succeed
to obtain the first dictionary.
The dictionary has two keys: one is data (holding up to 100 records),
and the other is next (containing the URL for the next 100; sending it yields another dictionary, again with the two keys data and next).
First request -> data: records 1-100, next: URL for the following 100
Second request -> data: records 101-200, next: URL for the following 100
Third request -> data: records 201-250, next: none (there is no further page)
Three requests in total.
Since Facebook limits each request to at most 100 posts, when a fan page has more than 100 posts
there is a next URL; sending it retrieves the next 100 posts, and has_next_page decides
whether to continue with the next page.
num_processed counts how many posts have been processed; the time is printed for every 100 processed.
Finally, the results are written out as a CSV for analysis and prediction in the following chapters.
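The paging logic can be exercised without touching the real API by faking the data/paging structure the Graph API returns (the page names below are invented):

```python
# Fake responses: each "page" carries up to 2 posts and a 'next' cursor.
pages = {
    'page1': {'data': [1, 2], 'paging': {'next': 'page2'}},
    'page2': {'data': [3, 4], 'paging': {'next': 'page3'}},
    'page3': {'data': [5]},  # last page: no 'paging' key, so the loop stops
}

def fetch(url):
    """Stand-in for request_until_succeed(url).json()."""
    return pages[url]

def collect_all(start_url):
    statuses = fetch(start_url)
    collected = []
    has_next_page = True
    while has_next_page:
        collected.extend(statuses['data'])
        if 'paging' in statuses:
            statuses = fetch(statuses['paging']['next'])
        else:
            has_next_page = False
    return collected

print(collect_all('page1'))  # [1, 2, 3, 4, 5]
```

This mirrors the while loop in scrapeFacebookPageFeedStatus: follow 'next' until a response arrives without a 'paging' key.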
End of explanation
"""
df = pd.DataFrame(all_statuses[1:], columns=all_statuses[0])
df.head()
path = 'post/'+page_id+'_post.csv'
df.to_csv(path,index=False,encoding='utf8')
"""
Explanation: 5,234 posts took about 20 minutes in total; save the results to CSV and hand them to the next chapter for analysis
all_statuses[0] holds the column names
all_statuses[1:] holds the processed, structured data
End of explanation
"""
|
daniel-acuna/python_data_science_intro | notebooks/Introduction to Python and Notebook.ipynb | mit | import sklearn
"""
Explanation: Jupyter notebook
Before starting, let's take a look at the Jupyter notebook.
Stopping and halting a kernel
Looking at which notebooks are running
Cells
Adding cells above and below
Changing type of cell from Markdown to Code
Adding math
Class and objects
To import a module, you use the word import and then the name of the module
End of explanation
"""
# try it below
from sklearn import
# this also works with submodules
from sklearn.linear_model import
# from the submodule linear_model, lets import LinearRegression
from sklearn.linear_model import LinearRegression
"""
Explanation: You are able to import this because the module sklearn is already part of the Anaconda distribution.
You can explore the modules that are part of sklearn by doing from sklearn import and then pressing Tab.
End of explanation
"""
LinearRegression.__bases__
"""
Explanation: Python is based on object-oriented programming (OOP).
Objects are containers of data and functionality
Objects belong to a class, and that class might inherit functionality from other classes
A class defines how the objects of that class store data and how those objects behave
The imported LinearRegression is a class definition. You can find the parents of a class by retrieving the __bases__ property
End of explanation
"""
# try it below
LinearRegression()
"""
Explanation: To create an object, you call the class with parameters. To retrieve the possible parameters of class (or function) in the notebook,
you can Shift-Tab (preview), double Shift-Tab (expanded window), triple Shift-Tab (expanded window with no time out), quadruple Shift-Tab (for split view of help)
End of explanation
"""
lr = LinearRegression()
"""
Explanation: Now, let's create a linear regression object
End of explanation
"""
# try it here
lr.
"""
Explanation: Again, we can explore that object by typing the name of the object, then ., and then Tab
End of explanation
"""
lr
"""
Explanation: If we type lr into the notebook, we will get a customized description of the object
End of explanation
"""
type(lr)
"""
Explanation: We can obtain a more programmatic class description by calling the built-in type command
End of explanation
"""
id(lr)
"""
Explanation: Now, objects have a global identity
End of explanation
"""
from sklearn.datasets import load_diabetes
diabetes_ds = load_diabetes()
X = diabetes_ds['data']
y = diabetes_ds['target']
"""
Explanation: Datasets
sklearn has many datasets. We will take a diabetes dataset from it
End of explanation
"""
[type(X), type(y)]
"""
Explanation: sklearn works mostly with numpy arrays, which are $n$-dimensional arrays.
End of explanation
"""
X.ndim
"""
Explanation: Numpy arrays
You can check the number of dimensions of an array
End of explanation
"""
X.shape
"""
Explanation: Check the size of the dimensions
End of explanation
"""
X[0:2]
X[:2]
X[0:2, :]
"""
Explanation: Get slices of the dimensions. The following are all the same thing: grab the first two rows of a matrix
End of explanation
"""
X[:, 0:2]
"""
Explanation: We can also grab columns in the same way
End of explanation
"""
X[:, 2].shape
"""
Explanation: Sometimes you want to grab just one column (feature), but numpy returns a one-dimensional object
End of explanation
"""
X[:, 2].reshape([-1, 1])
X[:, 2].reshape([-1, 1]).shape
"""
Explanation: We can reshape the $nd$-array and add one dimension:
End of explanation
"""
# transpose
X.T.shape
X.dot(X.T).shape
"""
Explanation: You can do matrix algebra:
End of explanation
"""
import numpy.linalg as la
la.inv(X.dot(X.T)).shape
"""
Explanation: For more functions, you can import numpy's linear-algebra module
End of explanation
"""
# explore the parameters of fit
lr.fit
lr2 = lr.fit(X[:, [2]], y)
"""
Explanation: Fitting models
OK, let's go back to our example with linear regression.
Usually sklearn objects starts by fitting the data, then either predicting or transforming new data. Predicting is usually for supervised learning and transforming is for unsupervised learning.
End of explanation
"""
id(lr2)
id(lr)
"""
Explanation: fit returns an object. If we examine the id of the object it returns:
End of explanation
"""
lr.intercept_
lr.coef_
"""
Explanation: We realize that it is the same object lr, therefore, the call is fitting the data and modifying the internal structure of the object and it is returning itself.
Therefore, you can chain calls, which is very powerful feature.
Explore the fitted object
By looking at the online documentation of the LinearRegression, we can know the parameters it found.
End of explanation
"""
# explore the parameters
lr.predict
y_pred = lr.predict(X[:, [2]])
"""
Explanation: Predicting
End of explanation
"""
import numpy as np

y_pred2 = lr.intercept_ + X[:, [2]].dot(lr.coef_)
# this checks that all entries in the comparison are True
np.all(y_pred2 == y_pred)
"""
Explanation: Because we know how linear regression works, we can produce the predictions ourselves
End of explanation
"""
y_pred3 = lr.fit(X[:, [2]], y).predict(X[:, [2]])
np.all(y_pred3 == y_pred)
"""
Explanation: Now, due to the powerful concept of chaining, we can combine fit and predict in one line
End of explanation
"""
import quandl
mydata = quandl.get("YAHOO/AAPL")
mydata.head()
"""
Explanation: Additional packages
Sometimes you want to use a package that you found online. Many of these packages are available throught the Python Install Packages (PIP) package manager.
For example, the package quandl allows quants to load financial data in Python.
We can install it in the console simply by typing
pip install quandl
And now we should be able to import that package
End of explanation
"""
# this helps put the plot results in the browser
%matplotlib inline
"""
Explanation: Pandas
End of explanation
"""
import pandas as pd
"""
Explanation: Pandas is a package for loading, manipulating, and displaying data sets. It tries to mimic the functionality of data.frame in R
End of explanation
"""
apple_stocks = quandl.get("YAHOO/AAPL")
type(apple_stocks)
"""
Explanation: Many packages return data in pandas DataFrame objects
End of explanation
"""
apple_stocks.head()
apple_stocks.tail()
"""
Explanation: We can display the beginning of a data frame:
End of explanation
"""
apple_stocks.plot(y='Close');
"""
Explanation: And also, we can plot it with pandas
End of explanation
"""
apple_stocks[['Close']].pct_change().head()
apple_stocks[['Close']].pct_change().plot();
apple_stocks[['Close']].pct_change().hist(bins=100);
"""
Explanation: We can manipulate it too. Let's say we want to compute the stock returns
$$ r_t = \frac{V_t - V_{t-1}}{V_{t-1}} $$
For this, pandas provides the pct_change method, which computes each value's change relative to the previous row
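To see exactly what pct_change computes, we can compare it against the formula on a tiny made-up price series:

```python
import pandas as pd

prices = pd.Series([100.0, 110.0, 99.0])
returns = prices.pct_change()           # (V_t - V_{t-1}) / V_{t-1}
manual = prices / prices.shift(1) - 1   # same quantity, written out
print(returns.round(4).tolist()[1:])    # [0.1, -0.1]
```

The first entry is NaN, since there is no previous value to compare against.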
End of explanation
"""
# explore the variables and functions availabe in the Spark context
sc
"""
Explanation: Spark
Spark is a distributed in-memory big data analytics framework. It is hadoop on steriods.
Because we launched this jupyter notebook with pyspark, we have available automatically a variable called Spark context sc which gives us access to the master and therefore to the workers.
If we go to see the Spark dashboard (usually in port 4040), we can see some of the variables.
With Spark context you can read data from many sources, including HDFS (Hadoop File System), Hive, Amazon's S3, files, and databases.
End of explanation
"""
rdd_example = sc.parallelize([1, 2, 3, 4, 5, 6, 7])
"""
Explanation: Spark usually works with RDDs (Resilient Distributed Datasets), and more recently it is moving towards DataFrames, which are similar to pandas DataFrames but distributed instead.
End of explanation
"""
rdd_example.id()
# this is a RDD
type(rdd_example)
"""
Explanation: We can check the id of the RDD in the cluster
End of explanation
"""
rdd_example.
"""
Explanation: Let's explore the functions we have available
End of explanation
"""
rdd_example.take(3)
"""
Explanation: One such function is take, which lets you get a taste of what the RDD contains
End of explanation
"""
def square(x):
return x**2
"""
Explanation: Let's say you want to apply an operation to each element of the list
End of explanation
"""
rdd_result = rdd_example.map(square)
"""
Explanation: now we can apply that transformation to the RDD with the map function
End of explanation
"""
type(rdd_result)
"""
Explanation: Now you might notice that this returns immediately. Well, this is because operations on RDDs are lazily evaluated
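Lazy evaluation is not unique to Spark; plain Python generators behave the same way (a rough analogy in ordinary Python, not Spark code):

```python
def lazy_map(fn, items):
    """Like an RDD transformation: building the generator does no work yet."""
    for x in items:
        yield fn(x)

squared = lazy_map(lambda x: x ** 2, [1, 2, 3, 4])  # nothing computed so far
print(next(squared))  # 1   <- the "action": values are computed on demand
print(list(squared))  # [4, 9, 16]
```

Just as with RDDs, the work only happens when a value is actually requested.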
End of explanation
"""
rdd_result.id()
"""
Explanation: So rdd_result is another RDD
End of explanation
"""
rdd_result.take(3)
rdd_result.count()
rdd_result.first()
"""
Explanation: Now in fact, there is no duplication of data. Spark builds a computational graph that keeps track of dependencies and recomputes if something crashes.
We can take a look at the contents of the results by using take again. Since take is an action, it will trigger a job in the Spark cluster
End of explanation
"""
# this function can save into HDFS using Pickle (Python's internal) format;
# it requires an output path (the name below is just an example)
rdd_result.saveAsPickleFile('rdd_result_pickle')
"""
Explanation: Usually, once you have your results, you write them back to Hadoop for later processing, because they usually won't fit in memory.
End of explanation
"""
from sklearn.datasets import load_diabetes
import pandas as pd
diabetes_ds = load_diabetes()
"""
Explanation: Spark's DataFrame
Now, DataFrame has some structure. Again, you can create them from different sources. In this case, DataFrame funcionality is available from another context called the sqlContext. This gives us access to SQL-like transformations.
In this example, we will use the sklearn diabetes dataset again
End of explanation
"""
from pyspark.mllib.regression import LabeledPoint
from pyspark.ml.linalg import Vectors
Xy_df = sqlContext.createDataFrame([
[float(l), Vectors.dense(d)] for d, l in zip(diabetes_ds['data'], diabetes_ds['target'])],
["y", "features"])
Xy_df
"""
Explanation: To create a dataset useful for machine learning we need to use certain datatypes
End of explanation
"""
Xy_df.registerTempTable('Xy')
"""
Explanation: We can register the table in Spark as an SQL
End of explanation
"""
sql_result1_df = sqlContext.sql('select count(*) from Xy')
# which again is lazily executed
sql_result1_df
sql_result1_df.take(1)
"""
Explanation: And then run queries
End of explanation
"""
from pyspark.ml.regression import LinearRegression
lr_spark = LinearRegression(featuresCol='features', labelCol="y")
lr_results = lr_spark.fit(Xy_df)
lr_results.coefficients
lr_results.intercept
"""
Explanation: We can again run large-scale regression using DataFrames
End of explanation
"""
|
0u812/nteract | example-notebooks/omex-basics.ipynb | bsd-3-clause | // -- Begin Antimony block
model *myModel()
// Compartments and Species:
species S1, S2;
// Reactions:
_J0: S1 -> S2; k1*S1;
// Species initializations:
S1 = 10;
S2 = 0;
// Variable initializations:
k1 = 1;
// Other declarations:
const k1;
end
// -- End Antimony block
// -- Begin PhraSEDML block
// Models
model1 = model "myModel"
// Simulations
sim1 = simulate uniform(0, 5, 100)
// Tasks
task1 = run sim1 on model1
// Outputs
plot "Figure 1" time vs S1, S2
// -- End PhraSEDML block
"""
Explanation: <a id="topcell"></a>
COMBINE Archive Basics
Tellurium offers the ability to embed entire COMBINE archives within notebook cells. These inline OMEX cells are represented in a human-readable format that can be easily edited by hand.
This notebook shows some basic examples of COMBINE archives and how to use them in Tellurium. You can export any of these examples by clicking on the diskette icon in the upper right part of the cell.
OMEX cells
About
In Tellurium, a special type of IPython cell exists for Combine archives (a.k.a. the Open Modeling EXchange format, OMEX). It is denoted by a small Combine archive icon next to the execution counter in the upper left corner of the cell.
Creating
A Combine archive cell may be created by importing from a Combine archive on disk. Move the mouse past the last cell in a notebook to show the cell creator bar. Click on the "Import" button, and choose "Import Combine archive...". Alternatively, simply click "New" -> "Combine archive" to create an empty Combine archive cell.
Contents
Example 1: Basic demo
Example 2: Dual Simulations
Example 3: Stochastic Ensemble
Example 4: Phase portrait
Example 5: Parameter scanning
<a id="basicdemo"></a>
Example 1: Basic demo
The example shows a minimal Combine archive containing an SBML model (myModel) representing conversion of species S1 to S2, a single timecourse simulation, and a plot.
Back to top
End of explanation
"""
// SBML Part
model *myModel()
// Reactions:
J0: A -> B; k*A;
A = 10;
k = 1;
end
// SED-ML Part
// Models
model1 = model "myModel"
// Simulations
simulation1 = simulate uniform(0, 5, 100)
simulation2 = simulate uniform_stochastic(0, 5, 100)
// Tasks
task1 = run simulation1 on model1
task2 = run simulation2 on model1
// Outputs
plot "Deterministic Solution" task1.time vs task1.A, task1.B
plot "Stochastic Solution" task2.time vs task2.A, task2.B
"""
Explanation: <a id="dualdemo"></a>
Example 2: Dual Simulations
This example plots a deterministic simulation and a stochastic simulation of the same system.
Back to top
End of explanation
"""
// SBML Part
model *myModel()
// Reactions:
J0: A -> B; k*A;
A = 100;
k = 1;
end
// SED-ML Part
// Models
model1 = model "myModel"
// Simulations
simulation1 = simulate uniform_stochastic(0, 5, 100)
// Tasks
task1 = run simulation1 on model1
repeat1 = repeat task1 for \
local.x in uniform(0,25,25), reset=True
// Outputs
plot "Stochastic Ensemble" repeat1.time vs repeat1.A, repeat1.B
"""
Explanation: <a id="ensemble"></a>
Example 3: Stochastic Ensemble
This example uses a repeated task to run multiple copies of a stochastic simulation, then plots the ensemble.
Back to top
End of explanation
"""
// -- Begin Antimony block
model *lorenz()
// Rate Rules:
x' = sigma*(y - x);
y' = x*(rho - z) - y;
z' = x*y - beta*z;
// Variable initializations:
x = 0.96259;
sigma = 10;
y = 2.07272;
rho = 28;
z = 18.65888;
beta = 2.67;
// Other declarations:
var x, y, z;
const sigma, rho, beta;
end
// -- End Antimony block
// -- Begin PhraSEDML block
// Models
model1 = model "lorenz"
// Simulations
sim1 = simulate uniform(0, 15, 2000)
// Tasks
task1 = run sim1 on model1
// Outputs
plot "Phase Portrait" z vs x
// -- End PhraSEDML block
"""
Explanation: <a id="phaseportrait"></a>
Example 4: Phase portrait
In addition to timecourse plots, SED-ML can also be used to create phase portraits. This is useful to show the presence (or absence, in this case) of limit cycles. Here, we use the well-known Lorenz attractor to show this feature.
Back to top
End of explanation
"""
// -- Begin Antimony block
model *MAPKcascade()
// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;
// Reactions:
J0: MKKK => MKKK_P; J0_V1*MKKK/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; J1_V2*MKKK_P/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; J2_k3*MKKK_P*MKK/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; J3_k4*MKKK_P*MKK_P/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; J4_V5*MKK_PP/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; J5_V6*MKK_P/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; J6_k7*MKK_PP*MAPK/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; J7_k8*MKK_PP*MAPK_P/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; J8_V9*MAPK_PP/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; J9_V10*MAPK_P/(J9_KK10 + MAPK_P);
// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;
// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
// -- End Antimony block
// -- Begin PhraSEDML block
// Models
model1 = model "MAPKcascade"
// Simulations
sim1 = simulate uniform(0, 4000, 1000)
// Tasks
task1 = run sim1 on model1
// Repeated Tasks
repeat1 = repeat task1 for model1.J1_KK2 in [1, 10, 40], reset=true
// Outputs
plot "Sampled Simulation" repeat1.time vs repeat1.MKK, repeat1.MKK_P, repeat1.MAPK_PP
// -- End PhraSEDML block
"""
Explanation: <a id="paramscan"></a>
Example 5: Parameter scanning
Through the use of repeated tasks, SED-ML can be used to scan through parameter values. This example shows how to scan through a set of predefined values for a kinetic parameter (J1_KK2).
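Semantically, the repeated task above is just a reset-and-rerun loop over the parameter values. A plain-Python sketch of that semantics (here `run_task` is a hypothetical stand-in for executing the base task; the real run happens in a SED-ML-aware tool such as tellurium):

```python
def scan(run_task, values, reset=True):
    """Sketch of a SED-ML repeated task: for each value, set the parameter
    (optionally on a freshly reset model state) and rerun the base task."""
    results = []
    state = {"J1_KK2": 0.25, "t": 0.0}          # initial model state
    for v in values:
        if reset:
            state = {"J1_KK2": 0.25, "t": 0.0}  # reset=true: start from scratch
        state["J1_KK2"] = v
        results.append(run_task(state))          # "run sim1 on model1"
    return results

scan(lambda s: s["J1_KK2"], [1, 10, 40])  # -> [1, 10, 40]
```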
Back to top
End of explanation
"""
|
cliburn/sta-663-2017 | worksheet/Mock Midterms 2 Solutions.ipynb | mit | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%load_ext rpy2.ipython
"""
Explanation: Midterm exams
This is a "closed book" examination - in particular, you are not to use any resources outside of this notebook (except possibly pen and paper). You may consult help from within the notebook using ? but not any online references. You should turn wireless off or set your laptop in "Airplane" mode prior to taking the exam.
You have 2 hours to complete the exam.
End of explanation
"""
df = pd.read_csv('data/iris.csv')
df.head(4)
df.groupby('Species').mean()
sns.lmplot(x='Petal.Length', y='Sepal.Length', hue='Species', data=df, fit_reg=False)
pass
"""
Explanation: Q1 (10 points).
Read the data/iris.csv data set into a Pandas DataFrame. Display the first 4 lines of the DataFrame. (2 points)
Create a new DataFrame showing the mean SepalLength, SepalWidth, PetalLength and PetalWidth for the 3 different types of irises. (4 points)
Make a scatter plot of SepalLength against PetalLength where each species is assigned a different color. (4 points)
End of explanation
"""
def peek(df, n):
"""Display a random selection of n rows of datafraem df."""
if df.shape[0] < n:
return df
idx = np.random.choice(df.shape[0], n)
return df.iloc[idx]
peek(df, 5)
"""
Explanation: Q2 (10 points)
Write a function peek(df, n) to display a random selection of $n$ rows of any dataframe (without repetition). Use it to show 5 random rows from the iris data set. The function should take as inputs a dataframe and an integer. Do not use the pandas sample method.
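One subtlety: `np.random.choice` samples with replacement unless `replace=False` is passed, which would violate the no-repetition requirement. A self-contained sketch of repetition-free index sampling via a permutation (`peek_indices` is a hypothetical helper name):

```python
import numpy as np

def peek_indices(n_rows, n, rng=None):
    """Pick n distinct row positions out of n_rows (all of them if n_rows < n)."""
    rng = np.random.default_rng(rng)
    if n_rows < n:
        return np.arange(n_rows)
    return rng.permutation(n_rows)[:n]

idx = peek_indices(150, 5)  # e.g. for the 150-row iris frame: df.iloc[idx]
```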
End of explanation
"""
import numpy as np
def norm(A):
return np.sqrt(np.sum(A**2))
def cosine_distance(A, B):
"""Cosine distance."""
return (A @ B)/(norm(A)*norm(B))
A = np.array([1,2,3,4])
B = np.array([2,3,1,4])
cosine_distance(A, B)
def cosine_distance_matrix(M, N):
"""Cosine distance between each pair of vectors in M and N."""
m = len(M)
n = len(N)
res = np.zeros((m,n))
for i in range(m):
for j in range(n):
res[i, j] = cosine_distance(M[i], N[j])
return res
M = [A, B]
cosine_distance_matrix(M, M)
"""
Explanation: Q3 (10 points)
Write a function that when given $m$ vectors of length $k$ and another $n$ vectors of length $k$, returns an $m \times n$ matrix of the cosine distance between each pair of vectors. Take the cosine distance to be
$$
\frac{A \cdot B}{\|A\| \|B\|}
$$
for any two vectors $A$ and $B$.
Do not use the scipy.spatial.distance.cosine function or any functions from np.linalg or scipy.linalg.
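The double loop is fine; with NumPy broadcasting the whole matrix can also be computed in one shot, still using only element-wise operations and matrix multiplication (so it stays within the question's constraints). A sketch, equivalent to the loop version:

```python
import numpy as np

def cosine_distance_matrix_vec(M, N):
    """All pairwise A.B / (||A|| ||B||) values at once via broadcasting."""
    M = np.asarray(M, dtype=float)
    N = np.asarray(N, dtype=float)
    dots = M @ N.T                           # (m, n) matrix of dot products
    norms_m = np.sqrt((M ** 2).sum(axis=1))  # length-m vector of ||A||
    norms_n = np.sqrt((N ** 2).sum(axis=1))  # length-n vector of ||B||
    return dots / np.outer(norms_m, norms_n)

A = np.array([1, 2, 3, 4])
B = np.array([2, 3, 1, 4])
cosine_distance_matrix_vec([A, B], [A, B])  # diagonal entries are 1.0
```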
End of explanation
"""
A = np.array([[5, 5, 2, 6, 2, 0],
[8, 6, 7, 8, 9, 7],
[9, 5, 0, 4, 6, 8],
[8, 7, 9, 3, 6, 1]])
v = np.array([1,2,3,4])
A + v[:, np.newaxis]
row_means = np.mean(A, axis=1)
A1 = A - row_means[:, np.newaxis]
A1
U, s, V = np.linalg.svd(A1)
s
np.linalg.eigvals(np.cov(A1))
y = np.array([1,2,3,4])
x, res, rank, s = np.linalg.lstsq(A, y)
x
"""
Explanation: Q4 (10 points)
Consider the following matrix $A$ with dimensions (4,6), to be interpreted as 4 rows of the measurements of 6 features.
python
np.array([[5, 5, 2, 6, 2, 0],
[8, 6, 7, 8, 9, 7],
[9, 5, 0, 4, 6, 8],
[8, 7, 9, 3, 6, 1]])
Add 1 to the first row, 2 to the second row, 3 to the third row and 4 to the fourth row using a vector v = np.array([1,2,3,4]) and broadcasting. (2 points)
Normalize A so that its row means are all 0 and call it A1. (2 points)
What are the singular values of A1? (2 points)
What are the eigenvalues of the covariance matrix of A1? (2 points)
Find the least squares solution vector $x$ if $Ax = y$ where y = np.array([1,2,3,4]).T (2 points)
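Parts 3 and 4 of this question are two views of the same quantity: for the row-centered A1, the eigenvalues of np.cov(A1) (which by default treats each row as a variable and each column as an observation) equal the squared singular values divided by (number of columns − 1). A sketch verifying this relationship:

```python
import numpy as np

A = np.array([[5, 5, 2, 6, 2, 0],
              [8, 6, 7, 8, 9, 7],
              [9, 5, 0, 4, 6, 8],
              [8, 7, 9, 3, 6, 1]], dtype=float)
A1 = A - A.mean(axis=1, keepdims=True)   # center each row

s = np.linalg.svd(A1, compute_uv=False)  # singular values
eig = np.linalg.eigvalsh(np.cov(A1))     # eigenvalues of the 4x4 covariance

# Same numbers up to ordering: eig_i = s_i**2 / (n_cols - 1)
assert np.allclose(sorted(s ** 2 / (A1.shape[1] - 1)), sorted(eig))
```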
End of explanation
"""
def catalan(n):
k = np.arange(2, n+1)
return np.prod((n+k)/k)
def catalan_python(n):
ans = np.zeros(n)
for m in range(1, n+1):
ans[m-1] = catalan(m)
return ans
from numba import jit
@jit(nopython=True)
def catalan_(n):
s = 1
for k in range(2, n+1):
s *= (n+k)/k
return s
@jit(nopython=True)
def catalan_numba(n):
ans = np.zeros(n)
for m in range(1, n+1):
ans[m-1] = catalan_(m)
return ans
%load_ext cython
%%cython -a
cimport cython
import numpy as np
# @cython.cdivision
cdef double catalan(int n):
cdef double s = 1
cdef int k
# s = 1
for k in range(2, n+1):
s *= (n+k)/k
return s
@cython.wraparound(False)
@cython.boundscheck(False)
def catalan_cython(int n):
cdef int m
cdef double[:] ans = np.zeros(n)
for m in range(1, n+1):
ans[m-1] = catalan(m)
return np.array(ans)
catalan_python(10)
catalan_cython(10)
%timeit ans0 = catalan_python(100)
%timeit ans1 = catalan_numba(100)
%timeit ans2 = catalan_cython(100)
"""
Explanation: Q10 (10 points)
We want to calculate the first 100 Catalan numbers. The $n^\text{th}$ Catalan number is given by
$$
C_n = \prod_{k=2}^n \frac{n+k}{k}
$$
for $n \ge 0$.
Use numpy to find the first 100 Catalan number - the function should take a single argument $n$ and return an array [Catalan(1), Catalan(2), ..., Catalan(n)] (4 points).
Use numba to find the first 100 Catalan numbers (starting from 1) fast using JIT compilation (4 points)
Use cython to find the first 100 Catalan numbers (starting from 1) fast using AOT compilation (4 points)
In each case, code readability and efficiency are important.
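A quick sanity check for any of the three versions: the product above equals the closed form $C_n = \binom{2n}{n} / (n+1)$, so the results can be validated with exact integer arithmetic. A sketch:

```python
from math import comb  # Python 3.8+

def catalan_exact(n):
    """n-th Catalan number via the closed form binom(2n, n) // (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_prod(n):
    """Same number via the product formula used in the question."""
    s = 1.0
    for k in range(2, n + 1):
        s *= (n + k) / k
    return s

# The two agree (up to float rounding) for small n:
assert all(round(catalan_prod(n)) == catalan_exact(n) for n in range(1, 21))
```

The closed form also makes it clear why the float-based versions eventually lose precision: Catalan numbers grow past the 2^53 double-precision limit when n is in the low thirties.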
End of explanation
"""
|
openfisca/openfisca-france-indirect-taxation | openfisca_france_indirect_taxation/examples/notebooks/depenses_agregats_transports.ipynb | agpl-3.0 | import seaborn
seaborn.set_palette(seaborn.color_palette("Set2", 12))
%matplotlib inline
from ipp_macro_series_parser.agregats_transports.transports_cleaner import a3_a
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_line, graph_builder_line_percent
"""
Explanation: This script produces charts from the French transport accounts data (Comptes des Transports), i.e. our reference aggregates for transport: total household spending on fuel, and the share of that spending in household consumption
Import of functions specific to OpenFisca indirect taxation and of the Comptes des Transports databases
End of explanation
"""
a3_a['to_be_used'] = 0
a3_a.loc[a3_a['index'] == u'0722 Carburants et lubrifiants (1)', 'to_be_used'] = 1
a3_a.loc[a3_a['index'] == u'0722 Carburants et lubrifiants (1)', 'index'] = u'Dépenses carburants et lubrifiants'
a3_a.loc[a3_a['index'] == '07 Transport', 'to_be_used'] = 1
a3_a.loc[a3_a['index'] == '07 Transport', 'index'] = u'Dépenses totales en transports'
a3_a.loc[a3_a['index'] == u'Ensemble des dépenses de consommation des ménages ', 'to_be_used'] = 1
depenses_menages_transports = a3_a[a3_a['to_be_used'] == 1]
depenses_menages_transports = depenses_menages_transports.drop(['to_be_used'] + ['categorie'], axis = 1)
depenses_menages_transports = depenses_menages_transports.set_index(['index'])
depenses_menages_transports = depenses_menages_transports.transpose()
"""
Explanation: Selecting the variables used in the charts
End of explanation
"""
depenses_menages_transports[u'part carburants dépenses totales'] = (
depenses_menages_transports[u'Dépenses carburants et lubrifiants'] /
depenses_menages_transports[u'Ensemble des dépenses de consommation des ménages ']
)
depenses_menages_transports[u'part transports dépenses totales'] = (
depenses_menages_transports[u'Dépenses totales en transports'] /
depenses_menages_transports[u'Ensemble des dépenses de consommation des ménages ']
)
"""
Explanation: Computing the shares of transport and fuel in total household spending
End of explanation
"""
print 'Evolution des dépenses des ménages en carburants'
graph_builder_line(depenses_menages_transports[u'Dépenses carburants et lubrifiants'])
print 'Evolution de la part des carburants et des transports dans les dépenses totales des ménages'
graph_builder_line_percent(depenses_menages_transports[[u'part transports dépenses totales'] +
[u'part carburants dépenses totales']], 1, 0.5)
"""
Explanation: Producing the charts
End of explanation
"""
|
dh7/ML-Tutorial-Notebooks | tf-char-RNN-explained.ipynb | bsd-2-clause | import numpy as np
import tensorflow as tf
"""
Explanation: A minimal Char RNN using TensorFlow
This Jupyter Notebook implements an RNN at the character level and is inspired by the Minimal character-level Vanilla RNN model written by Andrej Karpathy
Decoding is based on this code from Sherjil Ozair
I made some modifications to the original code to accommodate Jupyter; for instance, the original code is split across several files and is optimized to run with parameters from a shell command line.
I added comments, plus some code to test some parts line by line.
Also, I've removed the ability to use LSTM or GRU cells and the embeddings. The results are less impressive than the original code's, but closer to Karpathy's Minimal character-level Vanilla RNN model
Let's dive in :)
Imports
Import needed for Tensorflow
End of explanation
"""
%matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
"""
Explanation: Imports needed for Jupyter
End of explanation
"""
import codecs
import os
import collections
from six.moves import cPickle
from six import text_type
import time
from __future__ import print_function
"""
Explanation: Imports needed for utilities
to load the text and transform it into a vector
End of explanation
"""
class Args():
def __init__(self):
'''data directory containing input.txt'''
self.data_dir = 'data_rnn/tinyshakespeare'
'''directory to store checkpointed models'''
self.save_dir = 'save_vec'
'''size of RNN hidden state'''
self.rnn_size = 128
'''minibatch size'''
self.batch_size = 1 #was 40
'''RNN sequence length'''
self.seq_length = 50
'''number of epochs'''
self.num_epochs = 1 # was 5
'''save frequency'''
self.save_every = 500 # was 500
'''Print frequency'''
self.print_every = 100 # was 100
'''clip gradients at this value'''
self.grad_clip = 5.
'''learning rate'''
self.learning_rate = 0.002 # was ?
'''decay rate for rmsprop'''
self.decay_rate = 0.98 # was 0.97?
"""continue training from saved model at this path. Path must contain files saved by previous training process:
'config.pkl' : configuration;
'chars_vocab.pkl' : vocabulary definitions;
'checkpoint' : paths to model file(s) (created by tf).
Note: this file contains absolute paths, be careful when moving files around;
'model.ckpt-*' : file(s) with model definition (created by tf)
"""
self.init_from = 'save_vec'
#self.init_from = None
'''number of characters to sample'''
self.n = 500
'''prime text'''
self.prime = u' '
"""
Explanation: Args, to define all parameters
The original code uses argparse to manage the args.
Here we define a class instead. Feel free to edit it to try different settings
End of explanation
"""
class TextLoader():
def __init__(self, data_dir, batch_size, seq_length, encoding='utf-8'):
self.data_dir = data_dir
self.batch_size = batch_size
self.seq_length = seq_length
self.encoding = encoding
input_file = os.path.join(data_dir, "input.txt")
vocab_file = os.path.join(data_dir, "vocab.pkl")
tensor_file = os.path.join(data_dir, "data.npy")
if not (os.path.exists(vocab_file) and os.path.exists(tensor_file)):
print("reading text file")
self.preprocess(input_file, vocab_file, tensor_file)
else:
print("loading preprocessed files")
self.load_preprocessed(vocab_file, tensor_file)
self.create_batches()
self.reset_batch_pointer()
def preprocess(self, input_file, vocab_file, tensor_file):
with codecs.open(input_file, "r", encoding=self.encoding) as f:
data = f.read()
counter = collections.Counter(data)
count_pairs = sorted(counter.items(), key=lambda x: -x[1])
self.chars, _ = zip(*count_pairs)
self.vocab_size = len(self.chars)
self.vocab = dict(zip(self.chars, range(len(self.chars))))
with open(vocab_file, 'wb') as f:
cPickle.dump(self.chars, f)
self.tensor = np.array(list(map(self.vocab.get, data)))
np.save(tensor_file, self.tensor)
def load_preprocessed(self, vocab_file, tensor_file):
with open(vocab_file, 'rb') as f:
self.chars = cPickle.load(f)
self.vocab_size = len(self.chars)
self.vocab = dict(zip(self.chars, range(len(self.chars))))
self.tensor = np.load(tensor_file)
self.num_batches = int(self.tensor.size / (self.batch_size *
self.seq_length))
def create_batches(self):
self.num_batches = int(self.tensor.size / (self.batch_size *
self.seq_length))
# When the data (tensor) is too small, let's give them a better error message
if self.num_batches==0:
assert False, "Not enough data. Make seq_length and batch_size small."
self.tensor = self.tensor[:self.num_batches * self.batch_size * self.seq_length]
xdata = self.tensor
ydata = np.copy(self.tensor)
ydata[:-1] = xdata[1:]
ydata[-1] = xdata[0]
self.x_batches = np.split(xdata.reshape(self.batch_size, -1), self.num_batches, 1)
self.y_batches = np.split(ydata.reshape(self.batch_size, -1), self.num_batches, 1)
def vectorize(self, x):
vectorized = np.zeros((len(x), len(x[0]), self.vocab_size))
for i in range(0, len(x)):
for j in range(0, len(x[0])):
vectorized[i][j][x[i][j]] = 1
return vectorized
def next_batch(self):
x, y = self.x_batches[self.pointer], self.y_batches[self.pointer]
self.pointer += 1
x_vectorized = self.vectorize(x)
y_vectorized = self.vectorize(y)
return x_vectorized, y_vectorized
def reset_batch_pointer(self):
self.pointer = 0
"""
Explanation: Load the data
Transforming the original dataset into vectors that a NN can use is always necessary.
This class needs to be replaced if you want to deal with other kinds of data.
This class is able to cache the preprocessed data:
* Check if the data are already processed
* if yes, load the data using Numpy (not tensorflow)
* if not
    * process the data
    * save them using Numpy
Process the data
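The cache-or-compute pattern used here (check for a saved file, else preprocess and save) is generic and worth isolating. A minimal sketch with NumPy, using hypothetical file names:

```python
import os
import tempfile
import numpy as np

def load_or_build(cache_file, build):
    """Return np.load(cache_file) if it exists; else build(), cache, and return.
    cache_file should end in '.npy' (np.save appends the suffix otherwise)."""
    if os.path.exists(cache_file):
        return np.load(cache_file)
    arr = build()
    np.save(cache_file, arr)
    return arr

cache = os.path.join(tempfile.mkdtemp(), "data.npy")
first = load_or_build(cache, lambda: np.arange(5))   # builds and saves
second = load_or_build(cache, lambda: np.arange(5))  # loads from disk
```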
End of explanation
"""
## First we open the file
args = Args()
input_file = os.path.join(args.data_dir, "input.txt")
f = codecs.open(input_file, "r", 'utf-8')
data = f.read()
print (data[0:300])
"""
Explanation: Let's see how preprocessing works:
End of explanation
"""
counter = collections.Counter(data)
print ('histogram of char from the input data file:', counter)
count_pairs = sorted(counter.items(), key=lambda x: -x[1])
print (count_pairs)
chars, _ = zip(*count_pairs)
print ('chars', chars)
vocab_size = len(chars)
print (vocab_size)
vocab = dict(zip(chars, range(len(chars))))
print (vocab)
"""
Explanation: Then we have:
python
counter = collections.Counter(data)
count_pairs = sorted(counter.items(), key=lambda x: -x[1])
chars, _ = zip(*count_pairs)
vocab_size = len(chars)
vocab = dict(zip(chars, range(len(chars))))
Which does the same as this:
python
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
vocab = { ch:i for i,ch in enumerate(chars) }
Let's see the details here:
End of explanation
"""
print (vocab['a'])
"""
Explanation: It can be used to look up a char's ID in vocab
End of explanation
"""
# Karpathy orginal code seems to do the same:
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
vocab = { ch:i for i,ch in enumerate(chars) }
print (vocab)
"""
Explanation: This is equivalent to the following code by Karpathy:
it associates a unique int with every char used in the file.
End of explanation
"""
data_in_array = map(vocab.get, data)
print (len(data_in_array))
print (data_in_array[0:200])
print (data_in_array[0], 'means', data[0], 'which is the first letter in data')
"""
Explanation: Now we have to make a tensor out of the data.
The tensor is built using this:
python
tensor = np.array(list(map(vocab.get, data)))
Let's split the line to see in detail how it works:
End of explanation
"""
tensor = np.array(data_in_array)
"""
Explanation: Then we create a numpy array out of it!
End of explanation
"""
data_loader = TextLoader(args.data_dir, args.batch_size, args.seq_length)
data_loader.create_batches()
x, y = data_loader.next_batch()
print ('x and y are matrix ', len(x), 'x', len(x[0]) )
print ('there are', len(x), 'batch that contains', len(x[0]), 'vector that have a size of', len(x[0][0]))
print ('x[0] is the first batch of input:')
print (x[0])
print ('x[0][0] is the first char:')
print (x[0][0])
print ('y[0][0] is the first batch of expected char:')
print (y[0][0])
print ('y[0] is x[0] shifted by one, in other words: y[0][x] == x[0][x+1]')
print ('y[0][10] ==', y[0][10])
print ('x[0][11] ==', x[0][11])
"""
Explanation: Let's see how batching works:
Here is a reminder about the create_batches function
```python
def create_batches(self):
self.num_batches = int(self.tensor.size / (self.batch_size *
self.seq_length))
        # When the data (tensor) is too small, let's give them a better error message
if self.num_batches==0:
assert False, "Not enough data. Make seq_length and batch_size small."
self.tensor = self.tensor[:self.num_batches * self.batch_size * self.seq_length]
xdata = self.tensor
ydata = np.copy(self.tensor)
ydata[:-1] = xdata[1:]
ydata[-1] = xdata[0]
self.x_batches = np.split(xdata.reshape(self.batch_size, -1), self.num_batches, 1)
self.y_batches = np.split(ydata.reshape(self.batch_size, -1), self.num_batches, 1)
```
Let's try!
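The key trick in create_batches is that the targets are just the inputs shifted left by one position, with wrap-around: `ydata[:-1] = xdata[1:]` and `ydata[-1] = xdata[0]`. On a toy tensor this is simply a left rotation:

```python
import numpy as np

xdata = np.array([10, 11, 12, 13, 14])
ydata = np.copy(xdata)
ydata[:-1] = xdata[1:]   # each target is the next input char...
ydata[-1] = xdata[0]     # ...and the last one wraps to the first
assert (ydata == np.roll(xdata, -1)).all()  # same as a left rotation
```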
End of explanation
"""
class Model():
def __init__(self, args, infer=False):
self.args = args
if infer:
'''Infer is true when the model is used for sampling'''
args.seq_length = 1
hidden_size = args.rnn_size
vocab_size = args.vocab_size
# define place holder to for the input data and the target.
self.input_data = tf.placeholder(tf.float32, [args.batch_size, args.seq_length, vocab_size], name='input_data')
self.target_data = tf.placeholder(tf.float32, [args.batch_size, args.seq_length, vocab_size], name='target_data')
# define the input xs
one_batch_input = tf.squeeze(tf.slice(self.input_data, [0, 0, 0], [1, args.seq_length, vocab_size]),[0])
xs = tf.split(0, args.seq_length, one_batch_input)
# define the target
one_batch_target = tf.squeeze(tf.slice(self.target_data, [0, 0, 0], [1, args.seq_length, vocab_size]),[0])
targets = tf.split(0, args.seq_length, one_batch_target)
#initial_state
self.initial_state = tf.zeros((hidden_size,1))
#last_state = tf.placeholder(tf.float32, (hidden_size, 1))
# model parameters
Wxh = tf.Variable(tf.random_uniform((hidden_size, vocab_size))*0.01, name='Wxh') # input to hidden
Whh = tf.Variable(tf.random_uniform((hidden_size, hidden_size))*0.01, name='Whh') # hidden to hidden
Why = tf.Variable(tf.random_uniform((vocab_size, hidden_size))*0.01, name='Why') # hidden to output
bh = tf.Variable(tf.zeros((hidden_size, 1)), name='bh') # hidden bias
by = tf.Variable(tf.zeros((vocab_size, 1)), name='by') # output bias
loss = tf.zeros([1], name='loss')
hs, ys, ps = {}, {}, {}
hs[-1] = self.initial_state
# forward pass
for t in xrange(args.seq_length):
xs_t = tf.transpose(xs[t])
targets_t = tf.transpose(targets[t])
hs[t] = tf.tanh(tf.matmul(Wxh, xs_t) + tf.matmul(Whh, hs[t-1]) + bh) # hidden state
ys[t] = tf.matmul(Why, hs[t]) + by # unnormalized log probabilities for next chars
ps[t] = tf.exp(ys[t]) / tf.reduce_sum(tf.exp(ys[t])) # probabilities for next chars
loss += -tf.log(tf.reduce_sum(tf.mul(ps[t], targets_t))) # softmax (cross-entropy loss)
        self.probs = ps[t]
self.cost = loss / args.batch_size / args.seq_length
self.final_state = hs[args.seq_length-1]
self.lr = tf.Variable(0.0, trainable=False, name='learning_rate')
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(self.cost, tvars),
args.grad_clip)
optimizer = tf.train.AdamOptimizer(self.lr)
self.train_op = optimizer.apply_gradients(zip(grads, tvars))
def sample(self, sess, chars, vocab, num=200, prime='The '):
state = model.initial_state.eval()
for char in prime[:-1]:
x = np.zeros((1,1, 65))
x[0,0, vocab[char]] = 1
feed = {self.input_data: x, self.initial_state:state}
[state] = sess.run([self.final_state], feed)
def weighted_pick(weights):
t = np.cumsum(weights)
s = np.sum(weights)
return(int(np.searchsorted(t, np.random.rand(1)*s)))
ret = prime
char = prime[-1]
for n in range(num):
x = np.zeros((1,1, 65))
x[0,0, vocab[char]] = 1
feed = {self.input_data: x, self.initial_state:state}
[probs, state] = sess.run([self.probs, self.final_state], feed)
#print ('p', probs.ravel())
#print ('state', state.ravel())
sample = weighted_pick(probs)
#print ('sample', sample)
pred = chars[sample]
ret += pred
char = pred
return ret
def inspect(self, draw=False):
for var in tf.all_variables():
if var in tf.trainable_variables():
print ('t', var.name, var.eval().shape)
if draw:
plt.figure(figsize=(1,1))
plt.figimage(var.eval())
plt.show()
else:
print ('nt', var.name, var.eval().shape)
"""
Explanation: The Model
End of explanation
"""
tf.reset_default_graph()
args = Args()
data_loader = TextLoader(args.data_dir, args.batch_size, args.seq_length)
args.vocab_size = data_loader.vocab_size
print (args.vocab_size)
model = Model(args)
print ("model created")
# Open a session to inspect the model
with tf.Session() as sess:
tf.initialize_all_variables().run()
print('All variable initialized')
model.inspect()
'''
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(args.save_dir)
print (ckpt)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(sess, ckpt.model_checkpoint_path)
model.inspect()
plt.figure(figsize=(1,1))
plt.figimage(model.vectorize.eval())
plt.show()'''
"""
Explanation: Inspect the model variables
Looking at the shape of each variable can help to understand the flow.
't' as a prefix means trainable
'nt' as a prefix means not trainable
End of explanation
"""
# this code from:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# write the graph to help visualizing it
model_fn = 'model.pb'
tf.train.write_graph(sess.graph.as_graph_def(),'.', model_fn, as_text=False)
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their internal structure.
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
#show_graph(tmp_def)
"""
Explanation: Visualize the graph
The following code came from the deepdream jupyter tutorial
It allows drawing the graph in Jupyter. It looks cool, but I'm not sure it is useful.
End of explanation
"""
args = Args()
data_loader = TextLoader(args.data_dir, args.batch_size, args.seq_length)
args.vocab_size = data_loader.vocab_size
# check compatibility if training is continued from previously saved model
if args.init_from is not None:
print ("need to load file from", args.init_from)
# check if all necessary files exist
assert os.path.isdir(args.init_from)," %s must be a a path" % args.init_from
assert os.path.isfile(os.path.join(args.init_from,"config.pkl")),"config.pkl file does not exist in path %s"%args.init_from
    assert os.path.isfile(os.path.join(args.init_from,"chars_vocab.pkl")),"chars_vocab.pkl file does not exist in path %s" % args.init_from
ckpt = tf.train.get_checkpoint_state(args.init_from)
assert ckpt,"No checkpoint found"
assert ckpt.model_checkpoint_path,"No model path found in checkpoint"
# open old config and check if models are compatible
with open(os.path.join(args.init_from, 'config.pkl')) as f:
saved_model_args = cPickle.load(f)
print (saved_model_args)
need_be_same=["model","rnn_size","seq_length"]
for checkme in need_be_same:
assert vars(saved_model_args)[checkme]==vars(args)[checkme],"Command line argument and saved model disagree on '%s' "%checkme
# open saved vocab/dict and check if vocabs/dicts are compatible
with open(os.path.join(args.init_from, 'chars_vocab.pkl')) as f:
saved_chars, saved_vocab = cPickle.load(f)
        assert saved_chars==data_loader.chars, "Data and loaded model disagree on character set!"
        assert saved_vocab==data_loader.vocab, "Data and loaded model disagree on dictionary mappings!"
print ("config loaded")
with open(os.path.join(args.save_dir, 'config.pkl'), 'wb') as f:
cPickle.dump(args, f)
with open(os.path.join(args.save_dir, 'chars_vocab.pkl'), 'wb') as f:
cPickle.dump((data_loader.chars, data_loader.vocab), f)
"""
Explanation: Trainning
Loading the data and process them if needed
End of explanation
"""
print (args.print_every)
tf.reset_default_graph()
model = Model(args)
print ("model created")
cost_optimisation = []
with tf.Session() as sess:
tf.initialize_all_variables().run()
print ("variable initialized")
saver = tf.train.Saver(tf.all_variables())
# restore model
if args.init_from is not None:
saver.restore(sess, ckpt.model_checkpoint_path)
print ("model restored")
for e in range(args.num_epochs):
sess.run(tf.assign(model.lr, args.learning_rate * (args.decay_rate ** e)))
data_loader.reset_batch_pointer()
state = model.initial_state.eval()
for b in range(data_loader.num_batches):
start = time.time()
# Get learning data
x, y = data_loader.next_batch()
# Create the structure for the learning data
feed = {model.input_data: x, model.target_data: y, model.initial_state: state}
# Run a session using train_op
[train_loss], state, _ = sess.run([model.cost, model.final_state, model.train_op], feed)
end = time.time()
if (e * data_loader.num_batches + b) % args.print_every == 0:
cost_optimisation.append(train_loss)
print("{}/{} (epoch {}), train_loss = {:.6f}, time/batch = {:.3f}" \
.format(e * data_loader.num_batches + b,
args.num_epochs * data_loader.num_batches,
e, train_loss, end - start))
if (e * data_loader.num_batches + b) % args.save_every == 0\
or (e==args.num_epochs-1 and b == data_loader.num_batches-1): # save for the last result
checkpoint_path = os.path.join(args.save_dir, 'model.ckpt')
saver.save(sess, checkpoint_path, global_step = e * data_loader.num_batches + b)
print("model saved to {}".format(checkpoint_path))
plt.figure(figsize=(12,5))
plt.plot(range(len(cost_optimisation)), cost_optimisation, label='cost')
plt.legend()
plt.show()
"""
Explanation: Instantiate the model and train it.
End of explanation
"""
tf.reset_default_graph()
model_fn = 'model.pb'
with open(os.path.join(args.save_dir, 'config.pkl'), 'rb') as f:
saved_args = cPickle.load(f)
with open(os.path.join(args.save_dir, 'chars_vocab.pkl'), 'rb') as f:
chars, vocab = cPickle.load(f)
model = Model(saved_args, True) # True to generate the model in sampling mode
with tf.Session() as sess:
tf.initialize_all_variables().run()
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(args.save_dir)
print (ckpt)
model.inspect(draw=True)
"""
Explanation: Check Learning
End of explanation
"""
with tf.Session() as sess:
tf.initialize_all_variables().run()
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(args.save_dir)
print (ckpt)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(sess, ckpt.model_checkpoint_path)
print(model.sample(sess, chars, vocab, args.n, args.prime))
"""
Explanation: Sampling
End of explanation
"""
|
dipanjank/ml | data_analysis/haberman_uci.ipynb | gpl-3.0 | %pylab inline
pylab.style.use('ggplot')
import pandas as pd
import numpy as np
import seaborn as sns
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data'
data_df = pd.read_csv(url, header=None)
data_df.head()
data_df.columns = ['age', 'year_of_op', 'n_nodes', 'survival']
data_df.head()
"""
Explanation: <h1 align="center">Haberman's Survival</h1>
Relevant Information:
The dataset contains cases from a study that was conducted between
1958 and 1970 at the University of Chicago's Billings Hospital on
the survival of patients who had undergone surgery for breast
cancer.
Number of Instances: 306
Number of Attributes: 4 (including the class attribute)
Attribute Information:
Age of patient at time of operation (numerical)
Patient's year of operation (year - 1900, numerical)
Number of positive axillary nodes detected (numerical)
Survival status (class attribute)
* 1 = the patient survived 5 years or longer
* 2 = the patient died within 5 years
Getting the Data
End of explanation
"""
counts = data_df['survival'].value_counts()
counts.plot(kind='bar')
features = data_df.drop('survival', axis=1)
labels = data_df['survival']
"""
Explanation: Class Imbalance
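The bar plot above makes the imbalance visible (this dataset has 225 "survived" vs 81 "died" labels). One consequence: a classifier that always predicts the majority class already scores well on plain accuracy, which is why the macro-averaged F1 used in this notebook is a more honest metric. A sketch of that baseline:

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most common label."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

# Haberman has 225 "survived" vs 81 "died" labels:
majority_baseline_accuracy([1] * 225 + [2] * 81)  # -> 225/306, about 0.735
```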
End of explanation
"""
gp = features.groupby(labels)
f_means = gp.mean()
f_means.plot(kind='bar')
"""
Explanation: Feature Means for Both Classes
End of explanation
"""
gp = features.groupby(labels)
f_std = gp.std()
f_std.plot(kind='bar')
"""
Explanation: Feature Stddev for Both Classes
End of explanation
"""
for colname in data_df.columns.drop('survival'):
fg = sns.FacetGrid(col='survival', data=data_df)
fg = fg.map(pylab.hist, colname)
"""
Explanation: Distribution of Features with Classes
End of explanation
"""
from sklearn.feature_selection import f_classif
t_stats, p_vals = f_classif(features, labels)
f_results = pd.DataFrame.from_dict({'t_stats': t_stats, 'p_vals': p_vals})
f_results.index = features.columns.copy()
f_results.plot(kind='bar', subplots=True)
"""
Explanation: Feature Importances
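Under the hood, f_classif runs a one-way ANOVA per feature: it splits the feature's values by class and compares between-class variance to within-class variance; a large F means the class means differ a lot relative to the within-class spread. A from-scratch sketch of the statistic (anova_f is a hypothetical helper, not the sklearn implementation):

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F = (between-group MS) / (within-group MS)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # between-group
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)       # within-group
    return (ssb / (k - 1)) / (ssw / (n - k))

# Identical groups -> no between-group variance -> F == 0
anova_f([[1, 2, 3], [1, 2, 3]])  # -> 0.0
```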
End of explanation
"""
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
estimator = DecisionTreeClassifier()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=12345)
dt_scores = cross_val_score(estimator, features, labels, cv=cv, scoring='f1_macro')
dt_scores = pd.Series(dt_scores)
dt_scores.plot(kind='bar')
"""
Explanation: Model 1: Decision Tree Classifier
End of explanation
"""
from sklearn.metrics import f1_score
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=12345)
def train_test_scores(train_index, test_index):
"""
* Fit a DecisionTreeClassifier model on train data.
* Calculate f1_score(s1) on predictions based on training data.
* Calculate f1_score(s2) on predictions based on test data.
* Return the 2-tuple(s1, s2)
"""
train_features, train_labels = features.iloc[train_index], labels.iloc[train_index]
test_features, test_labels = features.iloc[test_index], labels.iloc[test_index]
model = DecisionTreeClassifier().fit(train_features, train_labels)
train_score = f1_score(train_labels, model.predict(train_features), average='macro')
test_score = f1_score(test_labels, model.predict(test_features), average='macro')
return (train_score, test_score)
scores = [train_test_scores(idx1, idx2)
for idx1, idx2
in cv.split(features, np.ones(len(features)))]
scores = pd.DataFrame.from_records(scores, columns=['train_scores', 'test_scores'])
scores.plot(kind='bar')
"""
Explanation: Visualizing Overfitting in the Decision Tree
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
estimator = GaussianNB()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=12345)
nb_scores_1 = cross_val_score(estimator, features, labels, cv=cv, scoring='f1_macro')
nb_scores_1 = pd.Series(nb_scores_1)
nb_scores_1.plot(kind='bar')
"""
Explanation: Model 2: Gaussian Naive Bayes
End of explanation
"""
combined_scores = pd.concat([nb_scores_1, dt_scores], axis=1, keys=['GaussianNB', 'DecisionTree'])
combined_scores.plot(kind='bar')
combined_scores.mean(axis=0)
"""
Explanation: Compare GaussianNB Scores with DT Scores
End of explanation
"""
estimator = GaussianNB(priors=[0.5, 0.5])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=12345)
nb_scores_2 = cross_val_score(estimator, features, labels, cv=cv, scoring='f1_macro')
nb_scores_2 = pd.Series(nb_scores_2)
nb_scores_2.plot(kind='bar')
nb_scores_2.mean()
"""
Explanation: Model 3: GaussianNB with Equal Priors
End of explanation
"""
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=12345)
def calc_scores(train_index, test_index):
"""
    Return a `DataFrame` with two columns, containing the predictions of two models:
    a `GaussianNB` with equal priors and a `GaussianNB` with sample-proportional priors.
"""
train_features, train_labels = features.iloc[train_index], labels.iloc[train_index]
test_features, test_labels = features.iloc[test_index], labels.iloc[test_index]
model_1 = GaussianNB(priors=[0.5, 0.5])
model_2 = GaussianNB()
pred_1 = model_1.fit(train_features, train_labels).predict(test_features)
pred_2 = model_2.fit(train_features, train_labels).predict(test_features)
return pd.DataFrame({'equal_priors': pred_1, 'proportional_priors': pred_2})
# Append all the runs from cross-validation (stratify on the true labels)
runs = [calc_scores(t1, t2) for t1, t2 in cv.split(features, labels)]
runs = pd.concat(runs, axis=0)
"""
Explanation: McNemar's Test
We have two models
GaussianNB with non-equal priors
GaussianNB with equal priors
Which one is the better model?
End of explanation
"""
from statsmodels.sandbox.stats.runs import mcnemar
stat, p_val = mcnemar(runs['equal_priors'], runs['proportional_priors'])
print('McNemar test statistic: {:.2f}, p_val: {:.4f}'.format(stat, p_val))
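The chi-square approximation behind McNemar's test can also be sketched by hand from the two models' disagreement counts. The counts below are hypothetical, and note that statsmodels may use an exact binomial test instead for small samples:

```python
def mcnemar_chi2(n_01, n_10):
    """Chi-square McNemar statistic with continuity correction.

    n_01: cases model A got right and model B got wrong
    n_10: cases model A got wrong and model B got right
    """
    return (abs(n_01 - n_10) - 1.0) ** 2 / (n_01 + n_10)

# Hypothetical disagreement counts, for illustration only
print(mcnemar_chi2(15, 5))  # -> 4.05
```

Only the discordant pairs enter the statistic; samples both models classify identically carry no information about which model is better.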
"""
Explanation: Now we can apply McNemar's test:
End of explanation
"""
|
rasbt/algorithms_in_ipython_notebooks | ipython_nbs/essentials/greedy-algorithm-intro.ipynb | gpl-3.0 | def coinchanger(cents, denominations=[1, 5, 10, 20]):
coins = {d: 0 for d in denominations}
for c in sorted(coins.keys(), reverse=True):
coins[c] += cents // c
cents = cents % c
if not cents:
total_coins = sum([i for i in coins.values()])
return sorted(coins.items(), reverse=True), total_coins
"""
Explanation: Introduction to Greedy Algorithms
The subfamily of Greedy Algorithms is one of the main paradigms of algorithmic problem solving, next to Dynamic Programming and Divide & Conquer algorithms. The main goal of greedy algorithms is to provide an efficient alternative to computationally expensive, often infeasible brute-force methods such as exhaustive search.
The main outline of a greedy algorithm consists of 3 steps:
make a greedy choice
reduce the problem to a subproblem (a smaller problem of the similar type as the original one)
repeat
So, greedy algorithms are essentially a problem-solving heuristic: an iterative process of tackling a problem in multiple stages while making a locally optimal choice at each stage. In practice, and depending on the problem at hand, making this series of locally optimal ("greedy") choices does not necessarily lead to a globally optimal solution.
Example 1: Coin Changer
To look at a first, naive example of a greedy algorithm, let's implement a simple coin changing machine. Given a money value in cents, we want to return the minimum possible number of coins, whereas the possible denominations are 1, 5, 10, and 20 cents.
End of explanation
"""
coinchanger(cents=100)
"""
Explanation: The function above creates a dictionary, coins, which tracks the number of coins of each denomination to be returned. Then, we iterate through the denominations in descending order of their value. Now, in a "greedy" fashion, we count the highest possible number of coins from the largest denomination in the first step. We repeat this process until we reach the smallest denomination or the number of remaining cents reaches 0.
End of explanation
"""
coinchanger(cents=5)
coinchanger(cents=4)
coinchanger(cents=23)
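The greedy choice happens to be optimal for this particular denomination set, but that is not true of arbitrary denominations. A quick sketch with the hypothetical denominations [1, 3, 4] shows the greedy strategy using more coins than necessary:

```python
def greedy_coin_count(cents, denominations):
    # count coins greedily, largest denomination first
    coins = 0
    for d in sorted(denominations, reverse=True):
        coins += cents // d
        cents %= d
    return coins

# For 6 cents: greedy picks 4 + 1 + 1 (3 coins), but the optimum is 3 + 3 (2 coins)
print(greedy_coin_count(6, [1, 3, 4]))  # -> 3
```

Denomination systems for which the greedy strategy is always optimal (such as the one used above) are called canonical coin systems.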
"""
Explanation: Calling our coinchanger function with 100 cents as input, it returns five 20-cent coins, the smallest possible number of coins that can be returned in this case. Below are some more examples:
End of explanation
"""
def knapsack_01(capacity, weights, values):
val_by_weight = [value / weight
for value, weight in zip(values, weights)]
sort_idx = [i[0] for i in sorted(enumerate(val_by_weight),
key=lambda x:x[1],
reverse=True)]
knapsack = [0 for _ in values]
total_weight, total_value = 0, 0
for i in sort_idx:
if total_weight + weights[i] <= capacity:
knapsack[i] = 1
total_weight += weights[i]
total_value += values[i]
if total_weight == capacity:
break
return knapsack, total_weight, total_value
"""
Explanation: Example 2: Knapsack
Now, let us take a look at a classic, combinatorial optimization problem, the so-called "knapsack" problem. Here, we can think of a "knapsack" as a rucksack, and our goal is to fill it with items so that the rucksack's contents have the highest possible value. Of course, the knapsack has a certain weight capacity, and each item is associated with a certain value and a weight. In other words, we want to maximize the value of the knapsack subject to the constraint that we don't exceed its weight capacity.
As trivial as it sounds, the knapsack problem is still one of the most popular algorithmic problems in the modern computer science area. There are numerous applications of knapsack problems, and to provide an intuitive real-world example: We could think of sports betting or daily fantasy soccer predictions as a knapsack problem, where we want to construct a squad of players with the highest possible points to salary ratio.
0-1 Knapsack
Let us take a look at probably the simplest variation of the knapsack problem, the 0-1 knapsack, and tackle it using a "greedy" strategy. In the 0-1 knapsack, we have a given set of items, $i_1, i_2, ..., i_m$, that we can use to fill the knapsack. Again, the knapsack has a fixed capacity, and the items are associated with weights, $w_1, w_2, ..., w_m$, and values $v_1, v_2, ..., v_m$. While our goal is still to pack the knapsack with a combination of items so that it carries the highest possible value, the 0-1 knapsack variation comes with the constraint that we can carry at most 1 copy of each item. Note that the greedy implementation below runs in $O(n \log n)$ time, dominated by sorting the items by their value/weight ratio; an exact dynamic-programming solution runs in $O(nW)$ time, where $n$ is the number of items and $W$ is the maximum capacity of the knapsack.
For example, if we are given 3 items with weights $[w_1: 20, w_2: 50, w_3: 30]$ and values
$[v_1: 60, v_2: 100, v_3: 120]$, a knapsack with capacity 50 may carry 1 copy of item 1 and 1 copy of item 3 to maximize its value (180), in contrast to just carrying 1 copy of item 2 (value: 100).
Let's see how one "greedy" code implementation may look like:
End of explanation
"""
weights = [20, 50, 30]
values = [60, 100, 120]
knapsack_01(capacity=50, weights=weights, values=values)
"""
Explanation: We start by creating an array val_by_weight, which contains the value/weight ratios of the items. Next, we create an array of index positions by sorting the value/weight array; here, we can think of the item with the highest value/weight ratio as the item that gives us the "best bang for the buck." Using a for-loop, we then iterate over sort_idx and check whether a given item fits in our knapsack, that is, whether we can carry it without exceeding the knapsack's capacity. After we have checked all items, or if we reach the capacity limit prematurely, we exit the for-loop and return the contents of the knapsack as well as its current weight and total value, which we have been tracking all along.
A concrete example:
End of explanation
"""
weights = [40, 30, 20]
values = [70, 40, 35]
knapsack_01(capacity=50, weights=weights, values=values)
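Before diagnosing this result, it helps to know the true optimum. An exact dynamic-programming solution (a sketch added here for comparison; it is not a greedy algorithm) computes it in $O(nW)$ time:

```python
def knapsack_01_dp(capacity, weights, values):
    # best[w] = highest value achievable with total weight <= w
    best = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate weights downwards so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]

print(knapsack_01_dp(50, [40, 30, 20], [70, 40, 35]))  # -> 75
```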
"""
Explanation: Running the knapsack_01 function on the example input above returns a knapsack containing item 1 and item 3, with a total weight equal to its maximum capacity and a value of 180.
Let us take a look at another example:
End of explanation
"""
def knapsack_fract(capacity, weights, values):
val_by_weight = [value / weight
for value, weight in zip(values, weights)]
sort_idx = [i[0] for i in sorted(enumerate(val_by_weight),
key=lambda x:x[1],
reverse=True)]
knapsack = [0 for _ in values]
total_weight, total_value = 0, 0
for i in sort_idx:
if total_weight + weights[i] <= capacity:
knapsack[i] = 1
total_weight += weights[i]
total_value += values[i]
else:
allowed = capacity - total_weight
frac = allowed / weights[i]
knapsack[i] = round(frac, 4)
total_weight += allowed
total_value += frac * values[i]
if total_weight == capacity:
break
return knapsack, total_weight, round(total_value, 4)
"""
Explanation: Notice the problem here? Our greedy algorithm suggests packing item 1 with weight 40 and a value of 70. Now, our knapsack can't pack any of the other items (weights 20 and 30), without exceeding its capacity. This is an example of where a greedy strategy leads to a globally suboptimal solution. An optimal solution would be to take 1 copy of item 2 and 1 copy of item 3, so that our knapsack carries a weight of 50 with a value of 75.
Fractional Knapsack
Now, let's implement a slightly different flavor of the knapsack problem, the fractional knapsack, for which the greedy strategy is guaranteed to find the optimal solution. Here, the rules are slightly different from the 0-1 knapsack that we implemented earlier. Instead of just including or excluding an item in the knapsack, we can also add fractions $f$ of an item, subject to the constraint $0 \leq f \leq 1$.
Now, let's take our 0-1 knapsack implementation as a template and make some slight modifications to come up with a fractional knapsack algorithm:
End of explanation
"""
weights = [20, 50, 30]
values = [60, 100, 120]
knapsack_fract(capacity=50, weights=weights, values=values)
"""
Explanation: Let's give it a whirl on a simple example first:
End of explanation
"""
weights = [30]
values = [500]
knapsack_fract(capacity=10, weights=weights, values=values)
"""
Explanation: The solution is an optimal solution, and we notice that it is the same as the one we got by using the 0-1 knapsack previously.
To demonstrate the difference between the 0-1 knapsack and the fractional knapsack, let's do a second example:
End of explanation
"""
def min_points(intervals):
s_ints = sorted(intervals, key=lambda x: x[1])
points = [s_ints[0][-1]]
for interv in s_ints:
        if not (interv[0] <= points[-1] <= interv[-1]):
points.append(interv[-1])
return points
pts = [[2, 5], [1, 3], [3, 6]]
min_points(pts)
pts = [[4, 7], [1, 3], [2, 5], [5, 6]]
min_points(pts)
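On small inputs, we can verify the greedy solution's minimality with a brute-force sketch that searches all subsets of interval endpoints (an optimal cover can always be built from endpoints alone):

```python
from itertools import combinations

def brute_force_min_points(intervals):
    # candidate points: interval endpoints suffice for an optimal cover
    candidates = sorted({p for iv in intervals for p in iv})
    for k in range(1, len(candidates) + 1):
        for pts in combinations(candidates, k):
            if all(any(lo <= p <= hi for p in pts) for lo, hi in intervals):
                return list(pts)

# -> a 2-point cover, matching the size of the greedy solution above
print(brute_force_min_points([[4, 7], [1, 3], [2, 5], [5, 6]]))
```

The brute force may return a different set of points than the greedy algorithm, but the number of points agrees, which is what matters for optimality.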
"""
Explanation: Example 3: Point-Cover-Interval Problem
The classic Point-Cover-Interval problem is another example that is well suited for demonstrating greedy algorithms. Here, we are given a set of Intervals L, and we want to find the minimum set of points so that each interval is covered at least once by a given point as illustrated in the example below:
Our greedy strategy, which finds the optimal solution for this problem, can be as follows:
sort intervals in increasing order by the value of their endpoints
for interval in interval-set:
if interval is not yet covered:
add interval-endpoint to the set of points
End of explanation
"""
def max_summands(num):
summands = []
sum_summands = 0
next_int = 1
while sum_summands + next_int <= num:
sum_summands += next_int
summands.append(next_int)
next_int += 1
summands[-1] += num - sum_summands
return summands
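A quick sanity check (the function is restated inside the snippet so it runs standalone): the returned summands must be pairwise distinct and sum to the input.

```python
def max_summands(num):
    # restated from above so this check is self-contained
    summands, sum_summands, next_int = [], 0, 1
    while sum_summands + next_int <= num:
        sum_summands += next_int
        summands.append(next_int)
        next_int += 1
    # fold the remainder into the last summand, keeping all summands distinct
    summands[-1] += num - sum_summands
    return summands

for n in (8, 10, 100):
    s = max_summands(n)
    assert sum(s) == n and len(set(s)) == len(s)

print(max_summands(8))  # -> [1, 2, 5]
```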
"""
Explanation: Example 4: Pairwise Distinct Summands
In the pairwise distinct summands problem, we are given an integer $n$, and our goal is to find the maximum number of unique summands. For example, given an integer n=8, the maximum number of unique summands would be [1 + 2 + 5] = 3.
Implemented in code using a greedy strategy, it looks as follows:
End of explanation
"""
|
smharper/openmc | examples/jupyter/search.ipynb | mit | # Initialize third-party libraries and the OpenMC Python API
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.model
%matplotlib inline
"""
Explanation: This Notebook illustrates the usage of the OpenMC Python API's generic eigenvalue search capability. In this Notebook, we will do a critical boron concentration search of a typical PWR pin cell.
To use the search functionality, we must create a function which creates our model according to the input parameter we wish to search for (in this case, the boron concentration).
This notebook will first create that function, and then, run the search.
End of explanation
"""
# Create the model. `ppm_Boron` will be the parametric variable.
def build_model(ppm_Boron):
# Create the pin materials
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.741)
water.add_element('H', 2.)
water.add_element('O', 1.)
# Include the amount of boron in the water based on the ppm,
# neglecting the other constituents of boric acid
water.add_element('B', ppm_Boron * 1e-6)
# Instantiate a Materials object
materials = openmc.Materials([fuel, zircaloy, water])
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Finish with the settings file
settings = openmc.Settings()
settings.batches = 300
settings.inactive = 20
settings.particles = 1000
settings.run_mode = 'eigenvalue'
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.source.Source(space=uniform_dist)
    # We don't need a tallies file, so don't waste time on disk input/output
settings.output = {'tallies': False}
model = openmc.model.Model(geometry, materials, settings)
return model
"""
Explanation: Create Parametrized Model
To perform the search we will use the openmc.search_for_keff function. This function requires that a separate function be defined which creates a parametrized model to analyze. This model is required to be stored in an openmc.model.Model object. The first parameter of this function will be modified during the search process for our critical eigenvalue.
Our model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized.
End of explanation
"""
# Perform the search
crit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],
tol=1e-2, bracketed_method='bisect',
print_iterations=True)
print('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm))
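The search above is driven by bisection. Its core idea can be sketched without OpenMC by standing in a hypothetical, monotonically decreasing keff(ppm) curve — all numbers here are illustrative, not a transport solve:

```python
def fake_keff(ppm):
    # hypothetical stand-in for a transport solve: keff falls as boron rises
    return 1.3 - 2e-4 * ppm

def bisect_for_keff(keff_fn, lo, hi, tol=1e-2, max_iter=100):
    # assumes keff(lo) > 1.0 > keff(hi), i.e. the bracket straddles criticality
    mid = 0.5 * (lo + hi)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        k = keff_fn(mid)
        if abs(k - 1.0) < tol:
            break
        if k > 1.0:
            lo = mid   # still supercritical: more absorber needed
        else:
            hi = mid
    return mid

crit = bisect_for_keff(fake_keff, 1000.0, 2500.0)
print(round(crit))  # -> 1469, near the 1500 ppm crossing of fake_keff
```

In the real search, each call to the model function triggers a full Monte Carlo run, which is why a loose tolerance keeps the number of iterations manageable.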
"""
Explanation: Search for the Critical Boron Concentration
To perform the search we simply call the openmc.search_for_keff function and pass in the relevant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical Boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use.
Instead of the bracketed range we could have used a single initial guess, but have elected not to in this example. Finally, due to the high noise inherent in using as few particle histories as this example does, our tolerance on the final keff value will be rather large (1e-2), and a bisection method will be used for the search.
End of explanation
"""
plt.figure(figsize=(8, 4.5))
plt.title('Eigenvalue versus Boron Concentration')
# Create a scatter plot using the mean value of keff
plt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])
plt.xlabel('Boron Concentration [ppm]')
plt.ylabel('Eigenvalue')
plt.show()
"""
Explanation: Finally, the openmc.search_for_keff function also provided us with lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration.
End of explanation
"""
|
minxuancao/shogun | doc/ipython-notebooks/classification/SupportVectorMachines.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import matplotlib.patches as patches
#To import all shogun classes
import modshogun as sg
import numpy as np
#Generate some random data
X = 2 * np.random.randn(10,2)
traindata=np.r_[X + 3, X + 7].T
feats_train=sg.RealFeatures(traindata)
trainlab=np.concatenate((np.ones(10),-np.ones(10)))
labels=sg.BinaryLabels(trainlab)
# Plot the training data
plt.figure(figsize=(6,6))
plt.gray()
_=plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.title("Training Data")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
p1 = patches.Rectangle((0, 0), 1, 1, fc="k")
p2 = patches.Rectangle((0, 0), 1, 1, fc="w")
plt.legend((p1, p2), ["Class 1", "Class 2"], loc=2)
plt.gray()
"""
Explanation: Classification with Support Vector Machines
by Soeren Sonnenburg | Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
This notebook illustrates how to train a <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machine</a> (SVM) <a href="http://en.wikipedia.org/wiki/Statistical_classification">classifier</a> using Shogun. The <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CLibSVM.html">CLibSVM</a> class of Shogun is used to do binary classification. Multiclass classification is also demonstrated using CGMNPSVM.
Introduction
Linear Support Vector Machines
Prediction using Linear SVM
SVMs using kernels
Kernels in Shogun
Prediction using kernel based SVM
Probabilistic Outputs using SVM
Soft margins and slack variables
Binary classification using different kernels
Kernel Normalizers
Multiclass classification using SVM
Introduction
Support Vector Machines (SVM's) are a learning method used for binary classification. The basic idea is to find a hyperplane which separates the data into its two classes. However, since example data is often not linearly separable, SVMs operate in a kernel induced feature space, i.e., data is embedded into a higher dimensional space where it is linearly separable.
Linear Support Vector Machines
In a supervised learning problem, we are given a labeled set of input-output pairs $\mathcal{D}=\{(x_i,y_i)\}^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in\{-1,+1\}$. SVM is a binary classifier that tries to separate objects of different classes by finding a (hyper-)plane such that the margin between the two classes is maximized. A hyperplane in $\mathcal{R}^D$ can be parameterized by a vector $\bf{w}$ and a constant $\text b$ expressed in the equation:$${\bf w}\cdot{\bf x} + \text{b} = 0$$
Given such a hyperplane ($\bf w$,b) that separates the data, the discriminating function is: $$f(x) = \text {sign} ({\bf w}\cdot{\bf x} + {\text b})$$
If the training data are linearly separable, we can select two hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations
$$({\bf w}\cdot{\bf x} + {\text b}) = 1$$
$$({\bf w}\cdot{\bf x} + {\text b}) = -1$$
the distance between these two hyperplanes is $\frac{2}{\|\mathbf{w}\|}$, so we want to minimize $\|\mathbf{w}\|$.
$$
\arg\min_{(\mathbf{w},b)}\frac{1}{2}\|\mathbf{w}\|^2 \qquad\qquad(1)$$
This gives us a hyperplane that maximizes the geometric distance to the closest data points.
As we also have to prevent data points from falling into the margin, we add the following constraint: for each ${i}$ either
$$({\bf w}\cdot{x}_i + {\text b}) \geq 1$$ or
$$({\bf w}\cdot{x}_i + {\text b}) \leq -1$$
which is similar to
$${y_i}({\bf w}\cdot{x}_i + {\text b}) \geq 1 \forall i$$
Lagrange multipliers are used to modify equation $(1)$ and the corresponding dual of the problem can be shown to be:
\begin{eqnarray}
\max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j {\bf x_i} \cdot {\bf x_j}\\
\mbox{s.t.} && \alpha_i\geq 0\\
&& \sum_{i=1}^{N} \alpha_i y_i=0\\
\end{eqnarray}
From the derivation of these equations, it was seen that the optimal hyperplane can be written as:
$$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i. $$
here most $\alpha_i$ turn out to be zero, which means that the solution is a sparse linear combination of the training data.
Prediction using Linear SVM
Now let us see how one can train a linear Support Vector Machine with Shogun. Two-dimensional data (with 2 attributes, say attribute1 and attribute2) is now sampled to demonstrate the classification.
End of explanation
"""
#prameters to svm
#parameter C is described in a later section.
C=1
epsilon=1e-3
svm=sg.LibLinear(C, feats_train, labels)
svm.set_liblinear_solver_type(sg.L2R_L2LOSS_SVC)
svm.set_epsilon(epsilon)
#train
svm.train()
w=svm.get_w()
b=svm.get_bias()
"""
Explanation: Liblinear, a library for large-scale linear learning focusing on SVM, is used to do the classification. It supports different solver types.
End of explanation
"""
#solve for w.x+b=0
x1=np.linspace(-1.0, 11.0, 100)
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=map(solve, x1)
#plot
plt.figure(figsize=(6,6))
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.plot(x1,x2, linewidth=2)
plt.title("Separating hyperplane")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
"""
Explanation: We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The methods get_w() and get_bias() are used to get the necessary values.
End of explanation
"""
size=100
x1_=np.linspace(-5, 15, size)
x2_=np.linspace(-5, 15, size)
x, y=np.meshgrid(x1_, x2_)
#Generate X-Y grid test data
grid=sg.RealFeatures(np.array((np.ravel(x), np.ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#Distance from hyperplane
z=predictions.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
#Class predictions
z=predictions.get_labels().reshape((size, size))
#plot
plt.subplot(122)
plt.title("Separating hyperplane")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
"""
Explanation: The classifier is now applied on a X-Y grid of points to get predictions.
End of explanation
"""
gaussian_kernel=sg.GaussianKernel(feats_train, feats_train, 100)
#Polynomial kernel of degree 2
poly_kernel=sg.PolyKernel(feats_train, feats_train, 2, True)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
"""
Explanation: SVMs using kernels
If the data set is not linearly separable, a non-linear mapping $\Phi:{\bf x} \rightarrow \Phi({\bf x}) \in \mathcal{F} $ is used. This maps the data into a higher-dimensional space where it is linearly separable. Our equation requires only the inner dot products ${\bf x_i}\cdot{\bf x_j}$. The equation can be defined in terms of the inner products $\Phi({\bf x_i}) \cdot \Phi({\bf x_j})$ instead. Since $\Phi({\bf x_i})$ occurs only in dot products with $ \Phi({\bf x_j})$, it is sufficient to know the formula (kernel function): $$K({\bf x_i, x_j} ) = \Phi({\bf x_i}) \cdot \Phi({\bf x_j})$$ without dealing with the mapping directly. The transformed optimisation problem is:
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && \alpha_i\geq 0\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \qquad\qquad(2)\\ \end{eqnarray}
Kernels in Shogun
Shogun provides many options for the above mentioned kernel functions. CKernel is the base class for kernels. Some commonly used kernels :
Gaussian kernel : Popular Gaussian kernel computed as $k({\bf x},{\bf x'})= exp(-\frac{||{\bf x}-{\bf x'}||^2}{\tau})$
Linear kernel : Computes $k({\bf x},{\bf x'})= {\bf x}\cdot {\bf x'}$
Polynomial kernel : Polynomial kernel computed as $k({\bf x},{\bf x'})= ({\bf x}\cdot {\bf x'}+c)^d$
Sigmoid Kernel : Computes $k({\bf x},{\bf x'})=\mbox{tanh}(\gamma {\bf x}\cdot{\bf x'}+c)$
Some of these kernels are initialised below.
End of explanation
"""
plt.jet()
def display_km(kernels, svm):
plt.figure(figsize=(20,6))
plt.suptitle('Kernel matrices for different kernels', fontsize=12)
for i, kernel in enumerate(kernels):
plt.subplot(1, len(kernels), i+1)
plt.title(kernel.get_name())
km=kernel.get_kernel_matrix()
plt.imshow(km, interpolation="nearest")
plt.colorbar()
display_km(kernels, svm)
"""
Explanation: Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the Gaussian and polynomial kernels and block-wise for the linear one. The Gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
End of explanation
"""
C=1
epsilon=1e-3
svm=sg.LibSVM(C, gaussian_kernel, labels)
_=svm.train()
"""
Explanation: Prediction using kernel based SVM
Now we train an SVM with a Gaussian Kernel. We use LibSVM, but we could use any of the other SVMs from Shogun. They all utilize the same kernel framework and so are drop-in replacements.
End of explanation
"""
libsvm_obj=svm.get_objective()
primal_obj, dual_obj=svm.compute_svm_primal_objective(), svm.compute_svm_dual_objective()
print libsvm_obj, primal_obj, dual_obj
"""
Explanation: We could now check a number of properties, like the value of the objective function returned by the particular SVM learning algorithm, or the explicitly computed primal and dual objective functions:
End of explanation
"""
print "duality_gap", dual_obj-primal_obj
"""
Explanation: and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of the convergence quality of the SVM training algorithm. In theory it is 0 at the optimum, and in practice at least close to 0.
End of explanation
"""
out=svm.apply(grid)
z=out.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
z=out.get_labels().reshape((size, size))
plt.subplot(122)
plt.title("Decision boundary")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
"""
Explanation: Let's now apply it on the X-Y grid data and plot the results.
End of explanation
"""
n=10
x1t_=np.linspace(-5, 15, n)
x2t_=np.linspace(-5, 15, n)
xt, yt=np.meshgrid(x1t_, x2t_)
#Generate X-Y grid test data
test_grid=sg.RealFeatures(np.array((np.ravel(xt), np.ravel(yt))))
labels_out=svm.apply(test_grid)
#Get values (Distance from hyperplane)
values=labels_out.get_values()
#Get probabilities
labels_out.scores_to_probabilities()
prob=labels_out.get_values()
#plot
plt.gray()
plt.figure(figsize=(10,6))
p1=plt.scatter(values, prob)
plt.title('Probabilistic outputs')
plt.xlabel('Distance from hyperplane')
plt.ylabel('Probability')
plt.legend([p1], ["Test samples"], loc=2)
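The parametric sigmoid used by scores_to_probabilities() can be sketched directly. Here $a$ and $b$ are hypothetical values, whereas Shogun fits them to the scores by maximum likelihood:

```python
import math

def platt_probability(score, a=-1.0, b=0.0):
    # maps a signed distance from the hyperplane to P(y = +1 | score);
    # a and b are illustrative here -- normally fit to held-out scores
    return 1.0 / (1.0 + math.exp(a * score + b))

for s in (-2.0, 0.0, 2.0):
    print(round(platt_probability(s), 3))  # -> 0.119, 0.5, 0.881
```

With $a < 0$ the probability increases monotonically with the distance from the hyperplane, giving the "S" shape in the plot above.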
"""
Explanation: Probabilistic Outputs
Calibrated probabilities can be generated in addition to class predictions using the scores_to_probabilities() method of BinaryLabels, which implements the method described in [3]. This should only be used in conjunction with an SVM. A parametric sigmoid of the form $$\frac{1}{1+\exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, and $a$ and $b$ are parameters of the sigmoid. This gives us the posterior probabilities $p(y=1|f(x))$.
Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
End of explanation
"""
def plot_sv(C_values):
plt.figure(figsize=(20,6))
plt.suptitle('Soft and hard margins with varying C', fontsize=12)
for i in range(len(C_values)):
plt.subplot(1, len(C_values), i+1)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
svm1=sg.LibSVM(C_values[i], linear_kernel, labels)
svm1.train()
vec1=svm1.get_support_vectors()
X_=[]
Y_=[]
new_labels=[]
for j in vec1:
X_.append(traindata[0][j])
Y_.append(traindata[1][j])
new_labels.append(trainlab[j])
        out1=svm1.apply(grid)
z1=out1.get_labels().reshape((size, size))
plt.jet()
c=plt.pcolor(x1_, x2_, z1)
plt.contour(x1_ , x2_, z1, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(X_, Y_, c=new_labels, s=150)
        plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=20)
plt.title('Support vectors for C=%.2f'%C_values[i])
plt.xlabel('attribute1')
plt.ylabel('attribute2')
C_values=[0.1, 1000]
plot_sv(C_values)
"""
Explanation: Soft margins and slack variables
If there is no clear classification possible using a hyperplane, we need to classify the data as nicely as possible while incorporating the misclassified samples. To do this, the concept of a soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$.
$$
y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
Introducing a linear penalty function leads to
$$\arg\min_{\mathbf{w},\mathbf{\xi}, b } \left(\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i\right)$$
This, in its dual form, leads to a slightly modified version of equation $(2)$:
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && 0\leq\alpha_i\leq C\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \\ \end{eqnarray}
The result is that a soft-margin SVM can choose a decision boundary with non-zero training error even when the dataset is linearly separable, but it is less likely to overfit.
Here's an example using LibSVM on the data set used above. Highlighted points show support vectors. This should visually show the impact of C and how it controls the number of outliers allowed on the wrong side of the hyperplane.
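The slack variables are cheap to compute for any candidate hyperplane. The sketch below evaluates the primal objective above on toy data; the weight vector w and offset b are made up for illustration, not taken from a trained SVM.

```python
import numpy as np

def soft_margin_objective(w, b, X, y, C):
    """0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))."""
    margins = y * (X @ w + b)
    slacks = np.maximum(0.0, 1.0 - margins)   # xi_i, zero for points outside the margin
    return 0.5 * np.dot(w, w) + C * slacks.sum(), slacks

# toy 2-D data; the last point sits on the wrong side of the hyperplane w.x = 0
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0], [0.5, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0, -1.0])
w, b = np.array([1.0, 1.0]), 0.0
obj, slacks = soft_margin_objective(w, b, X, y, C=1.0)
# only the misclassified point has a non-zero slack (here xi = 2)
```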
End of explanation
"""
num=50
dist=1.0
gmm=sg.GMM(2)
gmm.set_nth_mean(np.array([-dist,-dist]),0)
gmm.set_nth_mean(np.array([dist,dist]),1)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),1)
gmm.set_coef(np.array([1.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
gmm.set_coef(np.array([0.0,1.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
traindata=np.concatenate((xntr,xptr), axis=1)
trainlab=np.concatenate((-np.ones(num), np.ones(num)))
#shogun format features
feats_train=sg.RealFeatures(traindata)
labels=sg.BinaryLabels(trainlab)
gaussian_kernel=sg.GaussianKernel(feats_train, feats_train, 10)
#Polynomial kernel of degree 2
poly_kernel=sg.PolyKernel(feats_train, feats_train, 2, True)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
#train machine
C=1
svm=sg.LibSVM(C, gaussian_kernel, labels)
_=svm.train()
"""
Explanation: You can see that a lower value of C causes the classifier to sacrifice linear separability in order to gain stability, in the sense that the influence of any single datapoint is now bounded by C. For a hard-margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to a hard-margin SVM, and you can see the highlighted points are the ones that touch the margin. In high dimensions this might lead to overfitting. For a soft-margin SVM, with a lower value of C, it is easier to explain them in terms of the dual (equation $(2)$) variables. Support vectors are datapoints from the training set which are included in the predictor, i.e., the ones with a non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
Binary classification using different kernels
Two-dimensional Gaussians are generated as data for this section.
$x_-\sim{\cal N_2}(0,1)-d$
$x_+\sim{\cal N_2}(0,1)+d$
and corresponding positive and negative labels. We create traindata and testdata, with num examples each labelled negatively and positively, in traindata, trainlab and testdata, testlab. For that we use Shogun's Gaussian Mixture Model class (GMM), from which we sample the data points, and plot them.
End of explanation
"""
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
grid=sg.RealFeatures(np.array((np.ravel(x), np.ravel(y))))
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
plt.jet()
plt.figure(figsize=(16,5))
z=grid_out.get_values().reshape((size, size))
plt.subplot(121)
plt.title('Classification')
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.subplot(122)
plt.title('Original distribution')
gmm.set_coef(np.array([1.0,0.0]))
gmm.set_features(grid)
grid_out=gmm.get_likelihood_for_all_examples()
zn=grid_out.reshape((size, size))
gmm.set_coef(np.array([0.0,1.0]))
grid_out=gmm.get_likelihood_for_all_examples()
zp=grid_out.reshape((size, size))
z=zp-zn
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
"""
Explanation: Now let's plot the contour output on a $-5...+5$ grid for
The Support Vector Machines decision function $\mbox{sign}(f(x))$
The Support Vector Machines raw output $f(x)$
The Original Gaussian Mixture Model Distribution
End of explanation
"""
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Binary Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train()
grid_out=svm.apply(grid)
z=grid_out.get_values().reshape((size, size))
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
"""
Explanation: And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized to learn the discriminative function that maximizes the margin, the SVM output, quality-wise, remotely resembles the original distribution of the Gaussian mixture model.
Let us visualise the output using different kernels.
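Swapping the kernel only changes how inner products are computed. As a quick NumPy check, independent of Shogun, an inhomogeneous degree-2 polynomial kernel in 2-D agrees with an explicit feature-space inner product:

```python
import numpy as np

def poly2_kernel(x, y):
    """Inhomogeneous polynomial kernel of degree 2: (x.y + 1)^2."""
    return (np.dot(x, y) + 1.0) ** 2

def poly2_features(x):
    """Explicit feature map phi such that phi(x).phi(y) = (x.y + 1)^2 in 2-D."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
k = poly2_kernel(x, y)                                # kernel value
phi = np.dot(poly2_features(x), poly2_features(y))    # same value, computed in feature space
```

The kernel evaluates the 6-dimensional inner product without ever constructing the feature vectors, which is the point of the kernel trick.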
End of explanation
"""
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/ionosphere/ionosphere.data'))
mat = []
labels = []
# read data from file
for line in f:
words = line.rstrip().split(',')
mat.append([float(i) for i in words[0:-1]])
if str(words[-1])=='g':
labels.append(1)
else:
labels.append(-1)
f.close()
mat_train=mat[:30]
mat_test=mat[30:110]
lab_train=sg.BinaryLabels(np.array(labels[:30]).reshape((30,)))
lab_test=sg.BinaryLabels(np.array(labels[30:110]).reshape((len(labels[30:110]),)))
feats_train = sg.RealFeatures(np.array(mat_train).T)
feats_test = sg.RealFeatures(np.array(mat_test).T)
#without normalization
gaussian_kernel=sg.GaussianKernel()
gaussian_kernel.init(feats_train, feats_train)
gaussian_kernel.set_width(0.1)
C=1
svm=sg.LibSVM(C, gaussian_kernel, lab_train)
_=svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error:', error)
#set normalization
gaussian_kernel=sg.GaussianKernel()
# TODO: currently there is a bug that makes it impossible to use Gaussian kernels and kernel normalisers
# See github issue #3504
#gaussian_kernel.set_normalizer(sg.SqrtDiagKernelNormalizer())
gaussian_kernel.init(feats_train, feats_train)
gaussian_kernel.set_width(0.1)
svm.set_kernel(gaussian_kernel)
svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error with normalization:', error)
"""
Explanation: Kernel Normalizers
Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel Normalization is not strictly-speaking a form of preprocessing since it is not applied directly on the input vectors but can be seen as a kernel interpretation of the preprocessing. The CKernelNormalizer class provides tools for kernel normalization. Some of the kernel normalizers in Shogun:
SqrtDiagKernelNormalizer : This normalization in the feature space amounts to defining a new kernel $k'({\bf x},{\bf x'}) = \frac{k({\bf x},{\bf x'})}{\sqrt{k({\bf x},{\bf x})k({\bf x'},{\bf x'})}}$
AvgDiagKernelNormalizer : Scaling with a constant $k({\bf x},{\bf x'})= \frac{1}{c}\cdot k({\bf x},{\bf x'})$
ZeroMeanCenterKernelNormalizer : Centers the kernel in feature space and ensures that each feature has zero mean after centering.
The set_normalizer() method of CKernel is used to add a normalizer.
Let us try it out on the ionosphere dataset where we use a small training set of 30 samples to train our SVM. Gaussian kernel with and without normalization is used. See reference [1] for details.
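On a precomputed kernel matrix the SqrtDiag normalization is a one-liner. This NumPy sketch illustrates the formula above rather than Shogun's implementation:

```python
import numpy as np

def sqrtdiag_normalize(K):
    """k'(x, x') = k(x, x') / sqrt(k(x, x) * k(x', x'))."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

rng = np.random.RandomState(1)
X = rng.randn(5, 3)
K = X @ X.T                    # linear kernel matrix
Kn = sqrtdiag_normalize(K)     # diagonal becomes exactly 1
```

For a linear kernel this is just cosine similarity, so every normalized entry lies in [-1, 1].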
End of explanation
"""
num=30
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1.5,1.5]
means[1]=[1.5,-1.5]
means[2]=[-1.5,-1.5]
means[3]=[1.5,1.5]
covs=np.array([[1.0,0.0],[0.0,1.0]])
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.set_coef(np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
xnte=np.array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in range(num)]).T
xnte1=np.array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
xpte=np.array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in range(num)]).T
xpte1=np.array([gmm.sample() for i in range(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
l0 = np.array([0.0 for i in range(num)])
l1 = np.array([1.0 for i in range(num)])
l2 = np.array([2.0 for i in range(num)])
l3 = np.array([3.0 for i in range(num)])
trainlab=np.concatenate((l0,l1,l2,l3))
testlab=np.concatenate((l0,l1,l2,l3))
plt.title('Toy data for multiclass classification')
plt.jet()
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=75)
feats_train=sg.RealFeatures(traindata)
labels=sg.MulticlassLabels(trainlab)
"""
Explanation: Multiclass classification
Multiclass classification can be done using an SVM by reducing the problem to binary classification. More on multiclass reductions in this notebook. The CGMNPSVM class provides built-in one-vs-rest multiclass classification using GMNPlib. Let us see classification on four classes using it. The CGMM class is used to sample the data.
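The one-vs-rest reduction that GMNPSVM encapsulates can be sketched generically: train one binary scorer per class and predict the class with the highest score. The fit_binary hook below is a stand-in for any binary learner (here a simple class-mean projection), not Shogun's API:

```python
import numpy as np

def fit_binary(X, labels):
    # toy binary scorer: project onto (mean of positives - mean of negatives)
    direction = X[labels > 0].mean(axis=0) - X[labels < 0].mean(axis=0)
    return lambda Z: Z @ direction

def one_vs_rest_train(X, y, fit_binary):
    """Train one binary scorer per class; y holds integer class labels."""
    classes = np.unique(y)
    scorers = {c: fit_binary(X, np.where(y == c, 1.0, -1.0)) for c in classes}
    return classes, scorers

def one_vs_rest_predict(X, classes, scorers):
    scores = np.column_stack([scorers[c](X) for c in classes])
    return classes[np.argmax(scores, axis=1)]

# three well-separated clusters, one per class
X = np.array([[-2.0, 0.0], [-2.1, 0.2], [2.0, 0.0], [2.2, -0.1],
              [0.0, 2.0], [0.1, 2.2]])
y = np.array([0, 0, 1, 1, 2, 2])
classes, scorers = one_vs_rest_train(X, y, fit_binary)
pred = one_vs_rest_predict(X, classes, scorers)
```

Replacing fit_binary with a real SVM trainer gives the usual one-vs-rest multiclass machine.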
End of explanation
"""
gaussian_kernel=sg.GaussianKernel(feats_train, feats_train, 2)
poly_kernel=sg.PolyKernel(feats_train, feats_train, 4, True)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
svm=sg.GMNPSVM(1, gaussian_kernel, labels)
_=svm.train(feats_train)
size=100
x1=np.linspace(-6, 6, size)
x2=np.linspace(-6, 6, size)
x, y=np.meshgrid(x1, x2)
grid=sg.RealFeatures(np.array((np.ravel(x), np.ravel(y))))
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Multiclass Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train(feats_train)
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
"""
Explanation: Let us try the multiclass classification for different kernels.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/feature_engineering/labs/5_tftransform_taxifare.ipynb | apache-2.0 | !pip install --user apache-beam[gcp]==2.16.0
!pip install --user tensorflow-transform==0.15.0
"""
Explanation: TfTransform
Learning Objectives
1. Preproccess data and engineer new features using TfTransform
1. Create and deploy Apache Beam pipeline
1. Use processed data to train taxifare model locally then serve a prediction
Overview
While Pandas is fine for experimenting, for operationalization of your workflow it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam allows for streaming. In this lab we will pull data from BigQuery then use Apache Beam TfTransform to process the data.
Only specific combinations of TensorFlow/Beam are supported by tf.transform so make sure to get a combo that works. In this lab we will be using:
* TFT 0.15.0
* TF 2.0
* Apache Beam [GCP] 2.16.0
End of explanation
"""
!pip download tensorflow-transform==0.15.0 --no-deps
"""
Explanation: NOTE: You may ignore specific incompatibility errors and warnings. These components and issues do not impact your ability to complete the lab.
Download .whl file for tensorflow-transform. We will pass this file to Beam Pipeline Options so it is installed on the DataFlow workers
End of explanation
"""
%%bash
pip freeze | grep -e 'flow\|beam'
import shutil
import tensorflow as tf
import tensorflow_transform as tft
print(tf.__version__)
import os
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: <b>Restart the kernel</b> (click on the reload button above).
End of explanation
"""
from google.cloud import bigquery
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), 4)) < 2""".format(
base_query
)
else:
query = """{} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), 4)) = {}""".format(
base_query, phase
)
else:
query = """{} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {})) = {}""".format(
base_query, EVERY_N, phase
)
return query
query = create_query(2, 100000)
df_valid = bigquery.Client().query(query).to_dataframe()
display(df_valid.head())
df_valid.describe()
"""
Explanation: Input source: BigQuery
Get data from BigQuery but defer the majority of filtering etc. to Beam.
Note that the dayofweek column is now strings.
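The FARM_FINGERPRINT modulo trick in create_query above gives every row a deterministic split assignment by hashing its timestamp. The same idea in plain Python, using a stdlib hash rather than BigQuery's fingerprint function:

```python
import hashlib

def assign_split(key, every_n=100, valid_bucket=2):
    """Deterministically hash a row key into one of every_n buckets."""
    digest = hashlib.md5(key.encode()).hexdigest()
    bucket = int(digest, 16) % every_n
    return 'valid' if bucket == valid_bucket else 'train'

# the same timestamp always lands in the same split, across runs and machines
splits = [assign_split(f'2015-01-01 00:00:{s:02d}') for s in range(60)]
```

Because the assignment depends only on the key, rows never leak between train and validation when the query is re-run.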
End of explanation
"""
import datetime
import apache_beam as beam
import tensorflow as tf
import tensorflow_metadata as tfmd
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def is_valid(inputs):
"""Check to make sure the inputs are valid.
Args:
inputs: dict, dictionary of TableRow data from BigQuery.
Returns:
True if the inputs are valid and False if they are not.
"""
try:
pickup_longitude = inputs["pickuplon"]
dropoff_longitude = inputs["dropofflon"]
pickup_latitude = inputs["pickuplat"]
dropoff_latitude = inputs["dropofflat"]
hourofday = inputs["hourofday"]
dayofweek = inputs["dayofweek"]
passenger_count = inputs["passengers"]
fare_amount = inputs["fare_amount"]
return (
fare_amount >= 2.5
and pickup_longitude > -78
and pickup_longitude < -70
and dropoff_longitude > -78
and dropoff_longitude < -70
and pickup_latitude > 37
and pickup_latitude < 45
and dropoff_latitude > 37
and dropoff_latitude < 45
and passenger_count > 0
)
except:
return False
def preprocess_tft(inputs):
"""Preproccess the features and add engineered features with tf transform.
Args:
dict, dictionary of TableRow data from BigQuery.
Returns:
Dictionary of preprocessed data after scaling and feature engineering.
"""
import datetime
print(inputs)
result = {}
result["fare_amount"] = tf.identity(inputs["fare_amount"])
# Build a vocabulary
# Convert day of week from string->int with tft.string_to_int
# TODO: Your code goes here
result["hourofday"] = tf.identity(inputs["hourofday"]) # pass through
# Scale pickup/dropoff lat/lon between 0 and 1 with tft.scale_to_0_1
# TODO: Your code goes here
result["passengers"] = tf.cast(inputs["passengers"], tf.float32) # a cast
# Arbitrary TF func
result["key"] = tf.as_string(tf.ones_like(inputs["passengers"]))
# Engineered features
latdiff = inputs["pickuplat"] - inputs["dropofflat"]
londiff = inputs["pickuplon"] - inputs["dropofflon"]
# Scale our engineered features latdiff and londiff between 0 and 1
# TODO: Your code goes here
dist = tf.sqrt(latdiff * latdiff + londiff * londiff)
result["euclidean"] = tft.scale_to_0_1(dist)
return result
def preprocess(in_test_mode):
"""Sets up preprocess pipeline.
Args:
in_test_mode: bool, False to launch DataFlow job, True to run locally.
"""
import os
import os.path
import tempfile
from apache_beam.io import tfrecordio
from tensorflow_transform.beam import tft_beam_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import (
dataset_metadata,
dataset_schema,
)
job_name = "preprocess-taxi-features" + "-"
job_name += datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
import shutil
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc_tft"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EVERY_N = 100000
else:
print(f"Launching Dataflow job {job_name} ... hang on")
OUTPUT_DIR = f"gs://{BUCKET}/taxifare/preproc_tft/"
import subprocess
subprocess.call(f"gsutil rm -r {OUTPUT_DIR}".split())
EVERY_N = 10000
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"num_workers": 1,
"max_num_workers": 1,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True,
"direct_num_workers": 1,
"extra_packages": ["tensorflow-transform-0.15.0.tar.gz"],
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
# Set up raw data metadata
raw_data_schema = {
colname: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()
)
for colname in "dayofweek,key".split(",")
}
raw_data_schema.update(
{
colname: dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation()
)
for colname in "fare_amount,pickuplon,pickuplat,dropofflon,dropofflat".split(
","
)
}
)
raw_data_schema.update(
{
colname: dataset_schema.ColumnSchema(
tf.int64, [], dataset_schema.FixedColumnRepresentation()
)
for colname in "hourofday,passengers".split(",")
}
)
raw_data_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.Schema(raw_data_schema)
)
# Run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, "tmp")):
# Save the raw data metadata
(
raw_data_metadata
| "WriteInputMetadata"
>> tft_beam_io.WriteMetadata(
os.path.join(OUTPUT_DIR, "metadata/rawdata_metadata"),
pipeline=p,
)
)
# Analyze and transform our training data using beam_impl.AnalyzeAndTransformDataset()
# TODO: Your code goes here
raw_dataset = (raw_data, raw_data_metadata)
# Analyze and transform training data
(
transformed_dataset,
transform_fn,
) = raw_dataset | beam_impl.AnalyzeAndTransformDataset(
preprocess_tft
)
transformed_data, transformed_metadata = transformed_dataset
# Save transformed train data to disk in efficient tfrecord format
transformed_data | "WriteTrainData" >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, "train"),
file_name_suffix=".gz",
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema
),
)
# Read eval data from BigQuery using beam.io.BigQuerySource and filter rows using our is_valid function
# TODO: Your code goes here
raw_test_dataset = (raw_test_data, raw_data_metadata)
# Transform eval data
transformed_test_dataset = (
raw_test_dataset,
transform_fn,
) | beam_impl.TransformDataset()
transformed_test_data, _ = transformed_test_dataset
# Save transformed train data to disk in efficient tfrecord format
(
transformed_test_data
| "WriteTestData"
>> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, "eval"),
file_name_suffix=".gz",
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema
),
)
)
# Save transformation function to disk for use at serving time
(
transform_fn
| "WriteTransformFn"
>> transform_fn_io.WriteTransformFn(
os.path.join(OUTPUT_DIR, "metadata")
)
)
# Change to True to run locally
preprocess(in_test_mode=False)
"""
Explanation: Create ML dataset using tf.transform and Dataflow
Let's use Cloud Dataflow to read in the BigQuery data and write it out as TFRecord files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
transformed_data is of type PCollection.
Exercise. There are five TODOs in the following cell block:
1. Convert day of week from string->int with tft.string_to_int
1. Scale pickuplat, pickuplon, dropofflat, dropofflon between 0 and 1 with tft.scale_to_0_1
1. Scale our engineered features latdiff and londiff between 0 and 1
1. Analyze and transform our training data using beam_impl.AnalyzeAndTransformDataset()
1. Read eval data from BigQuery using beam.io.BigQuerySource and filter rows using our is_valid function
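Conceptually, tft.scale_to_0_1 is a full-pass analyzer: it computes the min and max over the entire dataset during the analyze phase, then applies (x - min) / (max - min) at transform time. A NumPy sketch of that two-phase split (an illustration, not TFT's implementation):

```python
import numpy as np

def analyze_scale_to_0_1(column):
    """Analyze phase: one full pass over the data to get min and max."""
    return {'min': float(np.min(column)), 'max': float(np.max(column))}

def transform_scale_to_0_1(column, stats):
    """Transform phase: pure elementwise function of the analyzed stats."""
    span = stats['max'] - stats['min']
    return (np.asarray(column, dtype=float) - stats['min']) / span

train_lat = np.array([40.70, 40.75, 40.80])
stats = analyze_scale_to_0_1(train_lat)        # saved with the transform_fn
scaled = transform_scale_to_0_1(train_lat, stats)
# at serving time the SAME stats are reused, even for unseen values
serving = transform_scale_to_0_1([40.775], stats)
```

Saving the analyzed stats alongside the model is exactly what WriteTransformFn does for us, which prevents training/serving skew.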
End of explanation
"""
%%bash
# ls preproc_tft
gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
"""
Explanation: This will take 10-15 minutes. You cannot go on in this lab until your DataFlow job has successfully completed.
Let's check to make sure that there is data where we expect it to be now.
End of explanation
"""
%%bash
rm -r ./taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD
python3 -m tft_trainer.task \
--train_data_path="gs://${BUCKET}/taxifare/preproc_tft/train*" \
--eval_data_path="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
--output_dir=./taxi_trained \
!ls $PWD/taxi_trained/export/exporter
"""
Explanation: Train off preprocessed data
Now that we have our data ready and verified it is in the correct location we can train our taxifare model locally.
End of explanation
"""
%%writefile /tmp/test.json
{"dayofweek":0, "hourofday":17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2.0}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
model_dir=$(ls $PWD/taxi_trained/export/exporter/)
gcloud ai-platform local predict \
--model-dir=./taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
"""
Explanation: Now let's create fake data in JSON format and use it to serve a prediction with gcloud ai-platform local predict
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/distribution.ipynb | mit | from __future__ import print_function, division
%matplotlib inline
%precision 6
import warnings
warnings.filterwarnings('ignore')
from thinkbayes2 import Pmf, Cdf
import thinkplot
import numpy as np
from numpy.fft import fft, ifft
from inspect import getsourcelines
def show_code(func):
lines, _ = getsourcelines(func)
for line in lines:
print(line, end='')
"""
Explanation: What is a distribution?
An object-oriented exploration of one of the most useful concepts in statistics.
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
"""
d6 = Pmf()
for x in range(1, 7):
d6[x] = 1
d6.Print()
"""
Explanation: Playing dice with the universe
One of the recurring themes of my books is the use of object-oriented programming to explore mathematical ideas. Many mathematical entities are hard to define because they are so abstract. Representing them in Python puts the focus on what operations each entity supports -- that is, what the objects can do -- rather than on what they are.
In this notebook, I explore the idea of a probability distribution, which is one of the most important ideas in statistics, but also one of the hardest to explain.
To keep things concrete, I'll start with one of the usual examples: rolling dice. When you roll a standard six-sided die, there are six possible outcomes -- numbers 1 through 6 -- and all outcomes are equally likely.
If you roll two dice and add up the total, there are 11 possible outcomes -- numbers 2 through 12 -- but they are not equally likely. The least likely outcomes, 2 and 12, only happen once in 36 tries; the most likely outcome, 7, happens once in 6.
And if you roll three dice and add them up, you get a different set of possible outcomes with a different set of probabilities.
What I've just described are three random number generators, which are also called random processes. The output from a random process is a random variable, or more generally a set of random variables. And each random variable has probability distribution, which is the set of possible outcomes and the corresponding set of probabilities.
There are many ways to represent a probability distribution. The most obvious is a probability mass function, or PMF, which is a function that maps from each possible outcome to its probability. And in Python, the most obvious way to represent a PMF is a dictionary that maps from outcomes to probabilities.
thinkbayes2 provides a class called Pmf that represents a probability mass function. Each Pmf contains a dictionary named d that contains the values and probabilities. To show how this class is used, I'll create a Pmf that represents a six-sided die:
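Before looking at the library class, the same idea fits in a few lines of plain Python. This is a stripped-down sketch, not thinkbayes2's implementation:

```python
def normalize(pmf):
    """Divide through so the probabilities add up to 1."""
    total = sum(pmf.values())
    return {x: p / total for x, p in pmf.items()}

# a fair six-sided die: six equally likely outcomes
d6 = normalize({x: 1 for x in range(1, 7)})

# the sum of two dice: 36 equally likely pairs collapse to 11 outcomes
two_dice = normalize({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6,
                      8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
```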
End of explanation
"""
show_code(Pmf.Normalize)
"""
Explanation: Initially the "probabilities" are all 1, so the total probability in the Pmf is 6, which doesn't make a lot of sense. In a proper, meaningful PMF, the probabilities add up to 1, which implies that one outcome, and only one outcome, will occur (for any given roll of the die).
We can take this "unnormalized" distribution and make it a proper Pmf using the Normalize method. Here's what the method looks like:
End of explanation
"""
d6.Normalize()
d6.Print()
"""
Explanation: Normalize adds up the probabilities in the PMF and divides through by the total. The result is a Pmf with probabilities that add to 1.
Here's how it's used:
End of explanation
"""
d6[3]
"""
Explanation: The fundamental operation provided by a Pmf is a "lookup"; that is, we can look up an outcome and get the corresponding probability. Pmf provides __getitem__, so we can use bracket notation to look up an outcome:
End of explanation
"""
d6[7]
"""
Explanation: And if you look up a value that's not in the Pmf, the probability is 0.
End of explanation
"""
# Solution
die = Pmf(dict(red=2, blue=4))
die.Normalize()
die.Print()
"""
Explanation: Exercise: Create a Pmf that represents a six-sided die that is red on two sides and blue on the other four.
End of explanation
"""
show_code(Pmf.__getitem__)
"""
Explanation: Is that all there is?
So is a Pmf a distribution? No. At least in this framework, a Pmf is one of several representations of a distribution. Other representations include the cumulative distribution function, or CDF, and the characteristic function.
These representations are equivalent in the sense that they all contain the same information; if I give you any one of them, you can figure out the others (and we'll see how soon).
So why would we want different representations of the same information? The fundamental reason is that there are many different operations we would like to perform with distributions; that is, questions we would like to answer. Some representations are better for some operations, but none of them is the best for all operations.
So what are the questions we would like a distribution to answer? They include:
What is the probability of a given outcome?
What is the mean of the outcomes, taking into account their probabilities?
What is the variance, and other moments, of the outcome?
What is the probability that the outcome exceeds (or falls below) a threshold?
What is the median of the outcomes, that is, the 50th percentile?
What are the other percentiles?
How can we generate a random sample from this distribution, with the appropriate probabilities?
If we run two random processes and choose the maximum of the outcomes (or minimum), what is the distribution of the result?
If we run two random processes and add up the results, what is the distribution of the sum?
Each of these questions corresponds to a method we would like a distribution to provide. But as I said, there is no one representation that answers all of them easily and efficiently. So let's look at the different representations and see what they can do.
Getting back to the Pmf, we've already seen how to look up the probability of a given outcome. Here's the code:
End of explanation
"""
show_code(Pmf.Mean)
"""
Explanation: Python dictionaries are implemented using hash tables, so we expect __getitem__ to be fast. In terms of algorithmic complexity, it is constant time, or $O(1)$.
Moments and expecations
The Pmf representation is also good for computing mean, variance, and other moments. Here's the implementation of Pmf.Mean:
End of explanation
"""
show_code(Pmf.Var)
"""
Explanation: This implementation is efficient, in the sense that it is $O(n)$, and because it uses a comprehension to traverse the outcomes, the overhead is low.
The implementation of Pmf.Var is similar:
End of explanation
"""
d6.Mean(), d6.Var()
"""
Explanation: And here's how they are used:
End of explanation
"""
show_code(Pmf.Expect)
"""
Explanation: The structure of Mean and Var is the same: they traverse the outcomes and their probabilities, x and p, and add up the product of p and some function of x.
We can generalize this structure to compute the expectation of any function of x, which is defined as
$E[f] = \sum_x p(x) f(x)$
Pmf provides Expect, which takes a function object, func, and returns the expectation of func:
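The same expectation is a one-liner on a plain dict; a quick sketch of the formula above:

```python
def expect(pmf, func):
    """E[f] = sum over outcomes of p(x) * f(x)."""
    return sum(p * func(x) for x, p in pmf.items())

d6 = {x: 1 / 6 for x in range(1, 7)}
mean = expect(d6, lambda x: x)                  # first moment
var = expect(d6, lambda x: (x - mean) ** 2)     # second central moment
third = expect(d6, lambda x: (x - mean) ** 3)   # zero, by symmetry
```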
End of explanation
"""
mu = d6.Mean()
d6.Expect(lambda x: (x-mu)**3)
"""
Explanation: As an example, we can use Expect to compute the third central moment of the distribution:
End of explanation
"""
show_code(Pmf.AddPmf)
"""
Explanation: Because the distribution is symmetric, the third central moment is 0.
Addition
The next question we'll answer is the last one on the list: if we run two random processes and add up the results, what is the distribution of the sum? In other words, if the result of the first process is a random variable, $X$, and the result of the second is $Y$, what is the distribution of $X+Y$?
The Pmf representation of the distribution can answer this question pretty well, but we'll see later that the characteristic function is even better.
Here's the implementation:
End of explanation
"""
thinkplot.Pdf(d6)
"""
Explanation: The outer loop traverses the outcomes and probabilities of the first Pmf; the inner loop traverses the second Pmf. Each time through the loop, we compute the sum of the outcome pair, v1 and v2, and the probability that the pair occurs.
Note that this method implicitly assumes that the two processes are independent; that is, the outcome from one does not affect the other. That's why we can compute the probability of the pair by multiplying the probabilities of the outcomes.
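The same double loop works on plain dictionaries. A minimal sketch of the convolution, assuming independence as above:

```python
def add_pmfs(pmf1, pmf2):
    """Distribution of X + Y for independent X, Y given as dicts."""
    result = {}
    for x1, p1 in pmf1.items():
        for x2, p2 in pmf2.items():
            result[x1 + x2] = result.get(x1 + x2, 0) + p1 * p2
    return result

d6 = {x: 1 / 6 for x in range(1, 7)}
twice = add_pmfs(d6, d6)   # 11 outcomes, 2 through 12
```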
To demonstrate this method, we'll start with d6 again. Here's what it looks like:
End of explanation
"""
twice = d6 + d6
thinkplot.Pdf(twice, color='green')
"""
Explanation: When we use the + operator, Python invokes __add__, which invokes AddPmf, which returns a new Pmf object. Here's the Pmf that represents the sum of two dice:
End of explanation
"""
thrice = twice + d6
thinkplot.Pdf(d6)
thinkplot.Pdf(twice, color='green')
thinkplot.Pdf(thrice, color='red')
"""
Explanation: And here's the Pmf that represents the sum of three dice.
End of explanation
"""
# Solution
dice = die + die
dice.Print()
"""
Explanation: As we add up more dice, the result converges to the bell shape of the Gaussian distribution.
Exercise: If you did the previous exercise, you have a Pmf that represents a die with red on 2 sides and blue on the other 4. Use the + operator to compute the outcomes of rolling two of these dice and the probabilities of the outcomes.
Note: if you represent the outcomes as strings, AddPmf concatenates them instead of adding, which actually works.
End of explanation
"""
show_code(Cdf.__init__)
"""
Explanation: Cumulative probabilities
The next few questions on the list are related to the median and other percentiles. They are harder to answer with the Pmf representation, but easier with a cumulative distribution function (CDF).
A CDF is a map from an outcome, $x$, to its cumulative probability, which is the probability that the outcome is less than or equal to $x$. In math notation:
$CDF(x) = Prob(X \le x)$
where $X$ is the outcome of a random process, and $x$ is the threshold we are interested in. For example, if $CDF$ is the cumulative distribution for the sum of three dice, the probability of getting 5 or less is $CDF(5)$, and the probability of getting 6 or more is $1 - CDF(5)$.
thinkbayes2 provides a class called Cdf that represents a cumulative distribution function. It uses a sorted list of outcomes and the corresponding list of cumulative probabilities. The __init__ method is complicated because it accepts a lot of different parameters. The important part is the last 4 lines.
End of explanation
"""
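The important part, sorting the values and accumulating the probabilities, can be sketched with bare lists (an illustration, not the thinkbayes2 code):

```python
def make_cdf(pmf):
    # pmf: dict mapping outcomes to probabilities (or frequencies)
    xs = sorted(pmf)
    ps = []
    total = 0
    for x in xs:
        total += pmf[x]
        ps.append(total)
    # normalize by dividing through by the last element
    ps = [p / ps[-1] for p in ps]
    return xs, ps

xs, ps = make_cdf({v: 1/6 for v in range(1, 7)})
```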
cdf = Cdf(thrice)
cdf.Print()
"""
Explanation: xs is the sorted list of values, and freqs are their frequencies or probabilities.
ps is the list of cumulative frequencies or probabilities, which we normalize by dividing through by the last element.
Here's how we use it to create a Cdf object for the sum of three dice:
End of explanation
"""
thinkplot.Cdf(cdf);
"""
Explanation: Because we have to sort the values, the time to compute a Cdf is $O(n \log n)$.
Here's what the CDF looks like:
End of explanation
"""
show_code(Cdf.Probs)
"""
Explanation: The range of the CDF is always from 0 to 1.
Now we can compute $CDF(x)$ by searching the xs to find the right location, or index, and then looking up the corresponding probability. Because the xs are sorted, we can use bisection search, which is $O(\log n)$.
Cdf provides Probs, which takes an array of values and returns the corresponding probabilities:
End of explanation
"""
cdf.Probs((2, 10, 18))
"""
Explanation: The details here are a little tricky because we have to deal with some "off by one" problems, and if any of the values are less than the smallest value in the Cdf, we have to handle that as a special case. But the basic idea is simple, and the implementation is efficient.
Now we can look up probabilities for a sequence of values:
End of explanation
"""
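A sketch of that forward lookup with the standard library's bisect, including the special case for values below the smallest outcome (a simplified version, not the actual Cdf.Probs code):

```python
from bisect import bisect

def cdf_prob(xs, ps, x):
    # xs: sorted outcomes; ps: cumulative probabilities
    if x < xs[0]:
        return 0.0           # below the smallest value: special case
    return ps[bisect(xs, x) - 1]

xs = [3, 4, 5, 6]
ps = [0.1, 0.4, 0.8, 1.0]
```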
cdf[5]
"""
Explanation: Cdf also provides __getitem__, so we can use brackets to look up a single value:
End of explanation
"""
# Solution
1 - cdf[14]
"""
Explanation: Exercise: If you roll three dice, what is the probability of getting 15 or more?
End of explanation
"""
show_code(Cdf.Values)
"""
Explanation: Reverse lookup
You might wonder why I represent a Cdf with two lists rather than a dictionary. After all, a dictionary lookup is constant time and bisection search is logarithmic. The reason is that we often want to use a Cdf to do a reverse lookup; that is, given a probability, we would like to find the corresponding value. With two sorted lists, a reverse lookup has the same performance as a forward lookup, $O(\log n)$.
Here's the implementation:
End of explanation
"""
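The reverse lookup can be sketched the same way with bisect_left (a simplified stand-in, not Cdf.Values itself):

```python
from bisect import bisect_left

def percentile_value(xs, ps, p):
    # smallest outcome whose cumulative probability is >= p
    return xs[bisect_left(ps, p)]

xs = [3, 4, 5, 6]
ps = [0.1, 0.4, 0.8, 1.0]
```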
cdf.Values((0.1, 0.5, 0.9))
"""
Explanation: And here's an example that finds the 10th, 50th, and 90th percentiles:
End of explanation
"""
show_code(Cdf.Sample)
"""
Explanation: The Cdf representation is also good at generating random samples, by choosing a probability uniformly from 0 to 1 and finding the corresponding value. Here's the method Cdf provides:
End of explanation
"""
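Inverse transform sampling, the idea behind Cdf.Sample, can be sketched like this (a simplified stand-in for the NumPy-based method; the seeded generator is only for reproducibility):

```python
import random
from bisect import bisect_left

def sample_cdf(xs, ps, k, rng=random.Random(42)):
    # draw a uniform p in [0, 1) and reverse-look-up the outcome
    return [xs[bisect_left(ps, rng.random())] for _ in range(k)]

xs = [3, 4, 5, 6]
ps = [0.1, 0.4, 0.8, 1.0]
draws = sample_cdf(xs, ps, 100)
```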
cdf.Sample(1)
cdf.Sample(6)
cdf.Sample((2, 2))
"""
Explanation: The result is a NumPy array with the given shape. The time to generate each random choice is $O(\log n)$.
Here are some examples that use it.
End of explanation
"""
# Solution
def iqr(cdf):
values = cdf.Values((0.25, 0.75))
return np.diff(values)[0]
iqr(cdf)
"""
Explanation: Exercise: Write a function that takes a Cdf object and returns the interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
End of explanation
"""
show_code(Cdf.Max)
"""
Explanation: Max and min
The Cdf representation is particularly good for finding the distribution of a maximum. For example, in Dungeons and Dragons, players create characters with random properties like strength and intelligence. The properties are generated by rolling three dice and adding them, so the CDF for each property is the Cdf we used in this example. Each character has 6 properties, so we might wonder what the distribution is for the best of the six.
Here's the method that computes it:
End of explanation
"""
best = cdf.Max(6)
thinkplot.Cdf(best);
best[10]
"""
Explanation: To get the distribution of the maximum, we make a new Cdf with the same values as the original, and with the ps raised to the kth power. Simple, right?
To see how it works, suppose you generate six properties and your best is only a 10. That's unlucky, but you might wonder how unlucky. So, what is the chance of rolling 3 dice six times, and never getting anything better than 10?
Well, that means that all six values were 10 or less. The probability that each of them is 10 or less is $CDF(10)$, because that's what the CDF means. So the probability that all 6 are 10 or less is $CDF(10)^6$.
Now we can generalize that by replacing $10$ with any value of $x$ and $6$ with any integer $k$. The result is $CDF(x)^k$, which is the probability that all $k$ rolls are $x$ or less, and that is the CDF of the maximum.
Here's how we use Cdf.Max:
End of explanation
"""
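The $CDF(x)^k$ argument can be checked by brute force, enumerating every possible roll of $k$ dice (a verification sketch, not part of thinkbayes2):

```python
from itertools import product

k = 2
sides = range(1, 7)
counts = {}
for roll in product(sides, repeat=k):
    m = max(roll)
    counts[m] = counts.get(m, 0) + 1

# brute-force CDF of the max versus the CDF(x)**k shortcut
cdf_brute = {x: sum(counts.get(i, 0) for i in sides if i <= x) / 6 ** k
             for x in sides}
cdf_formula = {x: (x / 6) ** k for x in sides}
```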
# Solution
def Min(cdf, k):
return Cdf(cdf.xs, 1 - (1-cdf.ps)**k)
worst = Min(cdf, 6)
thinkplot.Cdf(worst);
"""
Explanation: So the chance of generating a character whose best property is 10 is less than 2%.
Exercise: Write a function that takes a CDF and returns the CDF of the minimum of k values.
Hint: If the minimum is less than $x$, that means all k values must be less than $x$.
End of explanation
"""
import matplotlib.pyplot as plt
class CharFunc:
def __init__(self, hs):
"""Initializes the CF.
hs: NumPy array of complex
"""
self.hs = hs
def __mul__(self, other):
"""Computes the elementwise product of two CFs."""
return CharFunc(self.hs * other.hs)
def make_pmf(self, thresh=1e-11):
"""Converts a CF to a PMF.
Values with probabilities below `thresh` are dropped.
"""
ps = ifft(self.hs)
d = dict((i, p) for i, p in enumerate(ps.real) if p > thresh)
return Pmf(d)
def plot_cf(self, **options):
"""Plots the real and imaginary parts of the CF."""
n = len(self.hs)
xs = np.arange(-n//2, n//2)
hs = np.roll(self.hs, len(self.hs) // 2)
plt.plot(xs, hs.real, label='real', **options)
plt.plot(xs, hs.imag, label='imag', **options)
plt.legend()
"""
Explanation: Characteristic function
At this point we've answered all the questions on the list, but I want to come back to addition, because the algorithm we used with the Pmf representation is not as efficient as it could be. It enumerates all pairs of outcomes, so if there are $n$ values in each Pmf, the run time is $O(n^2)$. We can do better.
The key is the characteristic function, which is the Fourier transform (FT) of the PMF. If you are familiar with the Fourier transform and the Convolution Theorem, keep reading. Otherwise, skip the rest of this cell and get to the code, which is much simpler than the explanation.
Details for people who know about convolution
If you are familiar with the FT in the context of spectral analysis of signals, you might wonder why we would possibly want to compute the FT of a PMF. The reason is the Convolution Theorem.
It turns out that the algorithm we used to "add" two Pmf objects is a form of convolution. To see how that works, suppose we are computing the distribution of $Z = X+Y$. To make things concrete, let's compute the probability that the sum, $Z$ is 5. To do that, we can enumerate all possible values of $X$ like this:
$Prob(Z=5) = \sum_x Prob(X=x) \cdot Prob(Y=5-x)$
Now we can write each of those probabilities in terms of the PMF of $X$, $Y$, and $Z$:
$PMF_Z(5) = \sum_x PMF_X(x) \cdot PMF_Y(5-x)$
And now we can generalize by replacing 5 with any value of $z$:
$PMF_Z(z) = \sum_x PMF_X(x) \cdot PMF_Y(z-x)$
You might recognize that computation as convolution, denoted with the operator $\ast$.
$PMF_Z = PMF_X \ast PMF_Y$
Now, according to the Convolution Theorem:
$FT(PMF_X \ast Y) = FT(PMF_X) \cdot FT(PMF_Y)$
Or, taking the inverse FT of both sides:
$PMF_X \ast PMF_Y = IFT(FT(PMF_X) \cdot FT(PMF_Y))$
In words, to compute the convolution of $PMF_X$ and $PMF_Y$, we can compute the FT of $PMF_X$ and $PMF_Y$ and multiply them together, then compute the inverse FT of the result.
Let's see how that works. Here's a class that represents a characteristic function.
End of explanation
"""
def compute_fft(d, n=256):
"""Computes the FFT of a PMF of integers.
Values must be integers less than `n`.
"""
xs, freqs = zip(*d.items())
    ps = np.zeros(n)
ps[xs,] = freqs
hs = fft(ps)
return hs
"""
Explanation: The attribute, hs, is the Fourier transform of the Pmf, represented as a NumPy array of complex numbers.
The following function takes a dictionary that maps from outcomes to their probabilities, and computes the FT of the PMF:
End of explanation
"""
hs = compute_fft(thrice.d)
cf = CharFunc(hs)
cf.plot_cf()
"""
Explanation: fft computes the Fast Fourier Transform (FFT), which is called "fast" because the run time is $O(n \log n)$.
Here's what the characteristic function looks like for the sum of three dice (plotting the real and imaginary parts of hs):
End of explanation
"""
show_code(CharFunc.make_pmf)
"""
Explanation: The characteristic function contains all of the information from the Pmf, but it is encoded in a form that is hard to interpret. However, if we are given a characteristic function, we can find the corresponding Pmf.
CharFunc provides make_pmf, which uses the inverse FFT to get back to the Pmf representation. Here's the code:
End of explanation
"""
thinkplot.Pdf(cf.make_pmf())
"""
Explanation: And here's an example:
End of explanation
"""
show_code(CharFunc.__mul__)
"""
Explanation: Now we can use the characteristic function to compute a convolution. CharFunc provides __mul__, which multiplies the hs elementwise and returns a new CharFunc object:
End of explanation
"""
sixth = (cf * cf).make_pmf()
thinkplot.Pdf(sixth)
"""
Explanation: And here's how we can use it to compute the distribution of the sum of 6 dice.
End of explanation
"""
sixth.Print()
sixth.Mean(), sixth.Var()
"""
Explanation: Here are the probabilities, mean, and variance.
End of explanation
"""
#Solution
n = len(cf.hs)
mags = np.abs(cf.hs)
plt.plot(np.roll(mags, n//2))
None
# The result approximates a Gaussian curve because
# the PMF is approximately Gaussian and the FT of a
# Gaussian is also Gaussian
"""
Explanation: This might seem like a roundabout way to compute a convolution, but it is efficient. The time to compute the CharFunc objects is $O(n \log n)$. Multiplying them together is $O(n)$. And converting back to a Pmf is $O(n \log n)$.
So the whole process is $O(n \log n)$, which is better than Pmf.__add__, which is $O(n^2)$.
Exercise: Plot the magnitude of cf.hs using np.abs. What does that shape look like?
Hint: it might be clearer if you use np.roll to put the peak of the CF in the middle.
End of explanation
"""
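The whole pipeline, FFT, elementwise multiply, and inverse FFT, can be sketched directly with NumPy (assuming numpy is available; an illustration of the technique, not the CharFunc code):

```python
import numpy as np

def fft_convolve(pmf1, pmf2, n=64):
    # pmf dicts map small non-negative integers to probabilities;
    # n must exceed the largest possible sum
    a, b = np.zeros(n), np.zeros(n)
    for x, p in pmf1.items():
        a[x] = p
    for x, p in pmf2.items():
        b[x] = p
    c = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
    return {i: v for i, v in enumerate(c) if v > 1e-11}

d6_dict = {v: 1/6 for v in range(1, 7)}
two_dice = fft_convolve(d6_dict, d6_dict)
```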
class Dist(Pmf, Cdf, CharFunc):
def __init__(self, d):
"""Initializes the Dist.
Calls all three __init__ methods.
"""
Pmf.__init__(self, d)
Cdf.__init__(self, d)
CharFunc.__init__(self, compute_fft(d))
def __add__(self, other):
"""Computes the distribution of the sum using Pmf.__add__.
"""
pmf = Pmf.__add__(self, other)
return Dist(pmf.d)
def __mul__(self, other):
"""Computes the distribution of the sum using CharFunc.__mul__.
"""
pmf = CharFunc.__mul__(self, other).make_pmf()
return Dist(pmf.d)
"""
Explanation: Distributions
Finally, let's get back to the question we started with: what is a distribution?
I've said that Pmf, Cdf, and CharFunc are different ways to represent the same information. For the questions we want to answer, some representations are better than others. But how should we represent the distribution itself?
One option is to treat each representation as a mixin; that is, a class that provides a set of capabilities. A distribution inherits all of the capabilities from all of the representations. Here's a class that shows what I mean:
End of explanation
"""
dist = Dist(sixth.d)
thinkplot.Pdf(dist)
"""
Explanation: When you create a Dist, you provide a dictionary of values and probabilities.
Dist.__init__ calls the other three __init__ methods to create the Pmf, Cdf, and CharFunc representations. The result is an object that has all the attributes and methods of the three representations.
As an example, I'll create a Dist that represents the sum of six dice:
End of explanation
"""
dist[21]
"""
Explanation: We inherit __getitem__ from Pmf, so we can look up the probability of a value.
End of explanation
"""
dist.Mean(), dist.Var()
"""
Explanation: We also get mean and variance from Pmf:
End of explanation
"""
dist.ValueArray((0.25, 0.5, 0.75))
"""
Explanation: But we can also use methods from Cdf, like ValueArray:
End of explanation
"""
dist.Probs((18, 21, 24))
"""
Explanation: And Probs
End of explanation
"""
dist.Sample(10)
thinkplot.Cdf(dist.Max(6));
"""
Explanation: And Sample
End of explanation
"""
twelfth = dist + dist
thinkplot.Pdf(twelfth)
twelfth.Mean()
"""
Explanation: Dist.__add__ uses Pmf.__add__, which performs convolution the slow way:
End of explanation
"""
twelfth_fft = dist * dist
thinkplot.Pdf(twelfth_fft)
twelfth_fft.Mean()
"""
Explanation: Dist.__mul__ uses CharFunc.__mul__, which performs convolution the fast way.
End of explanation
"""
import codecs
import requests
from urlparse import urljoin
from contextlib import closing
chunk_size = 10**6 # Download 1 MB at a time.
wpurl = "http://wpo.st/" # Washington Post provides short links
def fetch_webpage(url, path):
# Open up a stream request (to download large documents)
# Ensure that we will close when complete using contextlib
with closing(requests.get(url, stream=True)) as response:
# Check that the response was successful
if response.status_code == 200:
# Write each chunk to disk with the correct encoding
with codecs.open(path, 'w', response.encoding) as f:
for chunk in response.iter_content(chunk_size, decode_unicode=True):
f.write(chunk)
def fetch_wp_article(article_id):
path = "%s.html" % article_id
url = urljoin(wpurl, article_id)
return fetch_webpage(url, path)
fetch_webpage("http://www.koreadaily.com/news/read.asp?art_id=3283896", "korean.html")
fetch_wp_article("nrRB0")
fetch_wp_article("uyRB0")
"""
Explanation: Text Processing with Python
Packages Discussued:
readability-lxml and BeautifulSoup
Pattern
NLTK
TextBlob
spaCy
gensim
Other packages:
MITIE
NLP in Context
The science that has been developed around the facts of language passed through three stages before finding its true and unique object. First something called "grammar" was studied. This study, initiated by the Greeks and continued mainly by the French, was based on logic. It lacked a scientific approach and was detached from language itself. Its only aim was to give rules for distinguishing between correct and incorrect forms; it was a normative discipline, far removed from actual observation, and its scope was limited.
— Ferdinand de Saussure
The State of the Art
Academic design for use alongside intelligent agents (AI discipline)
Relies on formal models or representations of knowledge & language
Models are adapted and augmented through probabilistic methods and machine learning.
A small number of algorithms comprise the standard framework.
Required:
Domain Knowledge
A Corpus in the Domain
Methods
The Data Science Pipeline
The NLP Pipeline
Morphology
The study of the forms of things, words in particular.
Consider pluralization for English:
Orthographic Rules: puppy → puppies
Morphological Rules: goose → geese, or fish → fish (unchanged)
Major parsing tasks:
stemming
lemmatization
tokenization.
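The orthographic/morphological distinction can be illustrated with a toy pluralizer (hypothetical rules for illustration only, not pattern's or NLTK's implementation):

```python
import re

# morphological rules: irregular forms come from a lexicon
IRREGULAR = {"goose": "geese", "fish": "fish", "man": "men"}

def pluralize(noun):
    if noun in IRREGULAR:
        return IRREGULAR[noun]
    # orthographic rules: spelling changes at the suffix boundary
    if re.search(r"[^aeiou]y$", noun):
        return noun[:-1] + "ies"
    if re.search(r"(s|x|z|ch|sh)$", noun):
        return noun + "es"
    return noun + "s"
```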
Syntax
The study of the rules for the formation of sentences.
Major tasks:
chunking
parsing
feature parsing
grammars
NGram Models (perplexity)
Language generation
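An n-gram model starts from a sliding window over tokens, which is a one-liner (a sketch; NLTK provides nltk.ngrams for this):

```python
def ngrams(tokens, n=2):
    # all contiguous n-token windows
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams("the man hit the building".split())
```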
Semantics
The study of meaning.
I see what I eat.
I eat what I see.
He poached salmon.
Major Tasks
Frame extraction
creation of TMRs
Question and answer systems
Machine Learning
Solve Clustering Problems:
Topic Modeling
Language Similarity
Document Association (authorship)
Solve Classification Problems:
Language Detection
Sentiment Analysis
Part of Speech Tagging
Statistical Parsing
Much more
Use of word vectors to implement distance based metrics.
Setup and Dataset
To install the required packages (hopefully to a virtual environment) you can download the requirements.txt and run:
$ pip install -r requirements.txt
Or you can pip install each dependency as you need them.
Corpus Organization
Preprocessing HTML and XML Documents to Text
Much of the text that we're interested in is available on the web and formatted either as HTML or XML. It's not just web pages, however. Most eReader formats like ePub and Mobi are actually zip files containing XHTML. These semi-structured documents contain a lot of information, usually structural in nature. However, we want to get to the main body of the content of what we're looking for, disregarding other content that might be included such as headers for navigation, sidebars, ads and other extraneous content.
On the web, there are several services that provide web pages in a "readable" fashion like Instapaper and Clearly. Some browsers might even come with a clutter and distraction free "reading mode" that seems to give us exactly the content that we're looking for. An option that I've used in the past is to either programmatically access these renderers, Instapaper even provides an API. However, for large corpora, we need to quickly and repeatably perform extraction, while maintaining the original documents.
Corpus management requires that the original documents be stored alongside preprocessed documents - do not make changes to the originals in place! See discussions of data lakes and data pipelines for more on ingesting to WORM storages.
In Python, the fastest way to process HTML and XML text is with the lxml library - a superfast XML parser that binds the C libraries libxml2 and libxslt. However, the API for using lxml is a bit tricky, so instead use friendlier wrappers, readability-lxml and BeautifulSoup.
For example, consider the following code to fetch an HTML web article from The Washington Post:
End of explanation
"""
import bs4
def get_soup(path):
with open(path, 'r') as f:
return bs4.BeautifulSoup(f, "lxml") # Note the use of the lxml parser
for p in get_soup("nrRB0.html").find_all('p'):
print p
"""
Explanation: BeautifulSoup allows us to search the DOM to extract particular elements, for example to load our document and find all the <p> tags, we would do the following:
End of explanation
"""
for p in get_soup("nrRB0.html").find_all('p'):
print p.text
print
"""
Explanation: In order to print out only the text with no nodes, do the following:
End of explanation
"""
from readability.readability import Document
def get_paper(path):
with codecs.open(path, 'r', encoding='utf-8') as f:
return Document(f.read())
paper = get_paper("nrRB0.html")
print paper.title()
with codecs.open("nrRB0-clean.html", "w", encoding='utf-8') as f:
f.write(paper.summary())
"""
Explanation: While this allows us to easily traverse the DOM and find specific elements by their id, class, or element type - we still have a lot of cruft in the document. This is where readability-lxml comes in. This library is a Python port of the readability project, written in Ruby and inspired by Instapaper. This code uses readability.js and some other helper functions to extract the main body and even title of the document you're working with.
End of explanation
"""
def get_text(path):
with open(path, 'r') as f:
paper = Document(f.read())
        soup = bs4.BeautifulSoup(paper.summary(), "lxml")
output = [paper.title()]
for p in soup.find_all('p'):
output.append(p.text)
return "\n\n".join(output)
print get_text("nrRB0.html")
"""
Explanation: Combine readability and BeautifulSoup as follows:
End of explanation
"""
from pattern.web import Twitter, plaintext
twitter = Twitter(language='en')
for tweet in twitter.search("#DataDC", cached=False):
print tweet.text
"""
Explanation: A note on binary formats
In order to transform PDF documents to XML, the best solution is currently PDFMiner, specifically their pdf2txt tool. Note that this tool can output into multiple formats like XML or HTML, which is often better than the direct text export. Because of this it's often useful to convert PDF to XHTML and then use Readability or BeautifulSoup to extract the text out of the document.
Unfortunately, the conversion from PDF to text is often not great, though statistical methodologies can help ease some of the errors in transformation. If PDFMiner is not sufficient, you can use tools like PyPDF2 to work directly on the PDF file, or write Python code to wrap other tools in Java and C like PDFBox.
Older binary formats like pre-2007 Microsoft Word Documents (.doc) require special tools. Again, the best bet is to use Python to call another command line tool like antiword. Newer Microsoft formats are actually zipped XML files (.docx) and can be either unzipped and handled using the XML tools mentioned above, or using Python packages like python-docx and python-excel.
Pattern
The pattern library by the CLiPS lab at the University of Antwerp is designed specifically for language processing of web data and contains a toolkit for fetching data via web APIS: Google, Gmail, Bing, Twitter, Facebook, Wikipedia, and more. It supports HTML DOM parsing and even includes a web crawler!
For example to ingest Twitter data:
End of explanation
"""
from pattern.en import parse, parsetree
s = "The man hit the building with a baseball bat."
print parse(s, relations=True, lemmata=True)
print
for clause in parsetree(s):
for chunk in clause.chunks:
for word in chunk.words:
print word,
print
"""
Explanation: Pattern also contains an NLP toolkit for English in the pattern.en module that utilizes statistical approaches and regular expressions. Other languages include Spanish, French, Italian, German, and Dutch.
The pattern parser will identify word classes (e.g. Part of Speech tagging), perform morphological inflection analysis, and includes a WordNet API for lemmatization.
End of explanation
"""
from pattern.search import search
s = "The man hit the building with a baseball bat."
pt = parsetree(s, relations=True, lemmata=True)
for match in search('NP VP', pt):
print match
"""
Explanation: The pattern.search module allows you to retrieve N-Grams from text based on phrasal patterns, and can be used to mine dependencies from text, e.g.
End of explanation
"""
import nltk
text = get_text("nrRB0.html")
for idx, s in enumerate(nltk.sent_tokenize(text)): # Segmentation
words = nltk.wordpunct_tokenize(s) # Tokenization
tags = nltk.pos_tag(words) # Part of Speech tagging
print tags
print
if idx > 5:
break
from nltk import FreqDist
from nltk import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
text = get_text("nrRB0.html")
vocab = FreqDist()
words = FreqDist()
for s in nltk.sent_tokenize(text):
for word in nltk.wordpunct_tokenize(s):
words[word] += 1
lemma = lemmatizer.lemmatize(word)
vocab[lemma] += 1
print words
print vocab
"""
Explanation: Lastly, the pattern.vector module has a toolkit for distance-based bag-of-words machine learning, including clustering (K-Means, Hierarchical Clustering) and classification.
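The distance-based bag-of-words idea reduces to vector similarity; here is a minimal cosine-similarity sketch over word counts (an illustration, not pattern.vector's API):

```python
import math
from collections import Counter

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

d1 = Counter("the man hit the building".split())
d2 = Counter("the man hit the man".split())
```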
NLTK
Suite of libraries for a variety of academic text processing tasks:
tokenization, stemming, tagging,
chunking, parsing, classification,
language modeling, logical semantics
Pedagogical resources for teaching NLP theory in Python ...
Python interface to over 50 corpora and lexical resources
Focus on Machine Learning with specific domain knowledge
Free and Open Source
Numpy and Scipy under the hood
Fast and Formal
What is NLTK not?
Production ready out of the box*
Lightweight
Generally applicable
Magic
There are actually a few things that are production ready right out of the box.
The Good Parts:
Preprocessing
segmentation
tokenization
PoS tagging
Word level processing
WordNet
Lemmatization
Stemming
NGram
Utilities
Tree
FreqDist
ConditionalFreqDist
Streaming CorpusReader objects
Classification
Maximum Entropy (Megam Algorithm)
Naive Bayes
Decision Tree
Chunking, Named Entity Recognition
Parsers Galore!
The Bad Parts:
Syntactic Parsing
No included grammar (not a black box)
Feature/Dependency Parsing
No included feature grammar
The sem package
Toy only (lambda-calculus & first order logic)
Lots of extra stuff
papers, chat programs, alignments, etc.
End of explanation
"""
import os
import nltk
import time
import random
import pickle
import string
from bs4 import BeautifulSoup
from nltk.corpus import CategorizedPlaintextCorpusReader
# The first group captures the category folder, docs are any HTML file.
CORPUS_ROOT = './corpus'
DOC_PATTERN = r'(?!\.).*\.html'
CAT_PATTERN = r'([a-z_]+)/.*'
# Specialized Corpus Reader for HTML documents
class CategorizedHTMLCorpusreader(CategorizedPlaintextCorpusReader):
"""
Reads only the HTML body for the words and strips any tags.
"""
def _read_word_block(self, stream):
soup = BeautifulSoup(stream, 'lxml')
return self._word_tokenizer.tokenize(soup.get_text())
def _read_para_block(self, stream):
soup = BeautifulSoup(stream, 'lxml')
paras = []
piter = soup.find_all('p') if soup.find('p') else self._para_block_reader(stream)
        for para in piter:
            text = para.get_text() if hasattr(para, 'get_text') else para
            paras.append([self._word_tokenizer.tokenize(sent)
                          for sent in self._sent_tokenizer.tokenize(text)])
return paras
# Create our corpus reader
rss_corpus = CategorizedHTMLCorpusreader(CORPUS_ROOT, DOC_PATTERN,
cat_pattern=CAT_PATTERN, encoding='utf-8')
"""
Explanation: The first thing you needed to do was create a corpus reader that could read the RSS feeds and their topics, implementing one of the built-in corpus readers:
End of explanation
"""
# Create feature extractor methodology
def normalize_words(document):
"""
Expects as input a list of words that make up a document. This will
yield only lowercase significant words (excluding stopwords and
punctuation) and will lemmatize all words to ensure that we have word
forms that are standardized.
"""
stopwords = set(nltk.corpus.stopwords.words('english'))
lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()
for token in document:
token = token.lower()
if token in string.punctuation: continue
if token in stopwords: continue
yield lemmatizer.lemmatize(token)
def document_features(document):
words = nltk.FreqDist(normalize_words(document))
feats = {}
for word in words.keys():
feats['contains(%s)' % word] = True
return feats
"""
Explanation: Just to make things easy, I've also included all of the imports at the top of this snippet in case you're just copying and pasting. This should give you a corpus that is easily readable with the following properties:
RSS Corpus contains 5506 files in 11 categories
Vocab: 69642 in 1920455 words for a lexical diversity of 27.576
This snippet demonstrates a choice I made - to override the _read_word_block and the _read_para_block functions in the CategorizedPlaintextCorpusReader, but of course you could have created your own HTMLCorpusReader class that implemented the categorization features.
The next thing to do is to figure out how you will generate your featuresets, I hope that you used unigrams, bigrams, TF-IDF and others. The simplest thing to do is simply a bag of words approach, however I have ensured that this bag of words does not contain punctuation or stopwords, has been normalized to all lowercase and has been lemmatized to reduce the number of word forms:
End of explanation
"""
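TF-IDF, mentioned above as an alternative feature scheme, can be sketched in a few lines (a simplified formulation for illustration; a real pipeline would more likely use scikit-learn's TfidfVectorizer):

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: list of token lists; returns one {term: score} dict per doc
    N = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(N / df[t])
                       for t, c in tf.items()})
    return scores

docs = [["dog", "bites", "man"], ["man", "bites", "dog"], ["dog", "runs"]]
scores = tfidf(docs)
```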
def timeit(func):
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
delta = time.time() - start
return result, delta
return wrapper
@timeit
def generate_datasets(test_size=550, pickle_dir="."):
"""
Creates three data sets; a test set and dev test set of 550 documents
then a training set with the rest of the documents in the corpus. It
will then write the data sets to disk at the pickle_dir.
"""
documents = [(document_features(rss_corpus.words(fileid)), category)
for category in rss_corpus.categories()
for fileid in rss_corpus.fileids(category)]
random.shuffle(documents)
datasets = {
'test': documents[0:test_size],
'devtest': documents[test_size:test_size*2],
'training': documents[test_size*2:],
}
for name, data in datasets.items():
with open(os.path.join(pickle_dir, name+".pickle"), 'wb') as out:
pickle.dump(data, out)
def load_datasets(pickle_dir="."):
"""
Loads the randomly shuffled data sets from their pickles on disk.
"""
def loader(name):
path = os.path.join(pickle_dir, name+".pickle")
with open(path, 'rb') as f:
data = pickle.load(f)
return name, data
return dict(loader(name) for name in ('test', 'devtest', 'training'))
# Using a time it decorator you can see that this saves you quite a few seconds:
_, delta = generate_datasets(pickle_dir='datasets')
print "Took %0.3f seconds to generate datasets" % delta
"""
Explanation: You should save a training, devtest and test as pickles to disk so that you can easily work on your classifier without having to worry about the overhead of randomization. I went ahead and saved the features to disk; but if you're developing features then you'll only save the word lists to disk. Here are the functions both for generation and for loading the data sets:
End of explanation
"""
@timeit
def train_classifier(training, path='classifier.pickle'):
"""
Trains the classifier and saves it to disk.
"""
classifier = nltk.MaxentClassifier.train(training,
algorithm='megam', trace=2, gaussian_prior_sigma=1)
with open(path, 'wb') as out:
pickle.dump(classifier, out)
return classifier
datasets = load_datasets(pickle_dir='datasets')
classifier, delta = train_classifier(datasets['training'])
print "trained in %0.3f seconds" % delta
testacc = nltk.classify.accuracy(classifier, datasets['test']) * 100
print "test accuracy %0.2f%%" % testacc
classifier.show_most_informative_features(30)
from operator import itemgetter
def classify(text, explain=False):
classifier = None
with open('classifier.pickle', 'rb') as f:
classifier = pickle.load(f)
document = nltk.wordpunct_tokenize(text)
features = document_features(document)
pd = classifier.prob_classify(features)
for result in sorted([(s,pd.prob(s)) for s in pd.samples()], key=itemgetter(1), reverse=True):
print "%s: %0.4f" % result
print
if explain:
classifier.explain(features)
classify(get_text("nrRB0.html"), True)
classifier.explain(document_features(nltk.wordpunct_tokenize(get_text("nrRB0.html"))))
"""
Explanation: Last up is the building of the classifier. I used a maximum entropy classifier with the lemmatized word level features. Also note that I used the MEGAM algorithm to significantly speed up my training time:
End of explanation
"""
import os
from nltk.tag.stanford import NERTagger
from nltk.parse.stanford import StanfordParser
## NER JAR and Models
STANFORD_NER_MODEL = os.path.expanduser("~/Development/stanford-ner-2014-01-04/classifiers/english.all.3class.distsim.crf.ser.gz")
STANFORD_NER_JAR = os.path.expanduser("~/Development/stanford-ner-2014-01-04/stanford-ner-2014-01-04.jar")
## Parser JAR and Models
STANFORD_PARSER_MODELS = os.path.expanduser("~/Development/stanford-parser-full-2014-10-31/stanford-parser-3.5.0-models.jar")
STANFORD_PARSER_JAR = os.path.expanduser("~/Development/stanford-parser-full-2014-10-31/stanford-parser.jar")
def create_tagger(model=None, jar=None, encoding='ASCII'):
model = model or STANFORD_NER_MODEL
jar = jar or STANFORD_NER_JAR
return NERTagger(model, jar, encoding)
def create_parser(models=None, jar=None, **kwargs):
models = models or STANFORD_PARSER_MODELS
jar = jar or STANFORD_PARSER_JAR
return StanfordParser(jar, models, **kwargs)
class NER(object):
tagger = None
@classmethod
def initialize_tagger(klass, model=None, jar=None, encoding='ASCII'):
klass.tagger = create_tagger(model, jar, encoding)
@classmethod
def tag(klass, sent):
if klass.tagger is None:
klass.initialize_tagger()
sent = nltk.word_tokenize(sent)
return klass.tagger.tag(sent)
class Parser(object):
parser = None
@classmethod
def initialize_parser(klass, models=None, jar=None, **kwargs):
klass.parser = create_parser(models, jar, **kwargs)
@classmethod
def parse(klass, sent):
if klass.parser is None:
klass.initialize_parser()
return klass.parser.raw_parse(sent)
def tag(sent):
return NER.tag(sent)
def parse(sent):
return Parser.parse(sent)
tag("The man hit the building with the bat.")
for p in parse("The man hit the building with the bat."):
print p
"""
Explanation: The classifier did well - it trained in 2 minutes or so and it got an initial accuracy of about 83% - a pretty good start!
Parsing with Stanford Parser and NLTK
NLTK's built-in parsing is notoriously bad - it's pedagogical, not built for production. However, you can use the Stanford parser through NLTK.
End of explanation
"""
from textblob import TextBlob
from bs4 import BeautifulSoup
text = TextBlob(get_text("nrRB0.html"))
print text.sentences
import nltk
np = nltk.FreqDist(text.noun_phrases)
print np.most_common(10)
print text.sentiment
review = TextBlob("Harrison Ford would be the most amazing, most wonderful, most handsome actor - the greatest that ever lived, if only he didn't have that silly earring.")
print review.sentiment
"""
Explanation: TextBlob
A lightweight wrapper around nltk that provides a simple "Blob" interface for working with text.
End of explanation
"""
b = TextBlob(u"بسيط هو أفضل من مجمع")
b.detect_language()
chinese_blob = TextBlob(u"美丽优于丑陋")
chinese_blob.translate(from_lang="zh-CN", to='en')
en_blob = TextBlob(u"Simple is better than complex.")
en_blob.translate(to="es")
"""
Explanation: Language Detection using TextBlob
End of explanation
"""
from __future__ import unicode_literals
from spacy.en import English
nlp = English()
tokens = nlp(u'The man hit the building with the baseball bat.')
baseball = tokens[7]
print (baseball.orth, baseball.orth_, baseball.head.lemma, baseball.head.lemma_)
tokens = nlp(u'The man hit the building with the baseball bat.', parse=True)
for token in tokens:
print token.prob
"""
Explanation: spaCy
Industrial strength NLP, in Python but with a strong Cython backend. Super fast. Licensing issue though.
End of explanation
"""
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_10b import *
"""
Explanation: Training in mixed precision
End of explanation
"""
# export
import apex.fp16_utils as fp16
"""
Explanation: A little bit of theory
Jump_to lesson 12 video
Continuing the documentation on the fastai_v1 development here is a brief piece about mixed precision training. A very nice and clear introduction to it is this video from NVIDIA.
What's half precision?
In neural nets, all the computations are usually done in single precision, which means all the floats in all the arrays that represent inputs, activations, weights... are 32-bit floats (FP32 in the rest of this post). An idea to reduce memory usage (and avoid those annoying cuda errors) has been to try to do the same thing in half precision, which means using 16-bit floats (or FP16 in the rest of this post). By definition, they take half the space in RAM, and in theory could allow you to double the size of your model and double your batch size.
Another very nice feature is that NVIDIA developed its latest GPUs (the Volta generation) to take full advantage of half-precision tensors. Basically, if you give half-precision tensors to those, they'll stack them so that each core can do more operations at the same time, theoretically giving an 8x speed-up (sadly, just in theory).
So training at half precision is better for your memory usage, and way faster if you have a Volta GPU (still a tiny bit faster if you don't, since the computations are easier). How do we do it? Super easily in pytorch: we just have to put .half() everywhere, on the inputs of our model and on all the parameters. The problem is that you usually won't see the same accuracy in the end (though it sometimes works out) because half-precision is... well... not as precise ;).
Problems with half-precision:
To understand the problems with half precision, let's look briefly at what an FP16 looks like (more information here).
The sign bit gives us +1 or -1, then we have 5 bits to code an exponent between -14 and 15, while the fraction part has the remaining 10 bits. Compared to FP32, we have a smaller range of possible values (roughly 2^-14 to 2^15, compared to 2^-126 to 2^127 for FP32) but also less precision.
For instance, between 1 and 2, the FP16 format can only represent the numbers 1, 1+2^-10, 1+2*2^-10... which means that 1 + 0.0001 = 1 in half precision. That's what will cause a certain number of problems, specifically three that can occur and mess up your training.
1. The weight update is imprecise: inside your optimizer, you basically do w = w - lr * w.grad for each weight of your network. The problem in performing this operation in half precision is that very often, w.grad is several orders of magnitude below w, and the learning rate is also small. The situation where w=1 and lr * w.grad is 0.0001 (or lower) is therefore very common, but the update doesn't do anything in those cases.
2. Your gradients can underflow. In FP16, your gradients can easily be replaced by 0 because they are too low.
3. Your activations or loss can overflow. The opposite problem from the gradients: it's easier to hit nan (or infinity) in FP16 precision, and your training might more easily diverge.
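All three failure modes are easy to reproduce without a GPU. As an illustrative sketch, Python's struct module can round a float through the IEEE-754 half-precision format, which behaves the same way a real FP16 tensor element does:

```python
import math
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE-754 half-precision value.

    struct raises OverflowError for magnitudes beyond FP16's max (65504);
    real FP16 hardware returns infinity there, so we mimic that.
    """
    try:
        return struct.unpack('e', struct.pack('e', x))[0]
    except OverflowError:
        return math.copysign(math.inf, x)

# Problem 1 -- imprecise updates: a small step is rounded away entirely
assert to_fp16(1.0 + 0.0001) == 1.0
# Problem 2 -- underflow: a tiny gradient collapses to exactly zero
assert to_fp16(1e-8) == 0.0
# Problem 3 -- overflow: a large activation becomes infinity
assert to_fp16(1e6) == math.inf
```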
The solution: mixed precision training
To address those three problems, we don't fully train in FP16 precision. As the name mixed precision training implies, some of the operations will be done in FP16, others in FP32. This is mainly to take care of the first problem listed above. For the next two there are additional tricks.
The main idea is that we want to do the forward pass and the gradient computation in half precision (to go fast) but the update in single precision (to be more precise). It's okay if w and grad are both half floats, but when we do the operation w = w - lr * grad, we need to compute it in FP32. That way our 1 + 0.0001 is going to be 1.0001.
This is why we keep a copy of the weights in FP32 (called master model). Then, our training loop will look like:
1. compute the output with the FP16 model, then the loss
2. back-propagate the gradients in half-precision.
3. copy the gradients in FP32 precision
4. do the update on the master model (in FP32 precision)
5. copy the master model in the FP16 model.
Note that we lose precision during step 5, and that the 1.0001 in one of the weights will go back to 1. But if the next update corresponds to add 0.0001 again, since the optimizer step is done on the master model, the 1.0001 will become 1.0002 and if we eventually go like this up to 1.0005, the FP16 model will be able to tell the difference.
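Here is a quick sketch of why the master copy matters, again simulating FP16 rounding with the struct module (illustrative only; in the real training loop the optimizer does this across whole tensors):

```python
import struct

def to_fp16(x):
    # round-trip through IEEE-754 half precision
    return struct.unpack('e', struct.pack('e', x))[0]

w_fp16 = 1.0    # weight as the FP16 model sees it
master = 1.0    # FP32 master copy of the same weight
step = 0.0001   # lr * grad, the same tiny update five times in a row

for _ in range(5):
    w_fp16 = to_fp16(w_fp16 + step)  # update done in FP16: rounded away each time
    master += step                   # update done on the master in full precision

assert w_fp16 == 1.0            # the FP16-only weight never moved
assert to_fp16(master) > 1.0    # copying the master back (step 5) finally registers
```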
That takes care of problem 1. For the second problem, we use something called gradient scaling: to avoid the gradients getting zeroed by the FP16 precision, we multiply the loss by a scale factor (scale=512 for instance). That way we can shift the gradients back into FP16's representable range, and have them not become zero.
Of course we don't want those 512-scaled gradients to be in the weight update, so after converting them into FP32, we can divide them by this scale factor (once they have no risks of becoming 0). This changes the loop to:
1. compute the output with the FP16 model, then the loss.
2. multiply the loss by scale then back-propagate the gradients in half-precision.
3. copy the gradients in FP32 precision then divide them by scale.
4. do the update on the master model (in FP32 precision).
5. copy the master model in the FP16 model.
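A small illustration of why the scaling step rescues tiny gradients, again simulating FP16 with struct (the gradient value and the scale of 512 here are toy numbers for the demo):

```python
import struct

def to_fp16(x):
    # round-trip through IEEE-754 half precision
    return struct.unpack('e', struct.pack('e', x))[0]

grad, scale = 1e-8, 512

assert to_fp16(grad) == 0.0        # unscaled, the gradient underflows to zero
scaled = to_fp16(grad * scale)     # scaled, it lands in FP16's representable range
assert scaled > 0.0
assert scaled / scale != 0.0       # unscaling later in FP32 preserves the information
```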
For the last problem, the tricks offered by NVIDIA are to leave the batchnorm layers in single precision (they don't have many weights so it's not a big memory challenge) and compute the loss in single precision (which means converting the last output of the model in single precision before passing it to the loss).
Implementing all of this in the new callback system is surprisingly easy, let's dig into this!
Util functions
Before going in the main Callback we will need some helper functions. We will refactor using the APEX library util functions. The python-only build is enough for what we will use here if you don't manage to do the CUDA/C++ installation.
End of explanation
"""
bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
def bn_to_float(model):
if isinstance(model, bn_types): model.float()
for child in model.children(): bn_to_float(child)
return model
def model_to_half(model):
model = model.half()
return bn_to_float(model)
"""
Explanation: Converting the model to FP16
Jump_to lesson 12 video
We will need a function to convert all the layers of the model to FP16 precision except the BatchNorm-like layers (since those need to be done in FP32 precision to be stable). We do this in two steps: first we convert the model to FP16, then we loop over all the layers and put them back to FP32 if they are a BatchNorm layer.
End of explanation
"""
model = nn.Sequential(nn.Linear(10,30), nn.BatchNorm1d(30), nn.Linear(30,2)).cuda()
model = model_to_half(model)
def check_weights(model):
for i,t in enumerate([torch.float16, torch.float32, torch.float16]):
assert model[i].weight.dtype == t
assert model[i].bias.dtype == t
check_weights(model)
"""
Explanation: Let's test this:
End of explanation
"""
model = nn.Sequential(nn.Linear(10,30), nn.BatchNorm1d(30), nn.Linear(30,2)).cuda()
model = fp16.convert_network(model, torch.float16)
check_weights(model)
"""
Explanation: In Apex, the function that does this for us is convert_network. We can use it to put the model in FP16 or back to FP32.
End of explanation
"""
from torch.nn.utils import parameters_to_vector
def get_master(model, flat_master=False):
model_params = [param for param in model.parameters() if param.requires_grad]
if flat_master:
master_param = parameters_to_vector([param.data.float() for param in model_params])
master_param = torch.nn.Parameter(master_param, requires_grad=True)
if master_param.grad is None: master_param.grad = master_param.new(*master_param.size())
return model_params, [master_param]
else:
master_params = [param.clone().float().detach() for param in model_params]
for param in master_params: param.requires_grad_(True)
return model_params, master_params
"""
Explanation: Creating the master copy of the parameters
From our model parameters (mostly in FP16), we'll want to create a copy in FP32 (master parameters) that we will use for the step in the optimizer. Optionally, we can concatenate all the parameters into one big flat tensor, which can make that step a little bit faster.
End of explanation
"""
model_p,master_p = get_master(model)
model_p1,master_p1 = fp16.prep_param_lists(model)
def same_lists(ps1, ps2):
assert len(ps1) == len(ps2)
for (p1,p2) in zip(ps1,ps2):
assert p1.requires_grad == p2.requires_grad
assert torch.allclose(p1.data.float(), p2.data.float())
same_lists(model_p,model_p1)
same_lists(model_p,master_p)
same_lists(master_p,master_p1)
same_lists(model_p1,master_p1)
"""
Explanation: The util function from Apex to do this is prep_param_lists.
End of explanation
"""
model1 = nn.Sequential(nn.Linear(10,30), nn.Linear(30,2)).cuda()
model1 = fp16.convert_network(model1, torch.float16)
model_p,master_p = get_master(model1, flat_master=True)
model_p1,master_p1 = fp16.prep_param_lists(model1, flat_master=True)
same_lists(model_p,model_p1)
same_lists(master_p,master_p1)
assert len(master_p[0]) == 10*30 + 30 + 30*2 + 2
assert len(master_p1[0]) == 10*30 + 30 + 30*2 + 2
"""
Explanation: We can't use flat_master when there is a mix of FP32 and FP16 parameters (like batchnorm here).
End of explanation
"""
def get_master(opt, flat_master=False):
model_params = [[param for param in pg if param.requires_grad] for pg in opt.param_groups]
if flat_master:
master_params = []
for pg in model_params:
mp = parameters_to_vector([param.data.float() for param in pg])
mp = torch.nn.Parameter(mp, requires_grad=True)
if mp.grad is None: mp.grad = mp.new(*mp.size())
master_params.append(mp)
else:
master_params = [[param.clone().float().detach() for param in pg] for pg in model_params]
for pg in master_params:
for param in pg: param.requires_grad_(True)
return model_params, master_params
"""
Explanation: The thing is that we don't always want all the parameters of our model in the same parameter group, because we might:
- want to do transfer learning and freeze some layers
- apply discriminative learning rates
- don't apply weight decay to some layers (like BatchNorm) or the bias terms
So we actually need a function that splits the parameters of an optimizer (and not a model) according to the right parameter groups.
End of explanation
"""
def to_master_grads(model_params, master_params, flat_master:bool=False)->None:
if flat_master:
if master_params[0].grad is None: master_params[0].grad = master_params[0].data.new(*master_params[0].data.size())
master_params[0].grad.data.copy_(parameters_to_vector([p.grad.data.float() for p in model_params]))
else:
for model, master in zip(model_params, master_params):
if model.grad is not None:
if master.grad is None: master.grad = master.data.new(*master.data.size())
master.grad.data.copy_(model.grad.data)
else: master.grad = None
"""
Explanation: Copy the gradients from model params to master params
After the backward pass, all gradients must be copied to the master params before the optimizer step can be done in FP32. We need a function for that (with a bit of adjustment if we have a flat master).
End of explanation
"""
x = torch.randn(20,10).half().cuda()
z = model(x)
loss = F.cross_entropy(z, torch.randint(0, 2, (20,)).cuda())
loss.backward()
to_master_grads(model_p, master_p)
def check_grads(m1, m2):
for p1,p2 in zip(m1,m2):
if p1.grad is None: assert p2.grad is None
else: assert torch.allclose(p1.grad.data, p2.grad.data)
check_grads(model_p, master_p)
fp16.model_grads_to_master_grads(model_p, master_p)
check_grads(model_p, master_p)
"""
Explanation: The corresponding function in the Apex utils is model_grads_to_master_grads.
End of explanation
"""
from torch._utils import _unflatten_dense_tensors
def to_model_params(model_params, master_params, flat_master:bool=False)->None:
if flat_master:
for model, master in zip(model_params, _unflatten_dense_tensors(master_params[0].data, model_params)):
model.data.copy_(master)
else:
for model, master in zip(model_params, master_params): model.data.copy_(master.data)
"""
Explanation: Copy the master params to the model params
After the step, we need to copy back the master parameters to the model parameters for the next update.
End of explanation
"""
# export
def get_master(opt, flat_master=False):
model_pgs = [[param for param in pg if param.requires_grad] for pg in opt.param_groups]
if flat_master:
master_pgs = []
for pg in model_pgs:
mp = parameters_to_vector([param.data.float() for param in pg])
mp = torch.nn.Parameter(mp, requires_grad=True)
if mp.grad is None: mp.grad = mp.new(*mp.size())
master_pgs.append([mp])
else:
master_pgs = [[param.clone().float().detach() for param in pg] for pg in model_pgs]
for pg in master_pgs:
for param in pg: param.requires_grad_(True)
return model_pgs, master_pgs
# export
def to_master_grads(model_pgs, master_pgs, flat_master:bool=False)->None:
for (model_params,master_params) in zip(model_pgs,master_pgs):
fp16.model_grads_to_master_grads(model_params, master_params, flat_master=flat_master)
# export
def to_model_params(model_pgs, master_pgs, flat_master:bool=False)->None:
for (model_params,master_params) in zip(model_pgs,master_pgs):
fp16.master_params_to_model_params(model_params, master_params, flat_master=flat_master)
"""
Explanation: The corresponding function in Apex is master_params_to_model_params.
But we need to handle param groups
The thing is that we don't always want all the parameters of our model in the same parameter group, because we might:
- want to do transfer learning and freeze some layers
- apply discriminative learning rates
- don't apply weight decay to some layers (like BatchNorm) or the bias terms
So we actually need a function that splits the parameters of an optimizer (and not a model) according to the right parameter groups, and the following functions need to handle lists of lists of parameters (one list for each param group in model_pgs and master_pgs).
End of explanation
"""
class MixedPrecision(Callback):
_order = 99
def __init__(self, loss_scale=512, flat_master=False):
assert torch.backends.cudnn.enabled, "Mixed precision training requires cudnn."
self.loss_scale,self.flat_master = loss_scale,flat_master
def begin_fit(self):
self.run.model = fp16.convert_network(self.model, dtype=torch.float16)
self.model_pgs, self.master_pgs = get_master(self.opt, self.flat_master)
#Changes the optimizer so that the optimization step is done in FP32.
self.run.opt.param_groups = self.master_pgs #Put those param groups inside our runner.
def after_fit(self): self.model.float()
def begin_batch(self): self.run.xb = self.run.xb.half() #Put the inputs to half precision
def after_pred(self): self.run.pred = self.run.pred.float() #Compute the loss in FP32
def after_loss(self): self.run.loss *= self.loss_scale #Loss scaling to avoid gradient underflow
def after_backward(self):
#Copy the gradients to master and unscale
to_master_grads(self.model_pgs, self.master_pgs, self.flat_master)
for master_params in self.master_pgs:
for param in master_params:
if param.grad is not None: param.grad.div_(self.loss_scale)
def after_step(self):
#Zero the gradients of the model since the optimizer is disconnected.
self.model.zero_grad()
#Update the params from master to model.
to_model_params(self.model_pgs, self.master_pgs, self.flat_master)
"""
Explanation: The main Callback
Jump_to lesson 12 video
End of explanation
"""
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs = 64
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
nfs = [32,64,128,256,512]
def get_learner(nfs, data, lr, layer, loss_func=F.cross_entropy,
cb_funcs=None, opt_func=adam_opt(), **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
return Learner(model, data, loss_func, lr=lr, cb_funcs=cb_funcs, opt_func=opt_func)
"""
Explanation: Now let's test this on Imagenette
End of explanation
"""
cbfs = [partial(AvgStatsCallback,accuracy),
ProgressCallback,
CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
learn = get_learner(nfs, data, 1e-2, conv_layer, cb_funcs=cbfs)
learn.fit(1)
"""
Explanation: Training without mixed precision
End of explanation
"""
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback,
ProgressCallback,
partial(BatchTransformXCallback, norm_imagenette),
MixedPrecision]
learn = get_learner(nfs, data, 1e-2, conv_layer, cb_funcs=cbfs)
learn.fit(1)
test_eq(next(learn.model.parameters()).type(), 'torch.cuda.FloatTensor')
"""
Explanation: Training with mixed precision
End of explanation
"""
# export
def test_overflow(x):
s = float(x.float().sum())
return (s == float('inf') or s == float('-inf') or s != s)
x = torch.randn(512,1024).cuda()
test_overflow(x)
x[123,145] = float('inf')
test_overflow(x)
%timeit test_overflow(x)
%timeit torch.isnan(x).any().item()
"""
Explanation: Dynamic loss scaling
The only annoying thing with the previous implementation of mixed precision training is that it introduces one new hyper-parameter to tune, the value of the loss scaling. Fortunately for us, there is a way around this. We want the loss scaling to be as high as possible so that our gradients can use the whole range of representation, so let's first try a really high value. In all likelihood, this will cause our gradients or our loss to overflow, and we will try again with half that big value, and again, until we get to the largest loss scale possible that doesn't make our gradients overflow.
That value will be well fitted to our model, and it can keep being adjusted dynamically as training goes on: if it's still too high, we just halve it each time we overflow. After a while, though, training will converge and gradients will start to get smaller, so we also need a mechanism to make this dynamic loss scale larger again if it's safe to do so. The strategy used in the Apex library is to multiply the loss scale by 2 each time we go a given number of iterations without overflowing.
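That policy is simple enough to sketch in a few lines of plain Python. This is an illustration with toy numbers, not the exact callback logic (for instance, whether the counter resets on overflow differs slightly from the implementation below):

```python
class DynamicLossScaler:
    """Halve the scale on overflow; double it after scale_wait clean steps."""
    def __init__(self, max_loss_scale=2.**24, div_factor=2., scale_wait=500):
        self.loss_scale = max_loss_scale
        self.div_factor, self.scale_wait = div_factor, scale_wait
        self.count = 0

    def update(self, overflowed):
        if overflowed:
            self.loss_scale /= self.div_factor  # too high: back off and retry the batch
            self.count = 0
        else:
            self.count += 1
            if self.count == self.scale_wait:   # long enough without trouble: push up
                self.loss_scale *= self.div_factor
                self.count = 0

# tiny demo with toy numbers
scaler = DynamicLossScaler(max_loss_scale=1024, scale_wait=2)
scaler.update(True)                          # overflow: 1024 -> 512
after_overflow = scaler.loss_scale
scaler.update(False); scaler.update(False)   # two clean steps: 512 -> 1024
```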
To check if the gradients have overflowed, we check their sum (computed in FP32). If one term is nan, the sum will be nan. Interestingly, on the GPU, it's faster than checking torch.isnan:
Jump_to lesson 12 video
End of explanation
"""
# export
def grad_overflow(param_groups):
for group in param_groups:
for p in group:
if p.grad is not None:
s = float(p.grad.data.float().sum())
if s == float('inf') or s == float('-inf') or s != s: return True
return False
"""
Explanation: So we can use it in the following function that checks for gradient overflow:
End of explanation
"""
# export
class MixedPrecision(Callback):
_order = 99
def __init__(self, loss_scale=512, flat_master=False, dynamic=True, max_loss_scale=2.**24, div_factor=2.,
scale_wait=500):
assert torch.backends.cudnn.enabled, "Mixed precision training requires cudnn."
self.flat_master,self.dynamic,self.max_loss_scale = flat_master,dynamic,max_loss_scale
self.div_factor,self.scale_wait = div_factor,scale_wait
self.loss_scale = max_loss_scale if dynamic else loss_scale
def begin_fit(self):
self.run.model = fp16.convert_network(self.model, dtype=torch.float16)
self.model_pgs, self.master_pgs = get_master(self.opt, self.flat_master)
#Changes the optimizer so that the optimization step is done in FP32.
self.run.opt.param_groups = self.master_pgs #Put those param groups inside our runner.
if self.dynamic: self.count = 0
def begin_batch(self): self.run.xb = self.run.xb.half() #Put the inputs to half precision
def after_pred(self): self.run.pred = self.run.pred.float() #Compute the loss in FP32
def after_loss(self):
if self.in_train: self.run.loss *= self.loss_scale #Loss scaling to avoid gradient underflow
def after_backward(self):
#First, check for an overflow
if self.dynamic and grad_overflow(self.model_pgs):
#Divide the loss scale by div_factor, zero the grad (after_step will be skipped)
self.loss_scale /= self.div_factor
self.model.zero_grad()
return True #skip step and zero_grad
#Copy the gradients to master and unscale
to_master_grads(self.model_pgs, self.master_pgs, self.flat_master)
for master_params in self.master_pgs:
for param in master_params:
if param.grad is not None: param.grad.div_(self.loss_scale)
#Check if it's been long enough without overflow
if self.dynamic:
self.count += 1
if self.count == self.scale_wait:
self.count = 0
self.loss_scale *= self.div_factor
def after_step(self):
#Zero the gradients of the model since the optimizer is disconnected.
self.model.zero_grad()
#Update the params from master to model.
to_model_params(self.model_pgs, self.master_pgs, self.flat_master)
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback,
ProgressCallback,
partial(BatchTransformXCallback, norm_imagenette),
MixedPrecision]
learn = get_learner(nfs, data, 1e-2, conv_layer, cb_funcs=cbfs)
learn.fit(1)
"""
Explanation: And now we can write a new version of the Callback that handles dynamic loss scaling.
End of explanation
"""
learn.cbs[-1].loss_scale
"""
Explanation: The loss scale used is way higher than our previous number:
End of explanation
"""
!./notebook2script.py 10c_fp16.ipynb
"""
Explanation: Export
End of explanation
"""
len
"""
Explanation: Class 9: Functions
A painful analogy
What do you do when you wake up in the morning?
I don't know about you, but I get ready.
"Obviously," you say, a little too snidely for my liking. You're particular, very detail-oriented, and need more information out of me.
Fine, then. Since you're going to be nitpicky, I might be able to break it down a little bit more for you...
I get out of bed
I take a shower
I get dressed
I eat breakfast
Unfortunately that's not good enough for you. "But how do you eat breakfast?" Well, maybe I...
Get a bowl out of a cabinet
Get some cereal out of the pantry
Get some milk out of the fridge
Pour some cereal into a bowl
Pour some milk into the bowl
Sit down at the table and start eating
"Are you eating with a spoon?" you interrupt. "When did you get the spoon out? Was that after the milk, or before the bowl?"
It's annoying people like this that make us have functions.
FUN FACT: The joke's on you, because I don't even actually eat cereal. Maybe I don't even get ready in the morning, either.
What is a function?
Functions are chunks of code that do something. They're different than the code we've written so far because they have names.
Instead of detailing each and every step involved in eating breakfast, I just use "I eat breakfast" as a shorthand for many, many detailed steps. Functions are the same - they allow us to take complicated parts of code, give it a name, and type just_eat_breakfast() every morning instead of twenty-five lines of code.
What are some examples of functions?
We've used a lot of functions in our time with Python. You remember our good buddy len? It's a function that gives back the length of whatever you send its way, e.g. len("ghost") is 5 and len("cartography") is 11.
End of explanation
"""
max
print
import requests
requests.get
# And if we just wanted to use them, for some reason
n = -34
print(n, "in absolute value is", abs(n))
print("We can add after casting to int:", 55 + int("55"))
n = 4.4847
print(n, "can be rounded to", round(n))
print(n, "can also be rounded to 2 decimal points", round(n, 2))
numbers = [4, 22, 40, 54]
print("The total of the list is", sum(numbers))
"""
Explanation: Almost everything useful is a function. Python has a ton of other built-in functions!
Along with len, a couple you might have seen are:
abs(...) takes a number and returns the absolute value of the number
int(...) takes a string or float and returns it as an integer
round(...) takes a float and returns a rounded version of it
sum(...) takes a list and returns the sum of all of its elements
max(...) takes a list and returns the largest of all of its elements
print(...) takes whatever you want to give it and displays it on the screen
Functions can also come from packages and libraries. The .get part of requests.get is a function, too!
And here, to prove it to you?
End of explanation
"""
def urlretrieve(url, filename=None, reporthook=None, data=None):
url_type, path = splittype(url)
with contextlib.closing(urlopen(url, data)) as fp:
headers = fp.info()
# Just return the local path and the "headers" for file://
# URLs. No sense in performing a copy unless requested.
if url_type == "file" and not filename:
return os.path.normpath(path), headers
# Handle temporary file setup.
if filename:
tfp = open(filename, 'wb')
else:
tfp = tempfile.NamedTemporaryFile(delete=False)
filename = tfp.name
_url_tempfiles.append(filename)
with tfp:
result = filename, headers
bs = 1024*8
size = -1
read = 0
blocknum = 0
if "content-length" in headers:
size = int(headers["Content-Length"])
if reporthook:
reporthook(blocknum, bs, size)
while True:
block = fp.read(bs)
if not block:
break
read += len(block)
tfp.write(block)
blocknum += 1
if reporthook:
reporthook(blocknum, bs, size)
if size >= 0 and read < size:
raise ContentTooShortError(
"retrieval incomplete: got only %i out of %i bytes"
% (read, size), result)
return result
"""
Explanation: See? Functions make the world run.
One useful role they play is functions hide code that you wouldn't want to type a thousand times. For example, you might have used urlretrieve from urllib to download files from around the internet. If you didn't use urlretrieve you'd have to type all of this:
End of explanation
"""
# A function to multiply a number by two
def double(number):
bigger = number * 2
return bigger
#what happens inside the function STAYS inside the function
#unless you use return, you don't know what happens within the function
"""
Explanation: Horrifying, right? Thank goodness for functions.
Writing your own functions
I've always been kind of jealous of len(...) and its crowd. It seemed unfair that Python made a list of cool, important functions, and neither me nor you had any say in the matter. What if I want a function that turns all of the periods in a sentence into exclamation points, or prints out a word a hundred million times?
Well, turns out that isn't a problem. We can do that. Easily! And we will. If you can type def and use a colon, you can write a function.
A function that you write yourself looks like this:
End of explanation
"""
print("2 times two is", double(2))
print("10 times two is", double(10))
print("56 times two is", double(56))
age = 76
print("Double your age is", double(age))
"""
Explanation: It has a handful of parts:
def - tells Python "hey buddy, we're about to define a function! Get ready." And Python appropriately prepares itself.
double - is the name of the function, and it's how you'll refer to the function later on. For example, len's function name is (obviously) len.
(number) - defines the parameters that the function "takes." You can see that this function is called double, and you send it one parameter that will be called number.
return bigger - is called the return statement. If the function is a factory, this is the shipping department - return tells you what to send back to the main program.
You'll see it doesn't do anything, though. That's because we haven't called the function, which is a programmer's way of saying use the function. Let's use it!
End of explanation
"""
def greet(name):
return "Hello " + name
# This one works
print(greet("Soma"))
# Overwrite the function greet with a string
greet = "blah"
# Trying the function again breaks
print(greet("Soma"))
"""
Explanation: Function Naming
Your function name has to be unique, otherwise Python will get confused. No other functions or variabels can share its name!
For example, if you call it len it'll forget about the built-in len function, and if you give one of your variables the name print suddenly Python won't understand how print(...) works anymore.
If you end up doing this, you'll get errors like the one below
End of explanation
"""
def exclaim(potato_soup):
return potato_soup + "!!!!!!!!!!"
invitation = "I hope you can come to my wedding"
print(exclaim(invitation))
line = "I am sorry to hear you have the flu"
print(exclaim(line))
"""
Explanation: Parameters
In our function double, we have a parameter called number.
py
def double(number):
bigger = number * 2
return bigger
Notice in the last example up above, though, we called double(age). Those don't match!!!
The thing is, your function doesn't care what the variable you send it is called. Whatever you send it, it will rename. It's like if someone adopted my cat Smushface, they might think calling her Petunia would be a little bit nicer (it wouldn't be, but I wouldn't do anything about it).
Here's an example with my favorite variable name potato_soup
End of explanation
"""
name = "Nancy"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
name = "Brick"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
name = "Saint Augustine"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
"""
Explanation: invitation and line both get renamed to potato_soup inside of the function, so you can reuse the function with any variable of any name.
Let's say I have a function that does some intense calculations:
py
def sum_times_two(a, b):
added = a + b
return added * 2
To reiterate: a and b have nothing to do with the values outside of the function. You don't have to make variables called a and b and then send them to the function, the function takes care of that by itself. For example, the below examples are perfectly fine.
py
sum_times_two(2, 3)
r = 4
y = 7
sum_times_two(r, y)
When you're outside of the function, you almost never have to think about what's inside the function. You don't care what its variables are called or anything. It's a magic box. Think about how you don't know what len looks like inside, or print, but you use them all of the time!
Why functions?
Two reasons to use functions, since maybe you'll ask:
Don't Repeat Yourself - If you find yourself writing the same code again and again, it's a good time to put that code into a function. len(...) is a function because Python people decided that you shouldn't have to write length-calculating code every time you wanted to see how many characters were in a string.
Code Modularity - sometimes it's just nice to organize your code. All of your parts that deal with counting dog names can go over here, and all of the stuff that has to do with boroughs goes over there. In the end it can make for more readable and maintainable code. (Maintainable code = code you can edit in the future without thinking real hard)
Those reasons probably don't mean much to you right now, and I sure don't blame you. Abstract programming concepts are just dumb abstract things until you actually start using them.
Let's say I wanted to greet someone and then tell them how long their name is, because I'm pedantic.
End of explanation
"""
def weird_greeting(name):
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
weird_greeting("Nancy")
weird_greeting("Brick")
weird_greeting("Saint Augustine")
"""
Explanation: Do you know how exhausted I got typing all of that out? And how it makes no sense at all? Luckily, functions save us: all of our code goes into one place so we don't have to repeat ourselves, and we can give it a descriptive name.
End of explanation
"""
# Our cool function
def size_comparison(a, b):
if a > b:
return "Larger"
else:
return "Smaller"
print(size_comparison(4, 5.5))
print(size_comparison(65, 2))
print(size_comparison(34.2, 33))
"""
Explanation: return
The role of a function is generally to do something and then send the result back to us. len sends us back the length of the string, requests.get sends us back the web page we requested.
py
def double(a):
return a * 2
This is called the return statement. You don't have to send something back (print doesn't) but you usually want to.
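To make that concrete, here's a small hypothetical example showing that a function without a return statement hands back None:

```python
def double(a):
    return a * 2

def shout(word):
    # prints something, but has no return statement
    print(word.upper())

doubled = double(21)    # 42 comes back to us
returned = shout("hi")  # prints HI, but nothing comes back
print(doubled)
print(returned)         # None
```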
Writing a custom function
Let's say we have some code that compares the number of boats you have to the number of cars you have.
python
if boat_count > car_count:
    print("Larger")
else:
    print("Smaller")
Simple, right? But unfortunately we're at a rich people convention where they're always comparing the number of boats to the number of cars to the number of planes etc etc etc. If we have to check again and again and again and again for all of those people and always print Larger or Smaller I'm sure we'd get bored of typing all that. So let's convert it to a function!
Let's give our function a name of size_comparison. Remember: We can name our functions whatever we want, as long as it's unique.
Our function will take two parameters. They're boat_count and car_count above, but we want generic, re-usable names, so maybe like, uh, a and b?
For our function's return value, let's have it send back "Larger" or "Smaller".
End of explanation
"""
def to_kmh(speed):
    return round(speed * 1.6)
mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_kmh(mph), "in kmh")
"""
Explanation: Your Turn
This is a do-now even though it's not the beginning of class!
1a. Driving Speed
The code below tells you how fast you're driving. I figure that a lot of people are more familiar with kilometers per hour, though, so let's write a function that does the conversion. I wrote a skeleton, now you can fill in the conversion.
Make it display a whole number.
End of explanation
"""
# magic numbers -- unique values with unexplained meaning; better to spell out the conversion
#def to_mpm(speed):
#return speed * 26.8
#return to_kmh(speed) * 1000 / 60
def to_mpm(speed):
return round(speed * 26.8224)
mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_kmh(mph), "in kmh")
print("You are driving", to_mpm(mph), "in meters/minute")
"""
Explanation: 1b. Driving Speed Part II
Now write a function called to_mpm that, when given miles per hour, computes the meters per minute.
End of explanation
"""
def to_mpm(speed):
    mpm = to_kmh(speed) * 16.6667
    return round(mpm)
"""
Explanation: 1c. Driving Speed Part III
Rewrite to_mpm to use the to_kmh function. D.R.Y.!
End of explanation
"""
# You have to wash ten cars on every street, along with the cars in your driveway.
# With the following list of streets, how many cars do we have?
def total(n):
return n * 10
# Here are the streets
streets = ['10th Ave', '11th Street', '45th Ave']
# Let's count them up
total = len(streets)
# And add one
count = total + 1
# And see how many we have
print(total(count))
"""
Explanation: 2. Broken Function
The code below won't work. Why not?
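(Spoiler, in case you get stuck: the name total is used for both a function and a number, so total(count) ends up trying to call an int. One possible fix, with a hypothetical function name, is to rename the function:)

```python
# Rename the function so it no longer clashes with the `total` variable
def cars_on_streets(n):
    return n * 10

streets = ['10th Ave', '11th Street', '45th Ave']
total = len(streets)
count = total + 1
print(cars_on_streets(count))  # 40
```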
End of explanation
"""
first = { 'measurement': 3.4, 'scale': 'kilometer' }
second = { 'measurement': 9.1, 'scale': 'mile' }
third = { 'measurement': 2.0, 'scale': 'meter' }
fourth = { 'measurement': 9.0, 'scale': 'inches' }
def to_meters(measurement):
if measurement['scale'] == 'kilometer':
return measurement['measurement'] * 1000
if measurement['scale'] == 'meter':
return measurement['measurement']
    if measurement['scale'] == 'mile':
        return measurement['measurement'] * 1.6 * 1000
    if measurement['scale'] == 'inches':
        return measurement['measurement'] * 0.0254
    return 99
print(to_meters(first))
print(to_meters(second))
"""
Explanation: 3. Data converter
We have a bunch of data in different formats, and we need to normalize it! The data looks like this:
python
first = { 'measurement': 3.4, 'scale': 'kilometer' }
second = { 'measurement': 9.1, 'scale': 'mile' }
third = { 'measurement': 2.0, 'scale': 'meter' }
fourth = { 'measurement': 9.0, 'scale': 'inches' }
Write a function called to_meters(...). When you send it a dictionary, have it examine the measurement and scale and return the adjusted value. For the values above, 3.4 kilometers should be 3400.0 meters, 9.1 miles should be around 14600, and 9 inches should be approximately 0.23.
End of explanation
"""
|
crawfordsm/crawfordsm.github.io | _posts/hof_voters_files/hof_voters.ipynb | mit | #read in the data
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def read_votes(infile):
"""Read in the number of votes in each file"""
lines = open(infile).readlines()
hof_votes = {}
for l in lines:
player={}
l = l.split(',')
name = l[1].replace('X-', '').replace(' HOF', '').strip()
player['year'] = l[2]
player['votes'] = float(l[3])
player['p'] = float(l[4][:-1])/100.0
player['war'] = float(l[8])
hof_votes[name] = player
return hof_votes
#calculate the total number of votes in each year
hof={}
n_votes = {}
for i in np.arange(1996, 2017):
hof[i] = read_votes('{}_list.csv'.format(i))
    k = 0
    keys = list(hof[i].keys())
    while hof[i][keys[k]]['p'] < 0.5: k += 1
    k = keys[k]
    n_votes[i] = int(hof[i][k]['votes'] / hof[i][k]['p'])
n_years = 2017-1996
def match_years(hof, year1, year2):
"Produce a list of players and the number of votes received between two years"
player_dict={}
for name in hof[year1].keys():
if name in hof[year2].keys():
player_dict[name]=np.array([hof[year1][name]['p'], hof[year2][name]['p']])
return player_dict
end_year = 2017
def number_of_first_year(hof, year):
"Calculate the number of first ballot hall of famers in a class"
first_year = 0
for name in hof[year]:
if hof[year][name]['year']=='1st':
if hof[year][name]['p']>0.75: first_year+= 1
if name in ['Barry Bonds', 'Roger Clemens']: first_year+= 1
return first_year
def number_of_HOF(hof, year):
    "Calculate the number of HOF for a year"
first_year = 0
for name in hof[year]:
if hof[year][name]['p']>0.75: first_year+= 1
return first_year
def number_of_drop(hof, year):
"Calculate the number of players dropped in a year"
first_year = 0
for name in hof[year]:
if hof[year][name]['p']<0.05: first_year+= 1
return first_year
def total_number_of_hof(hof, year):
"Total number of hall of famers for a class"
first_year = 0
for name in hof[year]:
if hof[year][name]['year']=='1st':
if hof[year][name]['p']>0.75:
first_year+= 1
if name in ['Barry Bonds', 'Roger Clemens']: first_year+= 1
for y in range(year+1, end_year):
if name in hof[y].keys():
#print year, name, hof[y][name]['p']
if hof[y][name]['p']>0.75:
first_year+= 1
return first_year
def average_change_in_votes(hof, year1, year2):
"""Determine the statistics change in votes from one class to another"""
player_dict = match_years(hof, year1, year2)
#print player_dict
change = 0
count = 0
for name in player_dict:
change += player_dict[name][1] - player_dict[name][0]
count += 1
#print count, name, player_dict[name][0], player_dict[name][1], player_dict[name][1] - player_dict[name][0], change
change = change / count
return count, change
def number_of_votes(hof, year):
    keys = list(hof[year].keys())
    k = 0
    while hof[year][keys[k]]['p'] < 0.5: k += 1
    k = keys[k]
    return int(hof[year][k]['votes'] / hof[year][k]['p'])
from astropy.table import Table
data_table = Table(names=('Year','Votes', 'Strength', 'HOF', 'Drop', 'Count', 'Change', 'Total'))
for year in np.arange(1997,2017):
strength = number_of_first_year(hof, year)
nhof = number_of_HOF(hof, year)
nvotes = number_of_votes(hof, year)
ndrop = number_of_drop(hof, year)
total = total_number_of_hof(hof, year)
count, change = average_change_in_votes(hof, year-1, year)
data_table.add_row([year, nvotes, strength, nhof, ndrop, count, change, total])
plt.figure()
plt.plot(data_table['Year'], data_table['Change'], ls='', marker='o')
plt.xlabel('Year', fontsize='x-large')
plt.ylabel('$\Delta p \ (\%)$', fontsize='x-large')
plt.show()
'Mean={} Std={}'.format(data_table['Change'].mean(), data_table['Change'].std())
'Max={} Min={}'.format(data_table['Change'].max(), data_table['Change'].min())
"""
Explanation: Did the Hall of Fame voter purge make a difference?
In a recent Jayson Stark article about lessons in Hall of Fame voting, he mentions the following three assumptions about the Baseball Hall of Fame voters after a significant number of non-active voters were eliminated:
An electorate in which 109 fewer writers cast a vote in this election than in 2015.
An electorate that had a much different perspective on players who shined brightest under the light of new-age metrics.
And an electorate that appeared significantly less judgmental of players shadowed by those pesky performance-enhancing drug clouds.
However, are these last two assumptions true? Did the purge of Hall of Fame voters make a difference? Did the least active set of Hall of Fame voters have a different set of values than those who are still voting?
Arbitrarily, I decided to test this against the years 1995-2016, which gives a good 20 elections as well as starting at the year Mike Schmidt was elected to the Hall of Fame (which is utterly arbitrary other than Mike Schmidt being my favorite player when I was young). However, to figure this out, the first question that has to be answered is how the average percentage changes from year to year. This ends up being a little surprising when you just look at the numbers:
End of explanation
"""
stats.pearsonr(data_table['Year'], data_table['Change'])
stats.pearsonr(data_table['Votes'], data_table['Change'])
stats.pearsonr(data_table['Votes'][1:]-data_table['Votes'][:-1], data_table['Change'][1:])
data_table['Year', 'Votes', 'Count', 'Change', 'Strength','Total', 'HOF', 'Drop'].show_in_notebook(display_length=21)
#['Year', 'Count', 'Change', 'Strength', 'HOF', 'Drop']
"""
Explanation: As a matter of fact, this year saw one of the largest increases at 8.2%. Taken alone, this may indicate that something has changed with the removal of so many voters, but when viewed with all the other years, it does not look very exceptional as the values range between -6 to +8%. The average change is an increase by 2% per year, but with a standard deviation much larger than it of 4%. The average change in percentage is either highly random or driven by something other than change in the number of votes. In fact, the change in percentages does not show any strong correlation with the number of voters or the change in number of voters.
End of explanation
"""
nhof_2 = data_table['Total'][1:]- data_table['Strength'][1:] #number of HOFs in a class after year 1
p = data_table['Change'][1:]
dv = data_table['Votes'][1:] - data_table['Votes'][:-1]
from scipy import linalg as la
aa = np.vstack((data_table['Strength'][1:],nhof_2,data_table['HOF'][:-1], np.ones_like(nhof_2))).T
polycofs = la.lstsq(aa[:-1], p[:-1])[0]
print(polycofs)
s = aa * polycofs
s = s.sum(axis=1)
s
plt.figure()
plt.plot(data_table['HOF'][:-1]-data_table['Strength'][1:], p, ls='', marker='o')
plt.xlabel('$nhof_{previous} - Strength$', fontsize='x-large')
plt.ylabel('$\Delta p \ (\%)$', fontsize='x-large')
plt.show()
from scipy import stats
print(stats.pearsonr(s, p))
Table((data_table['Year'][1:],data_table['HOF'][:-1]-data_table['Strength'][1:],p)).show_in_notebook()
coef = np.polyfit(s,p,1)
np.polyval(coef,0.08)
print(s[-1])
print(coef)
"""
Explanation: Correlations with Hall of Fame classes
At initial glance, there is not much pattern to the data, so pure randomness could be an explanation. However, we can define a few other metrics to look at the data, and they might give us a better idea of what is going on. The first is the number of Hall of Famers (hofs) elected in the previous class. The second is the strength of the class, defined as the number of first ballot hofs in that class (for the record, I consider Bonds and Clemens first ballot hall of famers, as they would have been if not for their Performance Enhancing Drug (PED) history). The third is the total number of hofs in a class, but that is uncertain for the most recent classes.
A very strong trend does appear between the average change in the percentage and the strength of an incoming class minus the number of hofs elected the year before. Unsurprisingly, when a strong class comes onto the ballot, they tend to take votes away from other players. Likewise, when a large number of players are elected, they free up votes for other players. A linear relationship of $$s = 0.0299\times nhof_{previous} - 0.0221\times Strength - 0.0034\times(Total-Strength) - 0.00299$$ gives a very good fit to $\Delta p$ and shows a strong linear correlation, indicated by a Pearson r statistic of 0.95.
End of explanation
"""
name_list = []
p_list = []
dp_list = []
pp_list = []
year1 = 2015
year2 = year1+1
expect_p = s[year2 - 1998]
print(year2, expect_p)
"""
Explanation: Change in Voting Habits
If we use this relationship, we can look at what the expected average change in the vote percentages was for 2016. The expected change based on the existing data (1 first ballot hof, 4 hofs the previous year, 1 total hof for the class of 2016) was an increase of +9.0%. The average increase for 2016? That was +8.2%. So, at least overall, the increase in percentages is exactly what was expected based on a moderate incoming class (if you also assume Trevor Hoffman will eventually be elected, the expected change for this year is then 8.7%) and four players entering the Hall the previous year. From this perspective, the voting purge made little difference in how the percentage of votes for a player changed.
End of explanation
"""
plt.figure()
name_list=[]
p_list=[]
pp_list=[]
dp_list=[]
war_list=[]
for year1 in range(1997,2015):
year2 = year1+1
expect_p = s[year2 - 1998]
for name in hof[year1]:
if name in hof[year2].keys():
name_list.append(name)
p_list.append(hof[year1][name]['p'])
dp_list.append(hof[year2][name]['p'] - hof[year1][name]['p'])
pp_list.append((hof[year2][name]['p'] - hof[year1][name]['p'])-expect_p)
war_list.append(hof[year2][name]['war'])
plt.plot(p_list, pp_list, 'bo')
name_list=[]
p_2016_list=[]
pp_2016_list=[]
dp_2016_list=[]
war_2016_list = []
year1=2015
year2 = year1+1
expect_p = s[year2 - 1998]
for name in hof[year1]:
if name in hof[year2].keys():
name_list.append(name)
p_2016_list.append(hof[year1][name]['p'])
dp_2016_list.append(hof[year2][name]['p'] - hof[year1][name]['p'])
pp_2016_list.append((hof[year2][name]['p'] - hof[year1][name]['p'])-expect_p)
war_2016_list.append(hof[year2][name]['war'])
plt.plot(p_2016_list, pp_2016_list, 'rs')
plt.xlabel('p (%)', fontsize='x-large')
plt.ylabel('$\Delta p - s $', fontsize='x-large')
plt.show()
"""
Explanation: Historically, players with higher vote percentage generally have seen their voting percentages increase. In the figure below, we look at the difference between the change in vote percentage for a given player, $\Delta p$, and the expected average change for all players that year as compared to the player's percentage, p, for the previous year. The 2016 year (red squares) does not appear significantly different than any other years (blue circles). It is just more common that players with low vote percentages tend to have their vote percentages suppressed than players with higher vote percentages. Nonetheless, there is large scatter in the distribution, which for any given player in any given year does not make it very predictive.
End of explanation
"""
plt.plot(war_list[-17:], pp_list[-17:], 'bo')
mask = np.zeros(len(war_2016_list), dtype=bool)
for i, name in enumerate(name_list):
if name in ['Sammy Sosa', 'Gary Sheffield', 'Mark McGwire', 'Barry Bonds', 'Roger Clemens']:
mask[i]=True
war = np.array(war_2016_list)
pp = np.array(pp_2016_list)
plt.plot(war, pp, 'rs')
plt.plot(war[mask], pp[mask], 'gs')
plt.xlabel('WAR', fontsize='x-large')
plt.ylabel('$\Delta p - s $', fontsize='x-large')
plt.show()
Table((name_list, p_2016_list, dp_2016_list, pp_2016_list, war_2016_list)).show_in_notebook()
"""
Explanation: Have voters changed in terms of WAR or PEDs?
If we look at the corrected change in voting percentage as a function of WAR, there does appear to be a stronger correlation between WAR and percentage change this year (red and green squares) than seen last year (blue circles), although some correlation does exist. The three points not falling near the correlation are Barry Bonds and Roger Clemens (PED history for otherwise certain hofs) and Lee Smith (reliever). Going back further years shows a large scatter in terms of WAR and corrected percentage change, and it would be interesting to see how this has changed over all the different years and whether the strength of this correlation has been increasing. Furthermore, it would be interesting to see how this relates to a player's other, more traditional metrics like home runs or wins.
The green squares are players that have been strongly associated with PEDs. Barry Bonds and Roger Clemens are exceptions, but the drop in the percentages for the other three players is in line with the drop for players with similar values of WAR. Along with the average change in voting seen for Bonds and Clemens, it does not look like the behavior for players associated with PEDs is very different than for other players.
End of explanation
"""
plt.figure()
for year in range(1996,2017):
for name in hof[year].keys():
if hof[year][name]['year']=='1st' :
w = hof[year][name]['war']
p = hof[year][name]['p']
plt.plot([w], [p], 'bo')
            if p > 0.75 and w > 75: print(name, w, p)
plt.show()
"""
Explanation: Conclusions and other thoughts
The overall average change in vote percentage was almost exactly what was predicted based on the strength of the incoming class and the large number of Hall of Famers elected the previous year. Along with the fact that percentages tend to increase relative to the average change for players with higher percentages, it does not look like there were any major changes to the voter patterns between this year and last year due to the purge of voters.
In terms of players that took PEDs, no major differences are detected in the voting patterns as compared to other players or the previous year.
In terms of WAR, the percentage change for a player does seem to correlate with WAR, and this correlation has possibly become stronger.
However, it should be noted that this is one year, a relatively small sample size, and that something very different could be occurring here.
Relievers are still an exceptional case, with Lee Smith having a very low WAR. His vote percentage did decrease relative to the overall class, and it will be interesting to see what happens to the three relievers (Trevor Hoffman and Billy Wagner along with Lee Smith) next year. If Lee Smith is an example of how the new group of voters views relievers, we would expect to see all of their percentages drop relative to the average change, but it will be interesting as Trevor Hoffman is already very close.
The player with the worst performance, though, was Nomar Garciaparra, with a drop in voting percentage of 12% as compared to the average. He was never associated with PEDs, and this was arguably expected due to being the lowest second-year positional player by WAR on the ballot. On the other hand, the player with the largest increase, Mike Mussina, has the largest WAR of any player outside of Bonds or Clemens.
As a final aside, Jeff Bagwell, Curt Schilling, and Mike Mussina are the only players in the last 20 years with no known association with PEDs and WAR > 75 to not be elected, so far, to the Hall of Fame. Along with Phil Niekro and Bert Blyleven (and excluding Roger Clemens and Barry Bonds), these five players are the only players with WAR > 75 not elected on their first ballot in the last twenty years, whereas 13 other players with WAR > 75 were elected on their first ballot.
End of explanation
"""
|
lsst-dm-tutorial/lsst2017 | tutorial.ipynb | gpl-3.0 | %%script bash
export DATA_DIR=$HOME/DATA
export CI_HSC_DIR=$DATA_DIR/ci_hsc_small
mkdir -p $DATA_DIR
cd $DATA_DIR
if ! [ -d $CI_HSC_DIR ]; then
curl -O http://lsst-web.ncsa.illinois.edu/~krughoff/data/small_demo.tar.gz
tar zxvf small_demo.tar.gz
fi
export WORK_DIR=$HOME/WORK
mkdir -p $WORK_DIR
if ! [ -f $WORK_DIR/_mapper ]; then
echo "lsst.obs.hsc.HscMapper" > $WORK_DIR/_mapper
ingestImages.py $WORK_DIR $CI_HSC_DIR/raw/*.fits --mode=link
cd $WORK_DIR
ln -s $CI_HSC_DIR/CALIB .
mkdir ref_cats
cd ref_cats
ln -s $CI_HSC_DIR/ps1_pv3_3pi_20170110 .
fi
import os
DATA_DIR = os.path.join(os.environ['HOME'], "DATA")
CI_HSC_DIR = os.path.join(DATA_DIR, "ci_hsc_small")
WORK_DIR = os.path.join(os.environ['HOME'], "WORK")
"""
Explanation: Using the LSST DM Stack in Python
This tutorial focuses on using the DM stack in Python. Some of the things we'll be doing are more commonly done on the command-line, via executable scripts the stack also provides. A complete tutorial for the command-line functionality can be found in DM Tech Note 23.
More notebook examples can be found here: https://github.com/RobertLuptonTheGood/notebooks/tree/master/Demos
Data Repository Setup
Instead of operating directly on files and directories, we interact with on-disk data products via an abstraction layer called the data butler. The butler operates on data repositories, and our first task is to set up a repository with some raw data, master calibration files, and an external reference catalog. All of these are from a self-contained test dataset we call ci_hsc. The full ci_hsc dataset includes just enough data to run the full (current) LSST pipeline, which extends through processing coadds from multiple bands together. In this tutorial we'll focus on processing an individual image, and that's all this particular subset will support. We also won't go into the details of how to build master calibration files or reference catalogs here.
These first few steps to set up a data repository are best performed on the command-line, but we use a Jupyter trick to do that within the notebook. You're also welcome to copy and paste these lines (minus the "%%script bash" line, of course) into a JupyterLab terminal window and run them individually instead if you want to pay close attention to what we're doing.
End of explanation
"""
%%script bash
processCcd.py $HOME/WORK --rerun isr --id visit=903334 ccd=16 --config isr.doWrite=True
"""
Explanation: Instrument Signature Removal and Command-Line Tasks
Before we can start doing interesting things, we need some minimally processed images (i.e. flat-fielded, bias-corrected, etc). Because the HSC team has spent a lot of time characterizing the instrument, we really want to run this step with the default configuration they've provided. That's also much actually easier to do from the command-line, and while we could do it from Python, that'd involve a lot of little irrelevant workarounds we'd rather not get bogged down in.
ISR is implemented as a subclass of lsst.pipe.base.Task. Nearly all of our high-level algorithms are implemented as Tasks, which are essentially just callable objects that can be composed (a high-level Task can hold one or more lower-level "subtasks", to which it can delegate work) and configured (every task takes an instance of a configuration class that controls what it does in detail). ISR is actually a CmdLineTask, a special kind of task that can be run from the command-line and use the data butler for all of its inputs and outputs (regular Tasks generally do not use the butler directly). Unlike virtually every other algorithm, there is a different ISR Task for each major camera (though there's also a simple default one), reflecting the specialized processing that's needed at this level.
But (for uninteresting, historical reasons), it's not currently possible to run IsrTask from the command-line. Instead, what we can do is run a parent CmdLineTask, lsst.pipe.tasks.ProcessCcdTask, which will run IsrTask as well as a few other steps. By default, it doesn't actually save the image directly after ISR is run - it performs a few more operations first, and then saves that image. But we can tell it to do so by modifying the task's configuration when we run it.
The full command-line for running ProcessCcdTask is below. Note that $HOME/WORK is just the WORK_DIR variable we've defined above, but we have to redefine it here because environment variables in one %%script environment don't propagate to the next. If you've been running these from a terminal tab instead, you can just use $WORK_DIR.
End of explanation
"""
from lsst.daf.persistence import Butler
butler = Butler(inputs=os.path.join(WORK_DIR, "rerun/isr"))
"""
Explanation: There are a few features of this command-line that bear explaining:
- We run processCcd.py, not ProcessCcdTask. There's a similar driver script for all CmdLineTasks, with the name formed by making the first letter lowercase and removing the Task suffix. These are added to your PATH when you set up the LSST package in which they're defined (which happens automatically in the JupyterLab environment).
- The first argument to any command-line task is the path to an input data repository.
- We've used the --rerun argument to set the location of the output repository, in this case $HOME/WORK/rerun/isr. You can also use --output to set the path more directly, but we recommend --rerun because it enforces a nice convention for where to put outputs that helps with discoverability.
- The --id argument sets the data ID(s) to be processed, in this case a single CCD from a single visit. All CmdLineTasks share a fairly sophisticated syntax for expressions that match multiple data IDs, which you can learn more about by running any CmdLineTask with --help.
- We've overridden a configuration value with the --config option, in this case to make sure the just-after-ISR image file is written. Running a CmdLineTask automatically also includes applying configuration overrides that customize the task for the kind of data you're processing (i.e. which camera it comes from), and that's how the task knows to run the custom ISR task for HSC, rather than the generic default. You can see all of the config options for a CmdLineTask by running with --show config, though the results can be a bit overwhelming.
The rest of this tutorial is focused on using LSST software as a Python library, so this will be the last thing we run from the command-line. Again, for more information about how to run LSST's existing processing scripts from the command-line, check out DM Tech Note 23.
Data Access with Butler
The outputs of CmdLineTasks, like their inputs, are organized into data repositories, which are managed by an object called Butler. To retrieve a dataset from the Butler, we start by constructing one pointing to the output repository from the processing run (which is now an input repository for this Butler, which won't have an output repository since we won't be writing any more files):
End of explanation
"""
exposure = butler.get("postISRCCD", visit=903334, ccd=16)
"""
Explanation: We can then call get with the name and data ID of the dataset. The name of the image that's saved directly after ISR is postISRCCD.
End of explanation
"""
from lsst.afw.geom import Box2D, Box2I, Point2I, Extent2I
from lsst.afw.image import Exposure
# Execute this cell (and the one below) to re-load the post-ISR Exposure from disk after
# modifying it.
exposure = butler.get("postISRCCD", visit=903334, ccd=16)
bbox = exposure.getBBox()
bbox.grow(-bbox.getDimensions()//3) # box containing the central third (in each dimension)
bbox.grow(-Extent2I(0, 400)) # make it a bit smaller in x
# exposure[bbox] would also work here because exposure.getXY0() == (0, 0),
# but it's dangerous in general because it ignores that origin.
sub = Exposure(exposure, bbox=bbox, dtype=exposure.dtype, deep=False)
import numpy as np
import matplotlib
%matplotlib inline
matplotlib.rcParams["figure.figsize"] = (8, 6)
matplotlib.rcParams["font.size"] = 12
def display(image, mask=None, colors=None, alpha=0.40, **kwds):
box = Box2D(image.getBBox())
extent = (box.getMinX(), box.getMaxX(), box.getMinY(), box.getMaxY())
kwds.setdefault("extent", extent)
kwds.setdefault("origin", "lower")
kwds.setdefault("interpolation", "nearest")
matplotlib.pyplot.imshow(image.array, **kwds)
kwds.pop("vmin", None)
kwds.pop("vmax", None)
kwds.pop("norm", None)
kwds.pop("cmap", None)
if mask is not None:
for plane, color in colors.items():
array = np.zeros(mask.array.shape + (4,), dtype=float)
rgba = np.array(matplotlib.colors.hex2color(matplotlib.colors.cnames[color]) + (alpha, ),
dtype=float)
np.multiply.outer((mask.array & mask.getPlaneBitMask(plane)).astype(bool), rgba, out=array)
matplotlib.pyplot.imshow(array, **kwds)
"""
Explanation: Image, Boxes, and (Crude) Image Display
A full 2k x 4k HSC CCD is a pretty big image to display when you don't have specialized display code. The DM stack does have specialized display code, but it either requires DS9 (which requires some ssh tunnels to use with data living on a server) or a Firefly server installation. For this tutorial, we'll just throw together a naive matplotlib display function, and create a view to a subimage that we'll display instead of the full image.
This section features a few of our most important class objects:
lsst.afw.image.Exposure is an image object that actually holds three image planes: the science image (Exposure.image), an image of variance in every pixel (Exposure.variance), an integer bit mask (Exposure.mask). It also holds a lot of more complex objects that characterize the image, such as a point-spread function (lsst.afw.detection.Psf) and world-coordinate system (lsst.afw.image.Wcs). Most of these objects aren't filled in yet, because all we've run so far is ISR. It doesn't generally make sense to perform mathematical operations (i.e. addition) on Exposures, because those operations aren't always well-defined on the more complex objects. You can get a MaskedImage object with the same image, mask, and variance planes that does support mathematical operations but doesn't contain Psfs and Wcss (etc) with Exposure.maskedImage.
The Exposure.image and Exposure.variance properties return lsst.afw.image.Image objects. These have a .array property that returns a numpy.ndarray view to the Image's pixels. Conceptually, you should think of an Image as just a numpy.ndarray with a possibly nonzero origin.
The Exposure.mask property returns a lsst.afw.image.Mask object, which behaves like an Image with a dictionary-like object that relates string labels to bit numbers.
All of these image-like objects have a getBBox() method, which returns a lsst.afw.geom.Box2I. The minimum and maximum points of a Box2I are specified in integers that correspond to the centers of the lower-left and upper-right pixels in the box, but the box conceptually contains the entirety of those pixels. To get a box with a floating-point representation of the same boundary for the extent argument to imshow below, we construct a Box2D from the Box2I.
Point2I and Extent2I are used to represent absolute positions and offsets between positions as integers (respectively). These have floating-point counterparts Point2D and Extent2D.
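The pixel-center convention is easy to sketch in plain Python (int_box_to_float_bounds is a made-up helper, not part of lsst.afw.geom):

```python
def int_box_to_float_bounds(xmin, ymin, xmax, ymax):
    # Integer bounds name pixel *centers*; each pixel extends a further
    # half pixel on every side, so the floating-point boundary is 0.5 wider.
    return (xmin - 0.5, ymin - 0.5, xmax + 0.5, ymax + 0.5)

# A box whose corner pixels are (2, 3) and (5, 7) covers 4 x 5 whole pixels:
print(int_box_to_float_bounds(2, 3, 5, 7))  # (1.5, 2.5, 5.5, 7.5)
```

So an integer box from (2, 3) to (5, 7) is 4 pixels wide and 5 pixels tall, and its Box2D-style boundary sits half a pixel outside the corner pixel centers.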
End of explanation
"""
display(sub.image, vmin=175, vmax=300, cmap=matplotlib.cm.gray)
"""
Explanation: And now here's a cutout of the detrended image. I've cheated in setting the scale by looking at the background level in advance.
End of explanation
"""
from lsst.meas.algorithms import SubtractBackgroundTask
bkgConfig = SubtractBackgroundTask.ConfigClass()
# Execute this cell to get fun & terrible results!
bkgConfig.useApprox = False
bkgConfig.binSize = 20
"""
Explanation: Background Subtraction and Task Configuration
The next step we usually take is to estimate and subtract the background, using lsst.meas.algorithms.SubtractBackgroundTask. This is a regular Task, not a CmdLineTask, and hence we'll just pass it our Exposure object (it operates in-place) instead of a Butler.
End of explanation
"""
help(bkgConfig)
SubtractBackgroundTask.ConfigClass.algorithm?
bkgTask = SubtractBackgroundTask(config=bkgConfig)
bkgResult = bkgTask.run(exposure)
display(sub.image, vmin=-0.5, vmax=100, cmap=matplotlib.cm.gray)
"""
Explanation: The pattern for configuration here is the same as it was for SubaruIsrTask, but here we're setting values directly instead of loading a configuration file from the obs_subaru camera-specialization package. The config object is an instance of a class that inherits from lsst.pex.config.Config and contains a set of lsst.pex.config.Field objects defining the options that can be modified. Each Field behaves more or less like a Python property, and you can get information on all of the fields in a config object by using help (or IPython's ? syntax, as in the cell above):
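The "Field behaves like a property" idea can be sketched with a plain-Python descriptor (a toy analog of lsst.pex.config with made-up field names, not the real machinery):

```python
class Field:
    """Toy config field: a default value, a doc string, per-instance overrides."""
    def __init__(self, doc, default):
        self.doc, self.default = doc, default
    def __set_name__(self, owner, name):
        self.name = name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self           # class-level access returns the Field itself
        return obj.__dict__.get(self.name, self.default)
    def __set__(self, obj, value):
        obj.__dict__[self.name] = value

class ToyBackgroundConfig:
    binSize = Field("size of background bins (pixels)", 128)
    useApprox = Field("use an approximating spline?", True)

cfg = ToyBackgroundConfig()
print(cfg.binSize)                      # 128, the default
cfg.binSize = 20                        # override, like bkgConfig.binSize above
print(cfg.binSize)                      # 20
print(ToyBackgroundConfig.binSize.doc)  # the help text the field carries
```

This is roughly why help() on a config object can show per-field documentation: each field knows its own doc string and default.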
End of explanation
"""
from lsst.meas.algorithms import SingleGaussianPsf
FWHM_TO_SIGMA = 1.0/(2*np.sqrt(2*np.log(2)))
PIXEL_SCALE = 0.168 # arcsec/pixel
SEEING = 0.7 # FWHM in arcsec
sigma = FWHM_TO_SIGMA*SEEING/PIXEL_SCALE
width = int(sigma*3)*2 + 1
psf = SingleGaussianPsf(width, width, sigma=sigma)
exposure.setPsf(psf)
"""
Explanation: If you've run through all of these steps after executing the cell that warns about terrible results, you should notice that the galaxy in the upper right has been oversubtracted.
Exercise: Before continuing on, re-load the exposure from disk, reset the configuration and Task instances, and re-run without executing the cell that applies bad values to the config, all by just re-executing the right cells above. You should end up an image in which the upper-right galaxy looks essentially the same as it does in the image before we subtracted the background.
Installing an Initial-Guess PSF
Most later processing steps require a PSF model, which is represented by a Psf object that's attached to the Exposure. For now, we'll just make a Gaussian PSF with some guess at the seeing.
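The FWHM-to-sigma conversion used above follows from the Gaussian profile dropping to half its peak value at x = FWHM/2, which is easy to verify numerically:

```python
import numpy as np

FWHM_TO_SIGMA = 1.0 / (2 * np.sqrt(2 * np.log(2)))   # ~0.4247

fwhm = 0.7                     # seeing FWHM (arbitrary units)
sigma = FWHM_TO_SIGMA * fwhm

def gauss(x):
    # unnormalized Gaussian profile with unit peak
    return np.exp(-0.5 * (x / sigma) ** 2)

# At half the FWHM from the center the profile is at half maximum:
print(gauss(fwhm / 2))  # 0.5, up to rounding
```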
End of explanation
"""
from lsst.afw.geom import Point2D
display(psf.computeKernelImage(Point2D(60.5, 7.2)))
"""
Explanation: A Psf object can basically just do one thing: it can return an image of itself at a point. SingleGaussianPsf represents a constant PSF, so it always returns the same image, regardless of the point you give it.
But there are two ways to evaluate a Psf at a point. If you want an image centered on the middle pixel, and that middle pixel to be the origin - what you'd usually want if you're going to convolve the PSF with another model - use computeKernelImage(x, y):
End of explanation
"""
display(psf.computeImage(Point2D(60.5, 7.2)))
"""
Explanation: If you want to compare the PSF to a star at the exact same position, use computeImage(x, y). That will shift the image returned by computeKernelImage(x, y) by the right sub-pixel offset, and update the origin of the image to take care of the rest, so you end up with a postage stamp in the same coordinate system as the original image, centered where the star is.
End of explanation
"""
from lsst.pipe.tasks.repair import RepairTask
repairTask = RepairTask()
repairTask.run(exposure)
display(sub.image, mask=sub.mask, colors={"CR": "red"},
vmin=-0.5, vmax=100, alpha=0.8, cmap=matplotlib.cm.gray)
"""
Explanation: Removing Cosmic Rays
Cosmic rays are detected and interpolated by RepairTask, which also sets mask planes to indicate where the cosmic rays were ("CR") and which pixels were interpolated ("INTERP"; this may happen due to saturation or bad pixels as well). Because we're just using the default configuration, we can skip creating a config object and just construct the Task with no arguments.
End of explanation
"""
from lsst.meas.algorithms import SourceDetectionTask
from lsst.afw.table import SourceTable, SourceCatalog
schema = SourceTable.makeMinimalSchema()
detectTask = SourceDetectionTask(schema=schema)
# A SourceTable is really just a factory object for records; don't confuse it with SourceCatalog, which is
# usually what you want. But a SourceTable *is* what SourceDetectionTask wants here.
table = SourceTable.make(schema)
detectResult = detectTask.run(table, exposure)
display(sub.image, mask=sub.mask, colors={"DETECTED": "blue"}, vmin=-0.5, vmax=100, cmap=matplotlib.cm.gray)
"""
Explanation: Detecting Sources
Unlike the other Tasks we've dealt with so far, SourceDetectionTask creates a SourceCatalog in addition to updating the image (all it does to the image is add a "DETECTED" mask plane). All Tasks that work with catalogs need to be initialized with a lsst.afw.table.Schema object, to which the Task will add the fields necessary to store its outputs. A SourceCatalog's Schema cannot be modified after the SourceCatalog has been constructed, which means it's necessary to construct all Schema-using Tasks before actually running any of them.
Each record in the catalog returned by SourceDetectionTask has a Footprint object attached to it. A Footprint represents the approximate region covered by a source in a run-length encoding data structure. It also contains a list of peaks found within that region. The "DETECTED" mask plane is set to exactly the pixels covered by any Footprint in the returned catalog.
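The run-length idea can be sketched in plain numpy (a toy encoder, not the actual Footprint implementation): each row of a boolean mask becomes a list of (row, start, length) spans.

```python
import numpy as np

def runs_from_mask(mask):
    """Encode a 2-D boolean mask as a list of (row, start_col, length) spans."""
    spans = []
    for row, line in enumerate(mask):
        # pad with False so every run has a rising and a falling edge
        padded = np.concatenate(([False], line, [False]))
        edges = np.flatnonzero(padded[1:] != padded[:-1])
        for start, stop in zip(edges[::2], edges[1::2]):
            spans.append((row, int(start), int(stop - start)))
    return spans

mask = np.array([[0, 1, 1, 0, 1],
                 [0, 0, 0, 0, 0],
                 [1, 1, 1, 1, 1]], dtype=bool)
print(runs_from_mask(mask))  # [(0, 1, 2), (0, 4, 1), (2, 0, 5)]
```

This storage is compact for blobby regions: only the spans are kept, not every pixel.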
End of explanation
"""
from lsst.meas.deblender import SourceDeblendTask
deblendTask = SourceDeblendTask(schema=schema)
catalog = detectResult.sources
deblendTask.run(exposure, catalog)
"""
Explanation: Deblending
Deblending attempts to separate detections with multiple peaks into separate objects. We keep all of the original sources in the SourceCatalog (called parents) when we deblend, but for each parent source that contains more than one peak, we create a new record (called a child) for each of those peaks. The Footprints attached to the child objects are instances of a subclass called HeavyFootprint, which include new deblended pixel values as well as the region description. These can be used by calling insert to replace an Image's pixels with the HeavyFootprint's pixels.
EXERCISE: This section will not run if the cells are executed naively in order. At some point you'll have to go re-execute one or more cells in the previous section to get the right behavior. Which one(s)? Why? Copy those cells here (in the right places) when you figure it out.
End of explanation
"""
# Find some blended sources inside the subimage:
blendParents = []
for record in catalog:
if record.get("deblend_nChild") > 0 and bbox.contains(record.getFootprint().getBBox()):
blendParents.append(record)
# Sort by peak brightness so we can look at something with decent S/N
blendParents.sort(key=lambda r: -r.getFootprint().getPeaks()[0].getPeakValue())
from lsst.afw.image import Image
"""
Explanation: To inspect some deblender outputs, we'll start by finding some parent objects that were deblended into multiple children, by looking at the deblend_nChild field (which was added to the Schema when we constructed the SourceDeblendTask, and populated when we called run).
End of explanation
"""
blendParentImage = Image(exposure.image, bbox=blendParents[0].getFootprint().getBBox(),
deep=True, dtype=np.float32)
"""
Explanation: The image of the parent object is just the original image, but we'll cut out just the region inside its Footprint:
End of explanation
"""
blendChildImages = []
for blendChild in catalog.getChildren(blendParents[0].getId()):
image = Image(blendParentImage.getBBox(), dtype=np.float32)
blendChild.getFootprint().insert(image)
blendChildImages.append(image)
nSubPlots = len(blendChildImages) + 1
nCols = 3
nRows = nSubPlots//nCols + 1
matplotlib.pyplot.subplot(nRows, nCols, 1)
display(blendParentImage, vmin=-0.5, vmax=100, cmap=matplotlib.cm.gray)
for n, image in enumerate(blendChildImages):
matplotlib.pyplot.subplot(nRows, nCols, n + 2)
display(image, vmin=-0.5, vmax=100, cmap=matplotlib.cm.gray)
"""
Explanation: Now we'll insert the deblended child pixels into blank images of the same size:
End of explanation
"""
from lsst.meas.base import SingleFrameMeasurementTask
measureConfig = SingleFrameMeasurementTask.ConfigClass()
# What measurements are configured to run
print(measureConfig.plugins.names)
# Import an extension module that adds a new measurement
import lsst.meas.extensions.photometryKron
# What measurements *could* be configured to run
print(list(measureConfig.plugins.keys()))
# Configure the new measurement to run
measureConfig.plugins.names.add("ext_photometryKron_KronFlux")
measureTask = SingleFrameMeasurementTask(schema=schema, config=measureConfig)
measureTask.run(catalog, exposure)
"""
Explanation: Measurement
SingleFrameMeasurementTask is typically responsible for adding most fields to a SourceCatalog. It runs a series of plugins that make different measurements (you can configure them with the .plugins dictionary-like field on its config object, and control which are run with .names). If the deblender has been run first, it will measure child objects using their deblended pixels.
EXERCISE: Like the Deblending section, you'll have to re-execute some previous cells somewhere in this section to get the right behavior. Copy those cells into the right places here once you've gotten it working.
End of explanation
"""
from lsst.afw.geom.ellipses import Axes
display(sub.image, mask=sub.mask, colors={"DETECTED": "blue"}, vmin=-0.5, vmax=100, cmap=matplotlib.cm.gray)
for record in catalog:
if record.get("deblend_nChild") != 0:
continue
axes = Axes(record.getShape()) # convert to A, B, THETA parameterization
axes.scale(2.0) # matplotlib uses diameters, not radii
patch = matplotlib.patches.Ellipse((record.getX(), record.getY()),
axes.getA(), axes.getB(), axes.getTheta() * 180.0 / np.pi,
fill=False, edgecolor="green")
matplotlib.pyplot.gca().add_patch(patch)
matplotlib.pyplot.show()
"""
Explanation: We'll show some of the results of measurement by overlaying the measured ellipses on the image.
The shapes and centroids we use here (by calling record.getX(), record.getY(), record.getShape()) are aliases (called "slots") to fields with longer names that are our recommended measurements for these quantities. You can see the set of aliases by printing the schema (see next section).
End of explanation
"""
print(catalog.getSchema())
"""
Explanation: Working With Catalogs
Print the schema:
End of explanation
"""
catalog = catalog.copy(deep=True)
psfFlux = catalog["base_PsfFlux_flux"]
"""
Explanation: Get arrays of columns (requires the catalog to be contiguous in memory, which we can guarantee with a deep copy):
End of explanation
"""
key = catalog.getSchema().find("deblend_nChild").key
deblended = [record for record in catalog if record.get(key) == 0]
"""
Explanation: Note that boolean values are stored in Flag columns, which are packed into bits. Unlike other column types, when you get an array of a Flag column, you get a copy, not a view.
Use Key objects instead of strings to do fast repeated access to fields when iterating over records:
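The copy-vs-view distinction mirrors numpy's own indexing rules, which make a handy mental model (a plain-numpy sketch):

```python
import numpy as np

data = np.arange(10, dtype=float)

view = data[2:5]           # basic slice: shares memory with data
copy = data[data > 4.0]    # boolean indexing: independent copy

print(np.shares_memory(data, view))  # True
print(np.shares_memory(data, copy))  # False

view[0] = -1.0
print(data[2])             # -1.0: writing through the view changed data
```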
End of explanation
"""
catalog[0].extract("base_PsfFlux_*") # or regex='...'
"""
Explanation: You can also get dict version of a subset of a Schema, a Catalog, or a Record by calling either extract methods with a glob:
End of explanation
"""
table = catalog.asAstropy()
"""
Explanation: For Records, the dict values are just the values of the fields, and for Catalogs, they're numpy.ndarray columns. For Schemas they're SchemaItems, which behave like a named tuple containing a Key and a Field, which contains more descriptive information.
Get an Astropy view of the catalog (from which you can make a Pandas view):
End of explanation
"""
|
Gezort/YSDA_deeplearning17 | Seminar4/bonus/Bonus-advanced-theano.ipynb | mit | import numpy as np
def sum_squares(N):
return <student.Implement_me()>
%%time
sum_squares(10**8)
"""
Explanation: Theano, Lasagne
and why they matter
got no lasagne?
Install the bleeding edge version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html
Warming up
Implement a function that computes the sum of squares of numbers from 0 to N
Use numpy or python
An array of numbers 0 to N - numpy.arange(N)
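If you want to check your implementation, one plain-numpy version looks like this (a sketch; int64 accumulation avoids overflow for large N, and the closed-form formula serves as a cross-check):

```python
import numpy as np

def sum_squares(N):
    # squares of 0 .. N-1, accumulated in 64-bit to avoid overflow
    return int((np.arange(N, dtype=np.int64) ** 2).sum())

def sum_squares_closed(N):
    # closed form for 0^2 + 1^2 + ... + (N-1)^2
    return (N - 1) * N * (2 * N - 1) // 6

print(sum_squares(10))         # 285
print(sum_squares_closed(10))  # 285
```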
End of explanation
"""
import theano
import theano.tensor as T
#I am going to be a function parameter
N = T.scalar("a dimension",dtype='int32')
#I am a recipe for computing the sum of squares of arange(N), given N
result = (T.arange(N)**2).sum()
#Compiling the recipe of computing "result" given N
sum_function = theano.function(inputs = [N],outputs=result)
%%time
sum_function(10**8)
"""
Explanation: theano teaser
Doing the very same thing
End of explanation
"""
#Inputs
example_input_integer = T.scalar("scalar input",dtype='float32')
example_input_tensor = T.tensor4("four dimensional tensor input") #dtype = theano.config.floatX by default
# don't worry, we won't need the tensor
input_vector = T.vector("", dtype='int32') # vector of integers
#Transformations
#transformation: elementwise multiplication
double_the_vector = input_vector*2
#elementwise cosine
elementwise_cosine = T.cos(input_vector)
#difference between squared vector and vector itself
vector_squares = input_vector**2 - input_vector
#Practice time:
#create two vectors of size float32
my_vector = student.init_float32_vector()
my_vector2 = student.init_one_more_such_vector()
#Write a transformation(recipe):
#(vec1)*(vec2) / (sin(vec1) +1)
my_transformation = student.implementwhatwaswrittenabove()
print my_transformation
#it's okay it aint a number
"""
Explanation: How does it work?
if you're currently in the classroom, chances are I am explaining this text wall right now
* 1 You define the inputs of your future function;
* 2 You write a recipe for some transformation of inputs;
* 3 You compile it;
* You have just got a function!
* The gobbledegooky version: you define a function as a symbolic computation graph.
There are two main kinds of entities: "Inputs" and "Transformations"
Both can be numbers, vectors, matrices, tensors, etc.
Both can be integers, floats, or booleans (uint8) of various sizes.
An input is a placeholder for function parameters.
N from example above
Transformations are the recipes for computing something given inputs and other transformations
(T.arange(N)**2).sum() is 3 sequential transformations of N
Theano duplicates most of numpy's vector syntax
You can almost always go with replacing "np.function" with "T.function" aka "theano.tensor.function"
np.mean -> T.mean
np.arange -> T.arange
np.cumsum -> T.cumsum
and so on.
builtin operations also work that way
np.arange(10).mean() -> T.arange(10).mean()
Once upon a blue moon the functions have different names or locations (e.g. T.extra_ops)
Ask us or google it
Still confused? We gonna fix that.
End of explanation
"""
inputs = [<two vectors that my_transformation depends on>]
outputs = [<What do we compute (can be a list of several transformation)>]
# The next lines compile a function that takes two vectors and computes your transformation
my_function = theano.function(
inputs,outputs,
allow_input_downcast=True #automatic type casting for input parameters (e.g. float64 -> float32)
)
#using function with, lists:
print "using python lists:"
print my_function([1,2,3],[4,5,6])
print
#Or using numpy arrays:
#btw, that 'float' dtype is cast to the second parameter's dtype, which is float32
print "using numpy arrays:"
print my_function(np.arange(10),
np.linspace(5,6,10,dtype='float'))
"""
Explanation: Compiling
So far we were using "symbolic" variables and transformations
Defining the recipe for computation, but not computing anything
To use the recipe, one should compile it
End of explanation
"""
#a dictionary of inputs
my_function_inputs = {
my_vector:[1,2,3],
my_vector2:[4,5,6]
}
# evaluate my_transformation
# has to match with compiled function output
print my_transformation.eval(my_function_inputs)
# can compute transformations on the fly
print "add 2 vectors", (my_vector + my_vector2).eval(my_function_inputs)
#!WARNING! if your transformation only depends on some inputs,
#do not provide the rest of them
print "vector's shape:", my_vector.shape.eval({
my_vector:[1,2,3]
})
"""
Explanation: Debugging
Compilation can take a while for big functions
To avoid waiting, one can evaluate transformations without compiling
Without compilation, the code runs slower, so consider reducing input size
End of explanation
"""
# Quest #1 - implement a function that computes a mean squared error of two input vectors
# Your function has to take 2 vectors and return a single number
<student.define_inputs_and_transformations()>
compute_mse =<student.compile_function()>
# Tests
from sklearn.metrics import mean_squared_error
for n in [1,5,10,10**3]:
elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),
np.ones(n),np.random.random(n),np.random.randint(100,size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el,el_2))
my_mse = compute_mse(el,el_2)
if not np.allclose(true_mse,my_mse):
print 'Wrong result:'
print 'mse(%s,%s)'%(el,el_2)
print "should be: %f, but your function returned %f"%(true_mse,my_mse)
                raise ValueError, "Something is wrong"
print "All tests passed"
"""
Explanation: When debugging, one would generally want to reduce the computation complexity. For example, if you are about to feed a neural network with a 1000-sample batch, consider taking just the first 2 samples.
If you really want to debug a graph of high computational complexity, you could just as well compile it (e.g. with optimizer='fast_compile')
Do It Yourself
[2 points max]
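As a sanity check for the function you compile, the target quantity in plain numpy is just (a reference sketch, not the theano graph itself):

```python
import numpy as np

def mse_reference(a, b):
    # mean squared error between two equal-length vectors
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(((a - b) ** 2).mean())

print(mse_reference([1, 2, 3], [1, 2, 5]))  # 4/3 ~ 1.3333
```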
End of explanation
"""
#creating shared variable
shared_vector_1 = theano.shared(np.ones(10,dtype='float64'))
# evaluating a shared variable (outside the symbolic graph)
print "initial value",shared_vector_1.get_value()
# within a symbolic graph you use them just like any other input or transformation; no get_value needed
#setting new value
shared_vector_1.set_value( np.arange(5) )
#getting that new value
print "new value", shared_vector_1.get_value()
#Note that the vector changed shape
#This is entirely allowed... unless your graph is hard-wired to work with some fixed shape
"""
Explanation: Shared variables
The inputs and transformations only exist when function is called
Shared variables always stay in memory like global variables
Shared variables can be included into a symbolic graph
They can be set and evaluated using special methods
but they can't change value arbitrarily during symbolic graph computation
we'll cover that later;
Hint: such variables are a perfect place to store network parameters
e.g. weights or some metadata
End of explanation
"""
# Write a recipe (transformation) that computes an elementwise transformation of shared_vector and input_scalar
#Compile as a function of input_scalar
input_scalar = T.scalar('coefficient',dtype='float32')
scalar_times_shared = <student.write_recipe()>
shared_times_n = <student.compile_function()>
print "shared:", shared_vector_1.get_value()
print "shared_times_n(5)",shared_times_n(5)
print "shared_times_n(-0.5)",shared_times_n(-0.5)
#Changing value of vector 1 (output should change)
shared_vector_1.set_value([-1,0,1])
print "shared:", shared_vector_1.get_value()
print "shared_times_n(5)",shared_times_n(5)
print "shared_times_n(-0.5)",shared_times_n(-0.5)
"""
Explanation: Your turn
End of explanation
"""
my_scalar = T.scalar(name='input',dtype='float64')
scalar_squared = T.sum(my_scalar**2)
#a derivative of v_squared by my_vector
derivative = T.grad(scalar_squared,my_scalar)
fun = theano.function([my_scalar],scalar_squared)
grad = theano.function([my_scalar],derivative)
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3,3)
x_squared = map(fun,x)
x_squared_der = map(grad,x)
plt.plot(x, x_squared,label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend()
"""
Explanation: T.grad - why theano matters
Theano can compute derivatives and gradients automatically
Derivatives are computed symbolically, not numerically
Limitations:
* You can only compute a gradient of a scalar transformation over one or several scalar or vector (or tensor) transformations or inputs.
* A transformation has to have float32 or float64 dtype throughout the whole computation graph
* derivative over an integer has no mathematical sense
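To see what "symbolically, not numerically" buys you, compare the exact derivative of x**3 with a central-difference approximation in plain numpy:

```python
import numpy as np

f = lambda x: x ** 3
grad_exact = lambda x: 3 * x ** 2    # what symbolic differentiation returns

def grad_numeric(x, h=1e-5):
    # central-difference approximation: truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

print(grad_exact(2.0))               # 12.0, exact
print(abs(grad_numeric(2.0) - 12.0)) # tiny, but only an approximation
```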
End of explanation
"""
my_vector = T.vector("my_vector", dtype='float64')
#Compute the gradient of the next weird function over my_scalar and my_vector
#warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) +1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 +1) + 0.01*T.sin(2*my_scalar**1.5)*(T.sum(my_vector)* my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2
der_by_scalar,der_by_vector = <student.compute_grad_over_scalar_and_vector()>
compute_weird_function = theano.function([my_scalar,my_vector],weird_psychotic_function)
compute_der_by_scalar = theano.function([my_scalar,my_vector],der_by_scalar)
#Plotting your derivative
vector_0 = [1,2,3]
scalar_space = np.linspace(0,7)
y = [compute_weird_function(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y,label='function')
y_der_by_scalar = [compute_der_by_scalar(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y_der_by_scalar,label='derivative')
plt.grid();plt.legend()
"""
Explanation: Why that rocks
End of explanation
"""
# Multiply shared vector by a number and save the product back into shared vector
inputs = [input_scalar]
outputs = [scalar_times_shared] #return vector times scalar
my_updates = {
    shared_vector_1: scalar_times_shared #and write this same result back into shared_vector_1
}
compute_and_save = theano.function(inputs, outputs, updates=my_updates)
shared_vector_1.set_value(np.arange(5))
#initial shared_vector_1
print "initial shared value:" ,shared_vector_1.get_value()
# evaluating the function (shared_vector_1 will be changed)
print "compute_and_save(2) returns",compute_and_save(2)
#evaluate new shared_vector_1
print "new shared value:" ,shared_vector_1.get_value()
"""
Explanation: Almost done - Updates
updates are a way of changing shared variables after a function call.
technically it's a dictionary {shared_variable : a recipe for the new value} which has to be provided when the function is compiled
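The call semantics can be mimicked in plain Python (a toy model, not theano internals): outputs are computed from the old shared values first, and only then are all the updates applied.

```python
class Shared:
    """Toy stand-in for theano.shared: a value with get_value/set_value."""
    def __init__(self, value):
        self._value = value
    def get_value(self):
        return self._value
    def set_value(self, value):
        self._value = value

def make_function(compute_output, updates):
    """updates maps Shared -> recipe(x) computing its new value."""
    def call(x):
        out = compute_output(x)                       # uses the OLD shared values
        new = {s: recipe(x) for s, recipe in updates.items()}
        for s, v in new.items():                      # then every update is applied
            s.set_value(v)
        return out
    return call

state = Shared(3)
f = make_function(lambda x: state.get_value() * x,
                  {state: lambda x: state.get_value() * x})

print(f(2))               # 6: computed with the old value 3
print(state.get_value())  # 6: the shared variable was updated afterwards
```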
That's how it works:
End of explanation
"""
from sklearn.datasets import load_digits
mnist = load_digits(2)
X,y = mnist.data, mnist.target
print "y [shape - %s]:"%(str(y.shape)),y[:10]
print "X [shape - %s]:"%(str(X.shape))
print X[:3]
print y[:10]
# inputs and shareds
shared_weights = <student.code_me()>
input_X = <student.code_me()>
input_y = <student.code_me()>
predicted_y = <predicted probabilities for input_X>
loss = <logistic loss (scalar, mean over sample)>
grad = <gradient of loss over model weights>
updates = {
shared_weights: <new weights after gradient step>
}
train_function = <compile function that takes X and y, returns log loss and updates weights>
predict_function = <compile function that takes X and computes probabilities of y>
from sklearn.cross_validation import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y)
from sklearn.metrics import roc_auc_score
for i in range(5):
loss_i = train_function(X_train,y_train)
print "loss at iter %i:%.4f"%(i,loss_i)
print "train auc:",roc_auc_score(y_train,predict_function(X_train))
print "test auc:",roc_auc_score(y_test,predict_function(X_test))
print "resulting weights:"
plt.imshow(shared_weights.get_value().reshape(8,-1))
plt.colorbar()
"""
Explanation: Logistic regression example
[ 4 points max]
Implement the regular logistic regression training algorithm
Tips:
* Weights fit in as a shared variable
* X and y are potential inputs
* Compile 2 functions:
* train_function(X,y) - returns error and computes weights' new values (through updates)
* predict_fun(X) - just computes probabilities ("y") given data
We shall train on a two-class MNIST dataset
* please note that target y are {0,1} and not {-1,1} as in some formulae
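For reference, the update rule your train_function has to implement can be written in plain numpy (a sketch under the same {0,1} target convention; all names and the toy data are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(X, y, w, lr=0.1):
    """One gradient step on the mean log loss; returns (loss, new_weights)."""
    p = sigmoid(X @ w)                           # predicted P(y=1)
    eps = 1e-12                                  # guards log(0)
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)                # d(loss)/dw
    return loss, w - lr * grad

# Toy, linearly separable data: bias column + one feature
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = np.zeros(2)
for _ in range(200):
    loss, w = train_step(X, y, w)
print(loss)  # decreases toward 0 as the classes are separated
```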
End of explanation
"""
from mnist import load_dataset
#[down]loading the original MNIST dataset.
#Please note that you should only train your NN on _train sample,
# _val can be used to evaluate out-of-sample error, compare models or perform early-stopping
# _test should be hidden under a rock until final evaluation... But we both know it is near impossible to catch you evaluating on it.
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print X_train.shape,y_train.shape
plt.imshow(X_train[0,0])
<here you could just as well create computation graph>
<this may or may not be a good place to evaluating loss and updates>
<here one could compile all the required functions>
<this may be a perfect cell to write a training&evaluation loop in>
<predict & evaluate on test here, right? No cheating pls.>
"""
Explanation: my1stNN
[basic part 4 points max]
Your ultimate task for this week is to build your first neural network [almost] from scratch, in pure theano.
This time you will solve the same digit recognition problem, but at a larger scale
* images are now 28x28
* 10 different digits
* 50k samples
Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.
[bonus score]
If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on the test set.
SPOILER!
At the end of the notebook you will find a few tips and frequently made mistakes. If you feel enough might to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us.
End of explanation
"""
|
SheffieldML/GPyOpt | manual/GPyOpt_scikitlearn.ipynb | bsd-3-clause | %pylab inline
import GPy
import GPyOpt
import numpy as np
from sklearn import svm
from numpy.random import seed
seed(12345)
"""
Explanation: GPyOpt: configuring Scikit-learn methods
Written by Javier Gonzalez and Zhenwen Dai, University of Sheffield.
Modified by Federico Tomasi, University of Genoa.
Last updated Thursday, 28 September 2017.
Part I: Regression
The goal of this notebook is to use GPyOpt to tune the parameters of Machine Learning algorithms. In particular, in this section we will show how to tune the hyper-parameters for the Support Vector Regression (SVR) implemented in Scikit-learn. Given the standard interface of Scikit-learn, other models can be tuned in a similar fashion.
We start by loading the required modules.
End of explanation
"""
# Let's load the dataset
GPy.util.datasets.authorize_download = lambda x: True # prevents requesting authorization for download.
data = GPy.util.datasets.olympic_marathon_men()
X = data['X']
Y = data['Y']
X_train = X[:20]
Y_train = Y[:20,0]
X_test = X[20:]
Y_test = Y[20:,0]
"""
Explanation: For this example we will use the Olympic marathon dataset available in GPy.
We split the original dataset into the training data (first 20 data points) and testing data (last 7 data points). The performance of SVR is evaluated in terms of Rooted Mean Squared Error (RMSE) on the testing data.
End of explanation
"""
from sklearn import svm
svr = svm.SVR()
svr.fit(X_train,Y_train)
Y_train_pred = svr.predict(X_train)
Y_test_pred = svr.predict(X_test)
print("The default parameters obtained: C="+str(svr.C)+", epsilon="+str(svr.epsilon)+", gamma="+str(svr.gamma))
"""
Explanation: Let's first see the results with the default kernel parameters.
End of explanation
"""
plot(X_train,Y_train_pred,'b',label='pred-train')
plot(X_test,Y_test_pred,'g',label='pred-test')
plot(X_train,Y_train,'rx',label='ground truth')
plot(X_test,Y_test,'rx')
legend(loc='best')
print("RMSE = "+str(np.sqrt(np.square(Y_test_pred-Y_test).mean())))
"""
Explanation: We compute the RMSE on the testing data and plot the prediction. With the default parameters, SVR gives only a rough fit to the training data and completely misses the testing data.
End of explanation
"""
nfold = 3
def fit_svr_val(x):
x = np.atleast_2d(np.exp(x))
fs = np.zeros((x.shape[0],1))
for i in range(x.shape[0]):
fs[i] = 0
for n in range(nfold):
idx = np.array(range(X_train.shape[0]))
idx_valid = np.logical_and(idx>=X_train.shape[0]/nfold*n, idx<X_train.shape[0]/nfold*(n+1))
idx_train = np.logical_not(idx_valid)
svr = svm.SVR(C=x[i,0], epsilon=x[i,1],gamma=x[i,2])
svr.fit(X_train[idx_train],Y_train[idx_train])
fs[i] += np.sqrt(np.square(svr.predict(X_train[idx_valid])-Y_train[idx_valid]).mean())
fs[i] *= 1./nfold
return fs
## -- Note that similar wrapper functions can be used to tune other Scikit-learn methods
"""
Explanation: Now let's try Bayesian Optimization. We first write a wrap function for fitting with SVR. The objective is the RMSE from cross-validation. We optimize the parameters in log space.
End of explanation
"""
domain =[{'name': 'C', 'type': 'continuous', 'domain': (0.,7.)},
{'name': 'epsilon','type': 'continuous', 'domain': (-12.,-2.)},
{'name': 'gamma', 'type': 'continuous', 'domain': (-12.,-2.)}]
"""
Explanation: We set the search interval of $C$ to be roughly $[0,1000]$ and the search interval of $\epsilon$ and $\gamma$ to be roughly $[1\times 10^{-5},0.1]$.
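Since fit_svr_val exponentiates its inputs, these log-space boxes map onto the stated parameter ranges; a quick numpy check:

```python
import numpy as np

# the domain for log C is [0, 7]; for log epsilon and log gamma it is [-12, -2]
print(np.exp([0.0, 7.0]))     # ~[1.0, 1096.6] -> C roughly in [1, 1000]
print(np.exp([-12.0, -2.0]))  # ~[6.1e-06, 0.135] -> roughly [1e-5, 0.1]
```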
End of explanation
"""
opt = GPyOpt.methods.BayesianOptimization(f = fit_svr_val, # function to optimize
domain = domain, # box-constraints of the problem
acquisition_type ='LCB', # LCB acquisition
acquisition_weight = 0.1) # Exploration exploitation
# it may take a few seconds
opt.run_optimization(max_iter=50)
opt.plot_convergence()
"""
Explanation: We, then, create the GPyOpt object and run the optimization procedure. It might take a while.
End of explanation
"""
x_best = np.exp(opt.X[np.argmin(opt.Y)])
print("The best parameters obtained: C="+str(x_best[0])+", epsilon="+str(x_best[1])+", gamma="+str(x_best[2]))
svr = svm.SVR(C=x_best[0], epsilon=x_best[1],gamma=x_best[2])
svr.fit(X_train,Y_train)
Y_train_pred = svr.predict(X_train)
Y_test_pred = svr.predict(X_test)
"""
Explanation: Let's show the best parameters found. They differ significantly from the default parameters.
End of explanation
"""
plot(X_train,Y_train_pred,'b',label='pred-train')
plot(X_test,Y_test_pred,'g',label='pred-test')
plot(X_train,Y_train,'rx',label='ground truth')
plot(X_test,Y_test,'rx')
legend(loc='best')
print("RMSE = "+str(np.sqrt(np.square(Y_test_pred-Y_test).mean())))
"""
Explanation: We can see SVR does a reasonable fit to the data. The result could be further improved by increasing the max_iter.
End of explanation
"""
from sklearn.datasets import load_iris, make_blobs
# iris = load_iris()
# # Take the first two features. We could avoid this by using a two-dim dataset
# X = iris.data[:, :2]
# y = iris.target
X, y = make_blobs(centers=3, cluster_std=4, random_state=1234)
for i in np.unique(y):
idx = y == i
plt.plot(X[idx,0], X[idx,1], 'o')
# plt.title("First two dimensions of Iris data")
# plt.xlabel('Sepal length')
# plt.ylabel('Sepal width');
"""
Explanation: Part II: Classification
In the same way as we did for the regression task, we can tune the parameters for a classification problem. The function to optimise is treated as a black box, so the procedure is very similar.
We can start by generating a synthetic dataset with make_blobs (loading the standard Iris dataset from scikit-learn is kept as a commented-out alternative).
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
train_index, test_index = next(StratifiedShuffleSplit(test_size=.4, random_state=123).split(X,y))
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
for i in np.unique(y_train):
idx = y_train == i
plt.plot(X_train[idx,0], X_train[idx,1], 'o')
plt.plot(X_test[:,0], X_test[:,1], 'rx');
plt.title("Learning and test sets")
plt.xlabel('Sepal length')
plt.ylabel('Sepal width');
"""
Explanation: Then, let's divide the dataset into a "learning" and a "test" set. The test set will be used later, in order to assess the performance of parameters selected by GPyOpt in relation to GridSearchCV and an SVC with the default parameters.
For each case, we consider an SVM with RBF kernel.
Let's plot both the learning (with colors related to their classes) and the test data (red "x").
End of explanation
"""
from sklearn.svm import SVC
svc = SVC(kernel='rbf').fit(X_train, y_train)
y_train_pred = svc.predict(X_train)
y_test_pred = svc.predict(X_test)
print("SVC with default parameters\nTest score: %.3f" % svc.score(X_test, y_test))
# utility functions taken from
# http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
"""
Explanation: 2.1 SVC with no parameter search
The first attempt is to use the SVC with the default parameters.
End of explanation
"""
f, ax = plt.subplots(1,1, squeeze=False)
for i in np.unique(y_test):
plot_contours(ax[0,0], svc, xx, yy, alpha=0.8)
idx = y_test == i
ax[0,0].plot(X_test[idx,0], X_test[idx,1], 'o')
ax[0,0].set_title("Difference between SVM prediction (background color) \nand real classes (point color) in test set")
# ax[0,0].set_xlim([4,8])
# ax[0,0].set_ylim([2,4.5])
# plt.xlabel('Sepal length')
# plt.ylabel('Sepal width');
plt.tight_layout()
"""
Explanation: We can visualise the surfaces defined by the SVM to differentiate the classes. Each point is colored according to its own class.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(SVC(kernel='rbf'), dict(C=np.logspace(-2,4,10), gamma=np.logspace(-12,-2,10))).fit(X_train,y_train)
print("GridSearchCV\nTest score: %.3f" % grid.score(X_test, y_test))
print("Best parameters extracted: %s" % grid.best_params_)
# y_test_pred = grid.predict(X_test)
f, ax = plt.subplots(1,1, squeeze=False)
for i in np.unique(y_test):
plot_contours(ax[0,0], grid, xx, yy, alpha=0.8)
idx = y_test == i
ax[0,0].plot(X_test[idx,0], X_test[idx,1], 'o')
ax[0,0].set_title("Difference between SVM prediction (background color) \n"
"and real classes (point color) in test set, using GridSearchCV")
# ax[0,0].set_xlim([4,8])
# ax[0,0].set_ylim([2,4.5])
# plt.xlabel('Sepal length')
# plt.ylabel('Sepal width');
plt.tight_layout()
"""
Explanation: 2.2 GridSearchCV
For comparison, we can use the standard GridSearchCV for the parameter selection. This will build a grid with the combination of all of the parameters, and then select the best combination of parameters that achieve the maximum validation score (or minimum error).
End of explanation
"""
domain =[{'name': 'C', 'type': 'continuous', 'domain': (-2.,4.)},
# {'name': 'kernel', 'type': 'categorical', 'domain': (0, 1)},
{'name': 'gamma', 'type': 'continuous', 'domain': (-12.,-2.)}
]
from sklearn.model_selection import cross_val_score
def fit_svc_val(x, mdl=None, cv=None):
x = np.atleast_2d(np.exp(x))
fs = np.zeros((x.shape[0], 1))
for i, params in enumerate(x):
dict_params = dict(zip([el['name'] for el in domain], params))
if 'kernel' in dict_params:
dict_params['kernel'] = 'rbf' if dict_params['kernel'] == 0 else 'poly'
mdl.set_params(**dict_params)
fs[i] = -np.mean(cross_val_score(mdl, X_train, y_train, cv=cv))
return fs
from functools import partial
opt = GPyOpt.methods.BayesianOptimization(f = partial(fit_svc_val, mdl=SVC(kernel='rbf')), # function to optimize
domain = domain,                    # box-constraints of the problem
acquisition_type ='LCB', # LCB acquisition
acquisition_weight = 0.2) # Exploration exploitation
opt.run_optimization(max_iter=50)
opt.plot_convergence()
opt.plot_acquisition()
x_best = np.exp(opt.X[np.argmin(opt.Y)])
best_params = dict(zip([el['name'] for el in domain], x_best))
svc_opt = SVC(**best_params)
svc_opt.fit(X_train, y_train)
# y_train_pred = svc.predict(X_train)
# y_test_pred = svc.predict(X_test)
print("GPyOpt\nTest score: %.3f" % svc_opt.score(X_test, y_test))
print("Best parameters extracted: %s" % best_params)
# y_test_pred = grid.predict(X_test)
f, ax = plt.subplots(1,1, squeeze=False)
for i in np.unique(y_test):
plot_contours(ax[0,0], svc_opt, xx, yy, alpha=0.8)
idx = y_test == i
ax[0,0].plot(X_test[idx,0], X_test[idx,1], 'o')
ax[0,0].set_title("Difference between SVM prediction (background color) \n"
"and real classes (point color) in test set, using GPyOpt")
# ax[0,0].set_xlim([4,8])
# ax[0,0].set_ylim([2,4.5])
# plt.xlabel('Sepal length')
# plt.ylabel('Sepal width');
plt.tight_layout()
"""
Explanation: 2.3 GPyOpt optimization
End of explanation
"""
|
patrickmineault/xcorr-notebooks | notebooks/Expansion-Schmexpansion.ipynb | mit | %config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
def f(x):
r = np.exp(-1 / x ** 2)
r[x == 0] = 0
return r
rg = np.linspace(-10, 10, 401)
plt.plot(rg, f(rg))
"""
Explanation: Expansion shmexpansion
In calculus, we were taught that this function does not have a Taylor expansion at $x=0$:
$$f(x) = \begin{cases}\exp(-1/x^2), & x \ne 0 \\ 0, & x = 0\end{cases}$$
This is because although the function is smooth along the real axis, it has an essential singularity at $x=0$ in the complex plane. All of its derivatives at $x=0$ are 0. It is infinitely flat around 0!
Let's plot it and see.
End of explanation
"""
def get_expansion(sigma):
nexpansion = 6
N = 1000
X = sigma * np.random.randn(N)
rnums = X.copy()
y = f(X)
Xs = []
coefficients = []
# Collect even powers of $x$ to form a polynomial regression.
Rs = []
for i in range(0, 2 * (nexpansion + 1), 2):
Rs.append(rg ** i)
Xs.append(X ** i)
# Tiny amount of trickery: orthogonalize the columns of our design matrix.
# This way, lower power coefficients won't change when we add higher power coefficients.
X = np.array(Xs).T
r = np.sqrt((X ** 2).sum(0, keepdims=True))
X = X / r
R = np.array(Rs).T
R = R / r
for i in range(1, X.shape[1]):
w = X[:, :i].T.dot(X[:, i])
X[:, i] -= X[:, :i].dot(w)
a = np.sqrt((X[:, i] ** 2).sum(0))
X[:, i] /= a
R[:, i] -= R[:, :i].dot(w)
R[:, i] /= a
# Check that the design matrix is indeed orthonormal.
np.testing.assert_allclose(X.T.dot(X), np.eye(X.shape[1]), atol=1E-6)
# Perform the polynomial regression. Note that we don't have to invert X.T.dot(X),
# since that is the identity matrix.
w_hat = (X.T @ y)
return w_hat, R
w_hat, R = get_expansion(1.0)
leg = ["True function"]
plt.plot(rg, f(rg))
for i in range(1, len(w_hat)):  # len(w_hat) == nexpansion + 1 inside get_expansion
plt.plot(rg, R[:, :i].dot(w_hat[:i]))
leg.append("degree %d" % ((i - 1) * 2))
plt.legend(leg)
plt.ylim((-.2, 1.2))
plt.xlim((-5, 5))
plt.title('Approximating exp(-1/x^2) with polynomials')
"""
Explanation: Doesn't look too promising! It's very flat at 0. All of its derivatives are exactly 0. However, even though its Taylor expansion is null, we can still form a perfectly good local polynomial approximation, that is, approximating the function as the sum of a constant, a linear trend, a quadratic, etc. The idea here is to probe the function at normally distributed locations around 0. We then perform a polynomial regression to approximate the function at these randomly chosen locations. This is equivalent to minimizing the expected sum-of-squares error:
$$\mathbb E_{p(x)}[(f(x) - h_0 - h_2 x^2 - h_4 x^4 - \dots)^2]$$
Note that we only include the even coefficients $h_i, i \in \{0, 2, 4, \ldots\}$ in this expression, since we know the function is even, $f(x) = f(-x)$.
Let's roll!
End of explanation
"""
w_hat, R = get_expansion(2.0)
leg = ["True function"]
plt.plot(rg, f(rg))
for i in range(1, len(w_hat)):  # len(w_hat) == nexpansion + 1 inside get_expansion
plt.plot(rg, R[:, :i].dot(w_hat[:i]))
leg.append("degree %d" % ((i - 1) * 2))
plt.legend(leg)
plt.ylim((-.2, 1.2))
plt.xlim((-5, 5))
plt.title('Approximating exp(-1/x^2) with polynomials, $\sigma$ = 2')
"""
Explanation: Even though all of the function's derivatives vanish at 0, it can still be approximated by a polynomial! In the Taylor expansion, we only probe the function at 0. However, in the polynomial regression, we probe it at multiple points. While the Taylor expansion is unique once the degree is known, there are infinitely many polynomial expansions that minimize different empirical risks. We can choose to minimize the empirical risk over a larger range of values of x by probing with normally distributed x values with a larger $\sigma$:
End of explanation
"""
w_hat, R = get_expansion(0.01)
leg = ["True function"]
plt.plot(rg, f(rg))
for i in range(1, len(w_hat)):  # len(w_hat) == nexpansion + 1 inside get_expansion
plt.plot(rg, R[:, :i].dot(w_hat[:i]))
leg.append("degree %d" % ((i - 1) * 2))
plt.legend(leg)
plt.ylim((-.2, 1.2))
plt.xlim((-5, 5))
plt.title('Approximating exp(-1/x^2) with polynomials, $\sigma$ = .01')
"""
Explanation: Now the expansion cares relatively less about the center. What if we set $\sigma = .01$?
End of explanation
"""
|
jlawman/jlawman.github.io | content/sklearn/Metrics - Classification Report Breakdown (Precision, Recall, F1).ipynb | mit | import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_blobs
data, labels = make_blobs(n_samples=100, n_features=2, centers=2,cluster_std=4,random_state=2)
plt.scatter(data[:,0], data[:,1], c = labels, cmap='coolwarm');
"""
Explanation: Create Dummy Data for Classification
Classify Dummy Data
Breakdown of Metrics Included in Classification Report
List of Other Classification Metrics Available in sklearn.metrics
1. Create Dummy Data for Classification
End of explanation
"""
#Import LinearSVC
from sklearn.svm import LinearSVC
#Create instance of Support Vector Classifier
svc = LinearSVC()
#Fit estimator to 70% of the data
svc.fit(data[:70], labels[:70])
#Predict final 30%
y_pred = svc.predict(data[70:])
#Establish true y values
y_true = labels[70:]
"""
Explanation: 2. Classify Data
End of explanation
"""
from sklearn.metrics import precision_score
print("Precision score: {}".format(precision_score(y_true,y_pred)))
"""
Explanation: 3. Breakdown of Metrics Included in Classification Report
Precision Score
TP - True Positives<br>
FP - False Positives<br>
Precision - Accuracy of positive predictions.<br>
Precision = TP/(TP + FP)
End of explanation
"""
from sklearn.metrics import recall_score
print("Recall score: {}".format(recall_score(y_true,y_pred)))
"""
Explanation: Recall Score
FN - False Negatives<br>
Recall (aka sensitivity or true positive rate): Fraction of positives that were correctly identified.<br>
Recall = TP/(TP+FN)
End of explanation
"""
from sklearn.metrics import f1_score
print("F1 Score: {}".format(f1_score(y_true,y_pred)))
"""
Explanation: F1 Score
F1 Score (aka F-Score or F-Measure) - A helpful metric for comparing two classifiers. The F1 Score takes both precision and recall into account, and is defined as their harmonic mean.
F1 = 2 x (precision x recall)/(precision + recall)
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(y_true,y_pred))
"""
Explanation: Classification Report
Report which includes Precision, Recall and F1-Score.
End of explanation
"""
from sklearn.metrics import confusion_matrix
import pandas as pd
confusion_df = pd.DataFrame(confusion_matrix(y_true,y_pred),
columns=["Predicted Class " + str(class_name) for class_name in [0,1]],
index = ["Class " + str(class_name) for class_name in [0,1]])
print(confusion_df)
"""
Explanation: Confusion Matrix
Confusion matrix allows you to look at the particular misclassified examples yourself and perform any further calculations as desired.
End of explanation
"""
|
hglanz/phys202-2015-work | assignments/assignment07/AlgorithmsEx01.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
"""
Explanation: Algorithms Exercise 1
Imports
End of explanation
"""
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
"""Split a string into a list of words, removing punctuation and stop words."""
s = s.replace("\n", " ")
for i in range(len(punctuation)):
s = s.replace(punctuation[i], " ")
#clean = ''.join([c.lower() for c in s if c not in punctuation])
clean = s.split(" ")
# Check if stop_words is a string
if stop_words != None:
if (isinstance(stop_words, str)):
stop_lst = stop_words.split(" ")
go_words = [w.lower() for w in clean if w not in stop_lst and len(w) > 0]
else:
go_words = [w.lower() for w in clean if w not in stop_words and len(w) > 0]
else:
go_words = [w.lower() for w in clean if len(w) > 0]
return(go_words)
#raise NotImplementedError()
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
"""
Explanation: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurences of the words in the list.
If stop_words is a space delimited string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
"""
def count_words(data):
"""Return a word count dictionary from the list of words in data."""
data.sort()
word_dict = {}
for i in range(len(data)):
if i == 0 or (data[i] != data[i-1]):
word_dict[data[i]] = 1
else:
word_dict[data[i]] += 1
return(word_dict)
#raise NotImplementedError()
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
"""
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
"""
def sort_word_counts(wc):
"""Return a list of 2-tuples of (word, count), sorted by count descending."""
word_tups = [(key, wc[key]) for key in wc]
res = sorted(word_tups, key = lambda word: word[1], reverse = True)
return(res)
#raise NotImplementedError()
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
"""
Explanation: Write a function sort_word_counts that return a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the higest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
End of explanation
"""
with open('mobydick_chapter1.txt', 'r') as f:
    data = f.read()
swc = sort_word_counts(count_words(tokenize(data, 'the of and a to in is it that as')))
print(len(swc))
print(swc)
#raise NotImplementedError()
assert swc[0]==('i',43)
assert len(swc)==848
"""
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, then sort and save the result in a variable named swc.
End of explanation
"""
# Cleveland-style dotplot: counts on the x axis, one row per word.
words, counts = zip(*swc[:50])
plt.figure(figsize=(6, 10))
plt.plot(counts, range(len(words)), 'ko')
plt.yticks(range(len(words)), words)
plt.xlabel('Count')
#raise NotImplementedError()
assert True # use this for grading the dotplot
"""
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
End of explanation
"""
|
junhwanjang/DataSchool | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb | mit | import numpy as np
x = np.arange(1, 101)
x
y = np.arange(101, 201)
y
%%time
z = np.zeros_like(x)
for i, (xi, yi) in enumerate(zip(x, y)):
z[i] = xi + yi
z
"""
Explanation: NumPy Operations
Vectorized operations
NumPy supports vectorized operations, which keep code simple and make computation fast. A vectorized operation replaces explicit loops with code that mirrors the vector and matrix operations of linear algebra.
For example, suppose we need to perform the following computation:
$$
x = \begin{bmatrix}1 \\ 2 \\ 3 \\ \vdots \\ 100 \end{bmatrix}, \;\;\;\;
y = \begin{bmatrix}101 \\ 102 \\ 103 \\ \vdots \\ 200 \end{bmatrix},
$$
$$z = x + y = \begin{bmatrix}1+101 \\ 2+102 \\ 3+103 \\ \vdots \\ 100+200 \end{bmatrix}= \begin{bmatrix}102 \\ 104 \\ 106 \\ \vdots \\ 300 \end{bmatrix}
$$
Without NumPy's vectorized operations, this computation must be coded with a loop, as follows.
End of explanation
"""
%%time
z = x + y
z
"""
Explanation: However, NumPy supports vectorized operations, so the whole computation reduces to a single addition, as shown below. The code is identical to the linear-algebra vector notation used above.
End of explanation
"""
x = np.arange(10)
x
a = 100
a * x
"""
Explanation: You can see that the vectorized version is also much faster.
Element-wise operations
NumPy's vectorized operations are element-wise: elements at the same position are combined. If we regard a NumPy ndarray as a linear-algebra vector or matrix, addition and subtraction coincide exactly with the NumPy operations.
Multiplying a vector by a scalar likewise matches the linear-algebra expression.
End of explanation
"""
x = np.arange(10)
y = np.arange(10)
x * y
np.dot(x, y)
x.dot(y)
"""
Explanation: NumPy multiplication, however, differs from the definition of the matrix product, i.e. the inner (dot) product. For that you must use the separate dot function or method.
End of explanation
"""
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
a >= b
"""
Explanation: Comparison operators are also element-wise, so they differ from the linear-algebra notion of comparison, in which every element of the two vectors or matrices must be equal.
End of explanation
"""
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
c = np.array([1, 2, 3, 4])
np.array_equal(a, b)
np.array_equal(a, c)
"""
Explanation: To compare entire arrays at once, use the array_equal function.
End of explanation
"""
a = np.arange(5)
a
np.exp(a)
10**a
np.log(a)
np.log10(a)
"""
Explanation: Mathematical functions provided by NumPy, such as the exponential and logarithm, also support element-wise vectorized operation.
End of explanation
"""
import math
a = [1, 2, 3]
math.exp(a)
"""
Explanation: If you do not use the functions provided by NumPy, vectorized operation is not possible.
End of explanation
"""
x = np.arange(5)
y = np.ones_like(x)
x + y
x + 1
"""
Explanation: Broadcasting
In linear algebra, two matrices must have the same shape to be added or subtracted. NumPy, however, also supports arithmetic between two ndarray objects of different shapes. This feature is called broadcasting: the smaller array is automatically repeated (expanded) to match the shape of the larger one.
For example, consider adding a scalar to a vector, which is undefined in linear algebra:
$$
x = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}, \;\;\;\;
x + 1 = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + 1 = ?
$$
NumPy uses broadcasting to expand the scalar to the same shape as the vector and then performs the addition:
$$
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \overset{\text{numpy}}+ 1 =
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + \begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} =
\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \end{bmatrix}
$$
End of explanation
"""
a = np.tile(np.arange(0, 40, 10), (3, 1)).T
a
b = np.array([0, 1, 2])
b
a + b
a = np.arange(0, 40, 10)[:, np.newaxis]
a
a + b
"""
Explanation: Broadcasting also applies to higher-dimensional arrays. See the following figure.
<img src="https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png" style="width: 60%; margin: 0 auto 0 auto;">
End of explanation
"""
x = np.array([1, 2, 3, 4])
x
np.sum(x)
x.sum()
x = np.array([1, 3, 2])
x.min()
x.max()
x.argmin() # index of minimum
x.argmax() # index of maximum
x = np.array([1, 2, 3, 1])
x.mean()
np.median(x)
np.all([True, True, False])
np.any([True, True, False])
a = np.zeros((100, 100), dtype=int)
a
np.any(a != 0)
np.all(a == a)
a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
((a <= b) & (b <= c)).all()
"""
Explanation: Dimension-reduction operations
If we treat the elements in one row of an ndarray as a single data set and take their mean, each row yields one number. For example, taking row means of a 10x5 two-dimensional array produces a one-dimensional vector of 10 numbers. Such operations are called dimension-reduction operations.
ndarray supports the following dimension-reduction functions and methods:
Max/min: min, max, argmin, argmax
Statistics: sum, mean, median, std, var
Boolean: all, any
End of explanation
"""
x = np.array([[1, 1], [2, 2]])
x
x.sum()
x.sum(axis=0) # columns (first dimension)
x.sum(axis=1) # rows (second dimension)
y = np.array([[1, 2, 3], [5, 6, 1]])
np.median(y, axis=-1) # last axis
"""
Explanation: For arrays with two or more dimensions, the axis argument specifies the dimension along which to compute. axis=0 operates along the first axis (down the rows, giving one result per column) and axis=1 along the second axis (across the columns, giving one result per row). If axis is omitted, the operation is applied over all elements of the array.
<img src="https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png", style="margin: 0 auto 0 auto;">
End of explanation
"""
a = np.array([[4, 3, 5], [1, 2, 1]])
a
np.sort(a)
np.sort(a, axis=1)
"""
Explanation: Sorting
The sort function sorts the elements of an array by value and returns a new, sorted array. For two or more dimensions, the axis argument again selects the direction of the sort.
End of explanation
"""
a.sort(axis=1)
a
"""
Explanation: The sort method, by contrast, is an in-place method that modifies the object's own data, so use it with care.
End of explanation
"""
a = np.array([4, 3, 1, 2])
j = np.argsort(a)
j
a[j]
"""
Explanation: If you want only the ordering of the elements rather than the sorted data itself, use the argsort function.
End of explanation
"""
|
ercius/openNCEM | ncempy/notebooks/TitanX 4D-STEM Basic.ipynb | gpl-3.0 | dirName = r'C:\Users\Peter\Data\Te NP 4D-STEM'
fName = r'07_45x8 ss=5nm_spot11_CL=100 0p1s_alpha=4p63mrad_bin=4_300kV.dm4'
%matplotlib widget
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import ncempy.io as nio
import ncempy.algo as nalgo
import ipywidgets as widgets
from ipywidgets import interact, interactive
"""
Explanation: NCEM's 4D-STEM Basic Jupyter Notebook
Quickly process and investigate 4D-STEM data from the TitanX
To start:
Change the dirName and fName
Select Cell -- Run All
Scroll to bottom and investigate your data
End of explanation
"""
#Load the data using ncempy
fPath = Path(dirName) / Path(fName)
with nio.dm.fileDM(fPath.as_posix()) as dm1:
im1 = dm1.getDataset(0)
scanI = int(dm1.allTags['.ImageList.2.ImageTags.Series.nimagesx'])
scanJ = int(dm1.allTags['.ImageList.2.ImageTags.Series.nimagesy'])
numkI = im1['data'].shape[2]
numkJ = im1['data'].shape[1]
data = im1['data'].reshape([scanJ,scanI,numkJ,numkI])
print('Data shape is {}'.format(data.shape))
"""
Explanation: Import the data and reshape to 4D
Change dirName to the directory where your data lives
Change the fName to the full file name
End of explanation
"""
fg1,ax1 = plt.subplots(3,1,figsize=(10,6))
ax1[0].imshow(data[0,0,:,:])
# Find center of intensity
cm0 = nalgo.moments.centroid(nalgo.moments.moments(data[0,0,:,:].astype(np.float64)))
cm0 = [int(ii) for ii in cm0] # change to integer
# Plot the first diffraction pattern and found center
ax1[0].plot(cm0[1],cm0[0],'rx')
ax1[0].legend(['Center of central beam'])
ax1[0].set(title='First diffraction pattern\nCenter = {}'.format(cm0))
# Generate a bright field image
box0 = 25
BF0 = np.sum(np.sum(data[:,:,cm0[0]-box0:cm0[0]+box0,cm0[1]-box0:cm0[1]+box0],axis=3),axis=2)
ax1[1].imshow(BF0)
ax1[1].set(title='Bright field image')
ax1[2].imshow(np.sum(data, axis=(2,3)))
ax1[2].set(title='Sum of all diffraction intensity')
fg1.tight_layout()
"""
Explanation: Find the location of the zero beam and generate BF
Assumes the first diffraction pattern will have the least structure.
Use center of intensity to find pattern center
End of explanation
"""
im1 = data[:,:,::1,::1]
fg1,(ax1,ax2) = plt.subplots(1,2,figsize=(8,8))
p1 = ax1.plot(4,4,'or')
p1 = p1[0]
ax1.imshow(BF0)
im2 = ax2.imshow(np.log(im1[4,4,:,:]+50))
#Updates the plots
def axUpdate(i,j):
p1.set_xdata(i)
p1.set_ydata(j)
im2.set_array(np.log(im1[j,i,:,:]+50))
ax1.set(title='Bright Field Image', xlabel='i', ylabel='j')
ax2.set(title='Diffraction pattern (log(I))')
#Connect the function and the sliders
w = interactive(axUpdate, i=(0,BF0.shape[1]-1), j=(0,BF0.shape[0]-1))
wB = widgets.Button(
description='Save current DP',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip=''
)
def saveCurrentDP(a):
curI = w.children[0].get_interact_value()
curJ = w.children[1].get_interact_value()
im = Image.fromarray(data[curJ,curI,:,:])
outName = fPath.as_posix() + '_DP{}i_{}j.tif'.format(curI,curJ)
im.save(outName)
wB.on_click(saveCurrentDP)
display(w)
display(wB)
"""
Explanation: Investigate the data
Use the sliders to scan back and forth along the two scan axes; the current position is updated on the bright-field image.
End of explanation
"""
DPmax = np.max(im1.reshape((im1.shape[0]*im1.shape[1],im1.shape[2],im1.shape[3])),axis=0)
#Plot the image
fg2,ax2 = plt.subplots(1,1)
ax2.imshow(np.sqrt(DPmax))
ax2.set(title='Maximum intensity for each detector pixel (sqrt)');
"""
Explanation: Find the maximum intensity for every pixel in the diffraction pattern
Useful to see features close to the noise floor
End of explanation
"""
|
NeuroDataDesign/seelviz | albert/prob/Tensor+Model.ipynb | apache-2.0 | # Imports needed by this fragment; img, tenfit, and FA come from earlier cells not shown here.
import numpy as np
import nibabel as nib
from dipy.reconst.dti import color_fa
FA = np.clip(FA, 0, 1)
RGB = color_fa(FA, tenfit.evecs)
nib.save(nib.Nifti1Image(np.array(255 * RGB, 'uint8'), img.get_affine()), 'tensor_rgb.nii.gz')
print('Computing tensor ellipsoids in a part of the splenium of the CC')
from dipy.data import get_sphere
sphere = get_sphere('symmetric724')
from dipy.viz import fvtk
ren = fvtk.ren()
evals = tenfit.evals[13:43, 44:74, 28:29]
evecs = tenfit.evecs[13:43, 44:74, 28:29]
cfa = RGB[13:43, 44:74, 28:29]
cfa /= cfa.max()
fvtk.add(ren, fvtk.tensor(evals, evecs, cfa, sphere))
fvtk.show(ren)
"""
Explanation: The FA is the normalized variance of the eigenvalues of the tensor:
$FA = \sqrt{\frac{1}{2}\frac{(\lambda_1-\lambda_2)^2+(\lambda_1-\lambda_3)^2+(\lambda_2-\lambda_3)^2}{\lambda_1^2+\lambda_2^2+\lambda_3^2}}$
End of explanation
"""
fvtk.clear(ren)
tensor_odfs = tenmodel.fit(data[20:50, 55:85, 38:39]).odf(sphere)
fvtk.add(ren, fvtk.sphere_funcs(tensor_odfs, sphere, colormap=None))
fvtk.show(ren)
"""
Explanation: Tensor ellipsoids normalized to increase contrast
End of explanation
"""
|
LimeeZ/phys292-2015-work | days/day06/Matplotlib.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Visualization with Matplotlib
Learning Objectives: Learn how to make basic plots using Matplotlib's pylab API and how to use the Matplotlib documentation.
This notebook focuses only on the Matplotlib API, rather that the broader question of how you can use this API to make effective and beautiful visualizations.
Imports
The following imports should be used in all of your notebooks where Matplotlib is used:
End of explanation
"""
t = np.linspace(0, 10.0, 100)
plt.plot(t, np.sin(t))
plt.xlabel('Time')
plt.ylabel('Signal')
plt.title('My Plot'); # suppress text output
"""
Explanation: Overview
The following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.
Figures and Axes
In Matplotlib a single visualization is a Figure.
A Figure can have multiple areas, called subplots. Each subplot is an Axes.
If you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.
All plotting commands apply to the current Figure and Axes.
The following functions can be used to create and manage Figure and Axes objects.
Function | Description
:-----------------|:----------------------------------------------------------
figure | Creates a new Figure
gca | Get the current Axes instance
savefig | Save the current Figure to a file
sca | Set the current Axes instance
subplot | Create a new subplot Axes for the current Figure
subplots | Create a new Figure and a grid of subplots Axes
Plotting Functions
Once you have created a Figure and one or more Axes objects, you can use the following function to put data onto that Axes.
Function | Description
:-----------------|:--------------------------------------------
bar | Make a bar plot
barh | Make a horizontal bar plot
boxplot | Make a box and whisker plot
contour | Plot contours
contourf | Plot filled contours
hist | Plot a histogram
hist2d | Make a 2D histogram plot
imshow | Display an image on the axes
matshow | Display an array as a matrix
pcolor | Create a pseudocolor plot of a 2-D array
pcolormesh | Plot a quadrilateral mesh
plot | Plot lines and/or markers
plot_date | Plot with data with dates
polar | Make a polar plot
scatter | Make a scatter plot of x vs y
Plot modifiers
You can then use the following functions to modify your visualization.
Function | Description
:-----------------|:---------------------------------------------------------------------
annotate | Create an annotation: a piece of text referring to a data point
box | Turn the Axes box on or off
clabel | Label a contour plot
colorbar | Add a colorbar to a plot
grid | Turn the Axes grids on or off
legend | Place a legend on the current Axes
loglog | Make a plot with log scaling on both the x and y axis
semilogx | Make a plot with log scaling on the x axis
semilogy | Make a plot with log scaling on the y axis
subplots_adjust | Tune the subplot layout
tick_params | Change the appearance of ticks and tick labels
ticklabel_format| Change the ScalarFormatter used by default for linear axes
tight_layout | Automatically adjust subplot parameters to give specified padding
text | Add text to the axes
title | Set a title of the current axes
xkcd | Turns on XKCD sketch-style drawing mode
xlabel | Set the x axis label of the current axis
xlim | Get or set the x limits of the current axes
xticks | Get or set the x-limits of the current tick locations and labels
ylabel | Set the y axis label of the current axis
ylim | Get or set the y-limits of the current axes
yticks | Get or set the y-limits of the current tick locations and labels
Basic plotting
For now, we will work with basic line plots (plt.plot) to show how the Matplotlib pylab plotting API works. In this case, we don't create a Figure so Matplotlib does that automatically.
End of explanation
"""
f = plt.figure(figsize=(9,6)) # 9" x 6"; the classic default is 8" x 6"
plt.plot(t, np.sin(t), 'r.');
plt.xlabel('x')
plt.ylabel('y')
"""
Explanation: Basic plot modification
With a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.
End of explanation
"""
from matplotlib import lines
lines.lineStyles.keys()
from matplotlib import markers
markers.MarkerStyle.markers.keys()
"""
Explanation: Here is a list of the single character color strings:
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
The following will show all of the line and marker styles:
End of explanation
"""
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(-1.0, 11.0)
plt.ylim(-1.0, 1.0)
"""
Explanation: To change the plot's limits, use xlim and ylim:
End of explanation
"""
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(0.0, 10.0)
plt.ylim(-1.0, 1.0)
plt.xticks([0,5,10], ['zero','five','10'])
plt.tick_params(axis='y', direction='inout', length=10)
"""
Explanation: You can change the ticks along a given axis by using xticks, yticks and tick_params:
End of explanation
"""
plt.plot(np.random.rand(100), 'b-')
plt.grid(True)
plt.box(False)
"""
Explanation: Box and grid
You can enable a grid or disable the box. Notice that the ticks and tick labels remain.
End of explanation
"""
plt.plot(t, np.sin(t), label='sin(t)')
plt.plot(t, np.cos(t), label='cos(t)')
plt.xlabel('t')
plt.ylabel('Signal(t)')
plt.ylim(-1.5, 1.5)
plt.xlim(right=12.0)
plt.legend()
"""
Explanation: Multiple series
Multiple calls to a plotting function will all target the current Axes:
End of explanation
"""
plt.subplot(2,1,1) # 2 rows x 1 col, plot 1
plt.plot(t, np.exp(0.1*t))
plt.ylabel('Exponential')
plt.subplot(2,1,2) # 2 rows x 1 col, plot 2
plt.plot(t, t**2)
plt.ylabel('Quadratic')
plt.xlabel('x')
plt.tight_layout()
"""
Explanation: Subplots
Subplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.
The first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure:
End of explanation
"""
f, ax = plt.subplots(2, 2)
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
"""
Explanation: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner:
End of explanation
"""
f, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
if i==1:
plt.xlabel('x')
if j==0:
plt.ylabel('y')
plt.tight_layout()
"""
Explanation: The subplots function also makes it easy to pass arguments to Figure and to share axes:
End of explanation
"""
plt.plot(t, np.sin(t), marker='o', color='darkblue',
linestyle='--', alpha=0.3, markersize=10)
"""
Explanation: More marker and line styling
All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. For more information, see:
Controlling line properties
Specifying colors
End of explanation
"""
|
tensorflow/model-optimization | tensorflow_model_optimization/g3doc/guide/combine/pqat_example.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
"""
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/combine/pqat_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/pqat_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/pqat_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/combine/pqat_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Pruning preserving quantization aware training (PQAT) Keras example
Overview
This is an end-to-end example showing the usage of the pruning-preserving quantization aware training (PQAT) API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline.
Other pages
For an introduction to the pipeline and other available techniques, see the collaborative optimization overview page.
Contents
In the tutorial, you will:
Train a tf.keras model for the MNIST dataset from scratch.
Fine-tune the model with pruning, using the sparsity API, and see the accuracy.
Apply QAT and observe the loss of sparsity.
Apply PQAT and observe that the sparsity applied earlier has been preserved.
Generate a TFLite model and observe the effects of applying PQAT on it.
Compare the achieved PQAT model accuracy with a model quantized using post-training quantization.
Setup
You can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide.
End of explanation
"""
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),
activation=tf.nn.relu),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
"""
Explanation: Train a tf.keras model for MNIST without pruning
End of explanation
"""
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
"""
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
"""
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep()
]
pruned_model = prune_low_magnitude(model, **pruning_params)
# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
pruned_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
pruned_model.summary()
"""
Explanation: Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to prune the whole pre-trained model to demonstrate and observe its effectiveness in reducing the model size when applying zip, while maintaining accuracy. For how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.
Define the model and apply the sparsity API
The model needs to be pre-trained before using the sparsity API.
End of explanation
"""
# Fine-tune model
pruned_model.fit(
train_images,
train_labels,
epochs=3,
validation_split=0.1,
callbacks=callbacks)
"""
Explanation: Fine-tune the model and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
End of explanation
"""
def print_model_weights_sparsity(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
# ignore auxiliary quantization weights
if "quantize_layer" in weight.name:
continue
weight_size = weight.numpy().size
zero_num = np.count_nonzero(weight == 0)
print(
f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
f"({zero_num}/{weight_size})",
)
"""
Explanation: Define helper functions to calculate and print the sparsity of the model.
End of explanation
"""
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)
"""
Explanation: Check that the model was correctly pruned. We need to strip the pruning wrapper first.
End of explanation
"""
_, pruned_model_accuracy = pruned_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', pruned_model_accuracy)
"""
Explanation: For this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
End of explanation
"""
# QAT
qat_model = tfmot.quantization.keras.quantize_model(stripped_pruned_model)
qat_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train qat model:')
qat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
# PQAT
quant_aware_annotate_model = tfmot.quantization.keras.quantize_annotate_model(
stripped_pruned_model)
pqat_model = tfmot.quantization.keras.quantize_apply(
quant_aware_annotate_model,
tfmot.experimental.combine.Default8BitPrunePreserveQuantizeScheme())
pqat_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train pqat model:')
pqat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
print("QAT Model sparsity:")
print_model_weights_sparsity(qat_model)
print("PQAT Model sparsity:")
print_model_weights_sparsity(pqat_model)
"""
Explanation: Apply QAT and PQAT and check effect on model sparsity in both cases
Next, we apply both QAT and pruning-preserving QAT (PQAT) to the pruned model and observe that PQAT preserves the sparsity of the pruned model. Note that we stripped the pruning wrappers from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the PQAT API.
End of explanation
"""
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in kilobytes.
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)/1000
"""
Explanation: See compression benefits of PQAT model
Define helper function to get zipped model file.
End of explanation
"""
# QAT model
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
qat_tflite_model = converter.convert()
qat_model_file = 'qat_model.tflite'
# Save the model.
with open(qat_model_file, 'wb') as f:
f.write(qat_tflite_model)
# PQAT model
converter = tf.lite.TFLiteConverter.from_keras_model(pqat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
pqat_tflite_model = converter.convert()
pqat_model_file = 'pqat_model.tflite'
# Save the model.
with open(pqat_model_file, 'wb') as f:
f.write(pqat_tflite_model)
print("QAT model size: ", get_gzipped_model_size(qat_model_file), ' KB')
print("PQAT model size: ", get_gzipped_model_size(pqat_model_file), ' KB')
"""
Explanation: Since this is a small model, the difference between the two models isn't very noticeable. Applying pruning and PQAT to a bigger production model would yield a more significant compression.
End of explanation
"""
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print(f"Evaluated on {i} results so far.")
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
"""
Explanation: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
End of explanation
"""
interpreter = tf.lite.Interpreter(pqat_model_file)
interpreter.allocate_tensors()
pqat_test_accuracy = eval_model(interpreter)
print('Pruned and quantized TFLite test_accuracy:', pqat_test_accuracy)
print('Pruned TF test accuracy:', pruned_model_accuracy)
"""
Explanation: You evaluate the model, which has been pruned and quantized, and then see the accuracy from TensorFlow persists in the TFLite backend.
End of explanation
"""
def mnist_representative_data_gen():
for image in train_images[:1000]:
image = np.expand_dims(image, axis=0).astype(np.float32)
yield [image]
"""
Explanation: Apply post-training quantization and compare to PQAT model
Next, we use normal post-training quantization (no fine-tuning) on the pruned model and check its accuracy against the PQAT model. This demonstrates why you would need to use PQAT to improve the quantized model's accuracy.
First, define a generator for the calibration dataset from the first 1000 training images.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(stripped_pruned_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = mnist_representative_data_gen
post_training_tflite_model = converter.convert()
post_training_model_file = 'post_training_model.tflite'
# Save the model.
with open(post_training_model_file, 'wb') as f:
f.write(post_training_tflite_model)
# Compare accuracy
interpreter = tf.lite.Interpreter(post_training_model_file)
interpreter.allocate_tensors()
post_training_test_accuracy = eval_model(interpreter)
print('PQAT TFLite test_accuracy:', pqat_test_accuracy)
print('Post-training (no fine-tuning) TFLite test accuracy:', post_training_test_accuracy)
"""
Explanation: Quantize the model and compare accuracy to the previously acquired PQAT model. Note that the model quantized with fine-tuning achieves higher accuracy.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb | apache-2.0 | !pip3 install pandas-gbq
%%bash
git clone https://github.com/GoogleCloudPlatform/training-data-analyst.git \
--depth 1
cd training-data-analyst/blogs/gcp_forecasting
"""
Explanation: Overview
Time-series forecasting problems are ubiquitous throughout the business world and can be posed as a supervised machine learning problem.
A common approach to creating features and labels is to use a sliding window, where the features are historical entries and the label(s) represent entries in the future. As any data scientist who works with time series knows, this sliding window approach can be tricky to get right.
In this notebook we share a workflow to tackle time-series problems.
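The sliding-window idea can be sketched in a few lines of NumPy. This is a hypothetical illustration on a toy series; the `create_rolling_features_label` function used later in this notebook handles the details for real data:

```python
import numpy as np

# Sketch of the sliding-window setup: each feature row holds `window_size`
# historical points; the label is the value `horizon` steps after the
# window ends.
def sliding_window(series, window_size, horizon):
    X, y = [], []
    for start in range(len(series) - window_size - horizon + 1):
        end = start + window_size
        X.append(series[start:end])
        y.append(series[end + horizon - 1])
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)  # toy series 0..9
X, y = sliding_window(series, window_size=3, horizon=2)
print(X.shape, y.shape)  # (6, 3) (6,)
print(X[0], y[0])        # [0. 1. 2.] 4.0
```

Each row of `X` is one window of history, and `y` holds the corresponding future observation, which is exactly the feature/label layout the rest of the workflow builds at scale.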
Dataset
For this demo we will be using New York City real estate data obtained from nyc.gov. The data starts in 2003. The data can be loaded into BigQuery with the following code:
```python
Read data. Data was collected from nyc open data repository.
import pandas as pd
dfr = pd.read_csv('https://storage.googleapis.com/asl-testing/data/nyc_open_data_real_estate.csv')
Upload to BigQuery.
PROJECT = 'YOUR-PROJECT-HERE'
DATASET = 'nyc_real_estate'
TABLE = 'residential_sales'
dfr.to_gbq('{}.{}'.format(DATASET, TABLE), PROJECT)
```
Objective
The goal of this notebook is to show how to forecast using Pandas and BigQuery. The steps achieved in this notebook are the following:
1. Building a machine learning (ML) forecasting model locally
* Create features and labels on a subsample of data
* Train a model using sklearn
2. Building and scaling out an ML model using Google BigQuery
* Create features and labels on the full dataset using BigQuery.
* Train the model on the entire dataset using BigQuery ML
3. Building an advanced forecasting model using a recurrent neural network (RNN)
* Create features and labels on the full dataset using BigQuery.
* Train a model using TensorFlow
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
BigQuery
Cloud storage
AI Platform
The BigQuery and Cloud Storage costs are < \$0.05 and the AI Platform training job uses approximately 0.68 ML units or ~\$0.33.
Pandas: Rolling window for time-series forecasting
We have created a Pandas function, create_rolling_features_label, that automatically creates the features/label setup. It is suitable for smaller datasets and for local testing before training in the cloud. We have also created a BigQuery script that creates these rolling windows for large datasets.
Data Exploration
To keep this notebook self-contained, let's clone the training-data-analyst repo so we have access to the feature and label creation functions in time_series.py and scalable_time_series.py. We'll also be using the pandas_gbq package, so make sure it is installed.
End of explanation
"""
%matplotlib inline
import pandas as pd
import pandas_gbq as gbq
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
import time_series
# Allow you to easily have Python variables in SQL query.
from IPython.core.magic import register_cell_magic
from IPython import get_ipython
@register_cell_magic('with_globals')
def with_globals(line, cell):
contents = cell.format(**globals())
if 'print' in line:
print(contents)
get_ipython().run_cell(contents)
"""
Explanation: After cloning the above repo, we can import pandas and our custom module time_series.py.
End of explanation
"""
dfr = pd.read_csv('https://storage.googleapis.com/asl-testing/data/nyc_open_data_real_estate.csv')
# Upload to BigQuery.
PROJECT = "[your-project-id]"
DATASET = 'nyc_real_estate'
TABLE = 'residential_sales'
BUCKET = "[your-bucket]" # Used later.
gbq.to_gbq(dfr, '{}.{}'.format(DATASET, TABLE), PROJECT, if_exists='replace')
"""
Explanation: For this demo we will be using New York City real estate data obtained from nyc.gov. This public dataset starts in 2003. The data can be loaded into BigQuery with the following code:
End of explanation
"""
SOURCE_TABLE = TABLE
FILTER = '''residential_units = 1 AND sale_price > 10000
AND sale_date > TIMESTAMP('2010-12-31 00:00:00')'''
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
borough,
neighborhood,
building_class_category,
tax_class_at_present,
block,
lot,
ease_ment,
building_class_at_present,
address,
apartment_number,
zip_code,
residential_units,
commercial_units,total_units,
land_square_feet,
gross_square_feet,
year_built,
tax_class_at_time_of_sale,
building_class_at_time_of_sale,
sale_price,
sale_date,
price_per_sq_ft
FROM
{SOURCE_TABLE}
WHERE
{FILTER}
ORDER BY
sale_date
LIMIT
100
df.head()
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
neighborhood,
COUNT(*) AS cnt
FROM
{SOURCE_TABLE}
WHERE
{FILTER}
GROUP BY
neighborhood
ORDER BY
cnt
"""
Explanation: Since we are doing local modeling first, let's use a subsample of the data. Later we will train on all of the data in the cloud.
End of explanation
"""
ax = df.set_index('neighborhood').cnt\
.tail(10)\
.plot(kind='barh');
ax.set_xlabel('total sales');
"""
Explanation: The most sales are from the upper west side, midtown west, and the upper east side.
End of explanation
"""
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
neighborhood,
APPROX_QUANTILES(sale_price, 100)[
OFFSET
(50)] AS median_price
FROM
{SOURCE_TABLE}
WHERE
{FILTER}
GROUP BY
neighborhood
ORDER BY
median_price
ax = df.set_index('neighborhood').median_price\
.tail(10)\
.plot(kind='barh');
ax.set_xlabel('median price')
"""
Explanation: SOHO and Civic Center are the most expensive neighborhoods.
End of explanation
"""
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
sale_week,
APPROX_QUANTILES(sale_price, 100)[
OFFSET
(50)] AS median_price
FROM (
SELECT
TIMESTAMP_TRUNC(sale_date, week) AS sale_week,
sale_price
FROM
{SOURCE_TABLE}
WHERE
{FILTER})
GROUP BY
sale_week
ORDER BY
sale_week
sales = pd.Series(df.median_price)
sales.index= pd.DatetimeIndex(df.sale_week.dt.date)
sales.head()
ax = sales.plot(figsize=(8,4), label='median_price')
ax = sales.rolling(10).mean().plot(ax=ax, label='10 week rolling average')
ax.legend()
"""
Explanation: Build features
Let's create features for building a machine learning model:
Aggregate median sales for each week. Prices are noisy and by grouping by week, we will smooth out irregularities.
Create a rolling window to split the single long time series into smaller windows. One feature vector will contain a single window, and the label will be a single observation (or a window, for multiple predictions) occurring after the window.
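The first step can be sketched in pandas with made-up dates and prices (a hypothetical illustration; the notebook performs the real aggregation in SQL):

```python
import pandas as pd

# Resample noisy individual sale prices to weekly medians, smoothing out
# irregularities. Dates and prices below are illustrative only.
daily = pd.Series(
    [500_000, 750_000, 600_000, 900_000],
    index=pd.to_datetime(
        ["2015-01-05", "2015-01-07", "2015-01-12", "2015-01-14"]))
weekly_median = daily.resample("W").median()
print(weekly_median)
```

Each resulting row is the median price for one week, which becomes a single entry in the rolling windows built next.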
End of explanation
"""
WINDOW_SIZE = 52 * 1
HORIZON = 4*6
MONTHS = 0
WEEKS = 1
LABELS_SIZE = 1
df = time_series.create_rolling_features_label(sales, window_size=WINDOW_SIZE, pred_offset=HORIZON)
df = time_series.add_date_features(df, df.index, months=MONTHS, weeks=WEEKS)
df.head()
"""
Explanation: Sliding window
Let's create our features. We will use the create_rolling_features_label function that automatically creates the features/label setup.
Create the features and labels.
End of explanation
"""
# Features, label.
X = df.drop('label', axis=1)
y = df['label']
# Train/test split. Splitting on time.
train_ix = time_series.is_between_dates(y.index,
end='2015-12-30')
test_ix = time_series.is_between_dates(y.index,
start='2015-12-30',
end='2018-12-30 08:00:00')
X_train, y_train = X.iloc[train_ix], y.iloc[train_ix]
X_test, y_test = X.iloc[test_ix], y.iloc[test_ix]
print(X_train.shape, X_test.shape)
"""
Explanation: Let's train our model using all weekly median prices from 2003 -- 2015. Then we will test our model's performance on prices from 2016 -- 2018
End of explanation
"""
mean = X_train.mean()
std = X_train.std()
def zscore(X):
return (X-mean)/std
X_train = zscore(X_train)
X_test = zscore(X_test)
"""
Explanation: Apply z-score normalization to the features, using the training set's mean and standard deviation.
End of explanation
"""
df_baseline = y_test.to_frame(name='label')
df_baseline['pred'] = y_train.mean()
# Join mean predictions with test labels.
baseline_global_metrics = time_series.Metrics(df_baseline.pred,
df_baseline.label)
baseline_global_metrics.report("Global Baseline Model")
# Train model. Several regressors work here; uncomment one to try it.
# Only the last assignment to `cl` takes effect.
# cl = RandomForestRegressor(n_estimators=500, max_features='sqrt',
#                            random_state=10, criterion='mse')
# cl = Ridge(100)
cl = GradientBoostingRegressor()
cl.fit(X_train, y_train)
pred = cl.predict(X_test)
model_metrics = time_series.Metrics(y_test, pred)
model_metrics.report("Gradient Boosting Model")
"""
Explanation: Initial model
Baseline
Build a naive model that just uses the mean of the training set.
End of explanation
"""
# Data frame to query for plotting
df_res = pd.DataFrame({'pred': pred, 'baseline': df_baseline.pred, 'y_test': y_test})
metrics = time_series.Metrics(df_res.y_test, df_res.pred)
ax = df_res.iloc[:].plot(y=[ 'pred', 'y_test'],
style=['b-','k-'],
figsize=(10,5))
ax.set_title('rmse: {:2.2f}'.format(metrics.rmse), size=16);
ax.set_ylim(20,)
df_res.head()
"""
Explanation: The regression model performs 35% better than the baseline model.
Observations:
* Linear Regression does okay for this dataset (Regularization helps generalize the model)
* Random Forest is better -- doesn't require a lot of tuning. It performs a bit better than regression.
Gradient Boosting also performs better than regression
Interpret results
End of explanation
"""
# Import BigQuery module
from google.cloud import bigquery
# Import external custom module containing SQL queries
import scalable_time_series
# Define hyperparameters
value_name = "med_sales_price"
downsample_size = 7 # 7 days into 1 week
window_size = 52
labels_size = 1
horizon = 1
# Construct a BigQuery client object.
client = bigquery.Client()
# Set dataset_id to the ID of the dataset to create.
sink_dataset_name = "temp_forecasting_dataset"
dataset_id = "{}.{}".format(client.project, sink_dataset_name)
# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset.from_string(dataset_id)
# Specify the geographic location where the dataset should reside.
dataset.location = "US"
# Send the dataset to the API for creation.
# Raises google.api_core.exceptions.Conflict if the Dataset already
# exists within the project.
try:
dataset = client.create_dataset(dataset) # API request
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
except Exception:
print("Dataset {}.{} already exists".format(
client.project, dataset.dataset_id))
"""
Explanation: BigQuery modeling
We have observed that there is signal in our data and that our smaller, local model works. Let's scale this model out to the cloud by training a BigQuery Machine Learning (BQML) model on the full dataset.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
BigQuery is automatically enabled in new projects. To activate BigQuery in a pre-existing project, go to Enable the BigQuery API.
Enter your project ID in the cell below.
Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
BigQuery > BigQuery Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
computer.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below.
Import libraries
Import supporting modules:
End of explanation
"""
# Call BigQuery and examine in dataframe
source_dataset = "nyc_real_estate"
source_table_name = "all_sales"
query_create_date_range = scalable_time_series.create_date_range(
client.project, source_dataset, source_table_name)
df = client.query(query_create_date_range + "LIMIT 100").to_dataframe()
df.head(5)
"""
Explanation: We need to create a date range table in BigQuery so that we can join our data to that to get the correct sequences.
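Conceptually, the date-range table is just one row per week between the series start and end. A pandas sketch with made-up dates (the real table is built by the SQL above):

```python
import pandas as pd

# One row per week, so that weeks with no recorded sales still appear
# when the sales data is joined against this range.
date_range = pd.DataFrame(
    {"sale_week": pd.date_range("2011-01-02", "2011-02-27", freq="W")})
print(len(date_range))  # 9 weekly rows
```

Joining sales against such a range keeps the weekly sequence gap-free, which the rolling windows depend on.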
End of explanation
"""
job_config = bigquery.QueryJobConfig()
# Set the destination table
table_name = "start_end_timescale_date_range"
table_ref = client.dataset(sink_dataset_name).table(table_name)
job_config.destination = table_ref
job_config.write_disposition = "WRITE_TRUNCATE"
# Start the query, passing in the extra configuration.
query_job = client.query(
query=query_create_date_range,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
"""
Explanation: Execute query and write to BigQuery table.
End of explanation
"""
# Call BigQuery and examine in dataframe
sales_dataset_table = source_dataset + "." + source_table_name
query_bq_sub_sequences = scalable_time_series.bq_create_rolling_features_label(
client.project, sink_dataset_name, table_name, sales_dataset_table,
value_name, downsample_size, window_size, horizon, labels_size)
print(query_bq_sub_sequences[0:500])
%%with_globals
%%bigquery --project $PROJECT
{query_bq_sub_sequences}
LIMIT 100
"""
Explanation: Now that we have the date range table created we can create our training dataset for BQML.
End of explanation
"""
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_forecasting"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except Exception:
print("Dataset already exists")
"""
Explanation: Create BigQuery dataset
Until now we've just been reading an existing BigQuery table; now we're going to create our own, so we need somewhere to put it. In BigQuery parlance, a Dataset is a folder for tables.
We will take advantage of BigQuery's Python Client to create the dataset.
End of explanation
"""
feature_list = ["price_ago_{time}".format(time=time)
for time in range(window_size, 0, -1)]
label_list = ["price_ahead_{time}".format(time=time)
for time in range(1, labels_size + 1)]
select_list = ",".join(feature_list + label_list)
select_string = "SELECT {select_list} FROM ({query})".format(
select_list=select_list,
query=query_bq_sub_sequences)
concat_vars = []
concat_vars.append("CAST(feat_seq_start_date AS STRING)")
concat_vars.append("CAST(lab_seq_end_date AS STRING)")
farm_finger = "FARM_FINGERPRINT(CONCAT({concat_vars}))".format(
concat_vars=", ".join(concat_vars))
sampling_clause = "ABS(MOD({farm_finger}, 100))".format(
farm_finger=farm_finger)
bqml_train_query = "{select_string} WHERE {sampling_clause} < 80".format(
select_string=select_string, sampling_clause=sampling_clause)
bqml_eval_query = "{select_string} WHERE {sampling_clause} >= 80".format(
select_string=select_string, sampling_clause=sampling_clause)
"""
Explanation: Split dataset into a train and eval set.
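The FARM_FINGERPRINT-modulo trick above gives a repeatable split: hashing a stable key always lands a row in the same bucket, so no random seed is needed. A hypothetical Python analogue (using MD5 instead of FarmHash):

```python
import hashlib

# Same key -> same bucket on every run, so the train/eval assignment
# is deterministic.
def split_bucket(key, buckets=100):
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

for key in ["2015-01-04_2015-12-27", "2016-02-07_2017-01-29"]:
    subset = "train" if split_bucket(key) < 80 else "eval"
    print(key, "->", subset)
```

Bucketing on the sequence start/end dates, as the query does, also guarantees a given window never lands in both sets.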
End of explanation
"""
%%with_globals
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_forecasting.nyc_real_estate
OPTIONS(model_type = "linear_reg",
input_label_cols = ["price_ahead_1"]) AS
{bqml_train_query}
"""
Explanation: Create model
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
  *
FROM
ML.TRAINING_INFO(MODEL `bqml_forecasting.nyc_real_estate`)
"""
Explanation: Get training statistics
Because the query uses a CREATE MODEL statement to create a table, you do not see query results. The output is an empty string.
To get the training results we use the ML.TRAINING_INFO function.
Have a look at Step Three and Four of this tutorial to see a similar example.
End of explanation
"""
%%with_globals
%%bigquery --project $PROJECT
#standardSQL
SELECT
  *
FROM
ML.EVALUATE(MODEL `bqml_forecasting.nyc_real_estate`, ({bqml_eval_query}))
"""
Explanation: 'eval_loss' is reported as mean squared error, so our RMSE is 291178. Your results may vary.
End of explanation
"""
%%with_globals
%%bigquery --project $PROJECT df
#standardSQL
SELECT
predicted_price_ahead_1
FROM
ML.PREDICT(MODEL `bqml_forecasting.nyc_real_estate`, ({bqml_eval_query}))
"""
Explanation: Predict
To use our model to make predictions, we use ML.PREDICT. Let's, use the nyc_real_estate you trained above to infer median sales price of all of our data.
Have a look at Step Five of this tutorial to see another example.
End of explanation
"""
# Construct a BigQuery client object.
client = bigquery.Client()
# Set dataset_id to the ID of the dataset to create.
sink_dataset_name = "temp_forecasting_dataset"
dataset_id = "{}.{}".format(client.project, sink_dataset_name)
# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset.from_string(dataset_id)
# Specify the geographic location where the dataset should reside.
dataset.location = "US"
# Send the dataset to the API for creation.
# Raises google.api_core.exceptions.Conflict if the Dataset already
# exists within the project.
try:
dataset = client.create_dataset(dataset) # API request
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
except:
print("Dataset {}.{} already exists".format(
client.project, dataset.dataset_id))
"""
Explanation: TensorFlow Sequence Model
If you might want to use a more custom model, then Keras or TensorFlow may be helpful. Below we are going to create a custom LSTM sequence-to-one model that will read our input data in via CSV files and will train and evaluate.
Create temporary BigQuery dataset
End of explanation
"""
# Call BigQuery and examine in dataframe
source_dataset = "nyc_real_estate"
source_table_name = "all_sales"
query_create_date_range = scalable_time_series.create_date_range(
client.project, source_dataset, source_table_name)
df = client.query(query_create_date_range + "LIMIT 100").to_dataframe()
df.head(5)
"""
Explanation: We need to create a date range table in BigQuery so that we can join our data to that to get the correct sequences.
End of explanation
"""
job_config = bigquery.QueryJobConfig()
# Set the destination table
table_name = "start_end_timescale_date_range"
table_ref = client.dataset(sink_dataset_name).table(table_name)
job_config.destination = table_ref
job_config.write_disposition = "WRITE_TRUNCATE"
# Start the query, passing in the extra configuration.
query_job = client.query(
query=query_create_date_range,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
"""
Explanation: Execute query and write to BigQuery table.
End of explanation
"""
# Call BigQuery and examine in dataframe
sales_dataset_table = source_dataset + "." + source_table_name
downsample_size = 7
query_csv_sub_seqs = scalable_time_series.csv_create_rolling_features_label(
client.project, sink_dataset_name, table_name, sales_dataset_table,
value_name, downsample_size, window_size, horizon, labels_size)
df = client.query(query_csv_sub_seqs + "LIMIT 100").to_dataframe()
df.head(20)
"""
Explanation: Now that we have the date range table created we can create our training dataset.
End of explanation
"""
job_config = bigquery.QueryJobConfig()
csv_select_list = "med_sales_price_agg, labels_agg"
for step in ["train", "eval"]:
if step == "train":
selquery = "SELECT {csv_select_list} FROM ({}) WHERE {} < 80".format(
query_csv_sub_sequences, sampling_clause)
else:
selquery = "SELECT {csv_select_list} FROM ({}) WHERE {} >= 80".format(
query_csv_sub_sequences, sampling_clause)
# Set the destination table
table_name = "nyc_real_estate_{}".format(step)
table_ref = client.dataset(sink_dataset_name).table(table_name)
job_config.destination = table_ref
job_config.write_disposition = "WRITE_TRUNCATE"
# Start the query, passing in the extra configuration.
query_job = client.query(
query=selquery,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
"""
Explanation: Now let's write the our training data table to BigQuery for train and eval so that we can export to CSV for TensorFlow.
End of explanation
"""
dataset_ref = client.dataset(
dataset_id=sink_dataset_name, project=client.project)
for step in ["train", "eval"]:
destination_uri = "gs://{}/{}".format(
BUCKET, "forecasting/nyc_real_estate/data/{}*.csv".format(step))
table_name = "nyc_real_estate_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, sink_dataset_name, table_name, destination_uri))
!gsutil -m cp gs://asl-testing-bucket/forecasting/nyc_real_estate/data/*.csv .
!head train*.csv
"""
Explanation: Export BigQuery table to CSV in GCS.
End of explanation
"""
import os
PROJECT = PROJECT # REPLACE WITH YOUR PROJECT ID
BUCKET = BUCKET # REPLACE WITH A BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR REGION e.g. us-central1
# Import os environment variables
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TF_VERSION"] = "1.13"
os.environ["SEQ_LEN"] = str(WINDOW_SIZE)
%%bash
OUTDIR=gs://$BUCKET/forecasting/nyc_real_estate/trained_model
JOBNAME=nyc_real_estate$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$PWD/tf_module/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TF_VERSION \
-- \
--train_file_pattern="gs://asl-testing-bucket/forecasting/nyc_real_estate/data/train*.csv" \
--eval_file_pattern="gs://asl-testing-bucket/forecasting/nyc_real_estate/data/eval*.csv" \
--output_dir=$OUTDIR \
--job-dir=./tmp \
--seq_len=$SEQ_LEN \
--train_batch_size=32 \
--eval_batch_size=32 \
--train_steps=1000 \
--learning_rate=0.01 \
--start_delay_secs=60 \
--throttle_secs=60 \
--lstm_hidden_units="32,16,8"
"""
Explanation: Train TensorFlow on Google Cloud AI Platform.
End of explanation
"""
freininghaus/adventofcode | 2016/day05-python.ipynb | mit
with open("input/day5.txt", "r") as f:
inputLines = [line for line in f]
doorId = bytes(inputLines[0].strip(), "utf-8")
import hashlib
import itertools
"""
Explanation: Day 5: How About a Nice Game of Chess
End of explanation
"""
def interestingHashes(prefix):
for i in itertools.count():
m = hashlib.md5()
m.update(prefix + str(i).encode("utf-8"))
h = m.hexdigest()
if h.startswith("00000"):
yield h
"""
Explanation: Implement a generator for 'interesting' hashes
End of explanation
"""
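As a sanity check on the hashing logic, the puzzle's published example for door ID abc states that index 3231929 is the first to produce a hash starting with five zeroes; we can verify that single hash directly (the index comes from the puzzle text, not from running the generator here):

```python
import hashlib

# Hash the puzzle's example door ID "abc" at the published index 3231929.
# The hash should start with five zeroes, and its 6th hex character ("1")
# would be the first character of the password.
example = hashlib.md5(b"abc" + b"3231929").hexdigest()
assert example.startswith("00000")
assert example[5] == "1"
```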
def password1(prefix):
return "".join(h[5] for h in itertools.islice(interestingHashes(prefix), 8))
password1(doorId)
"""
Explanation: Part 1: Join the 5th character of each of the first 8 interesting hashes
End of explanation
"""
def password2(prefix):
result = [None] * 8
for h in interestingHashes(prefix):
if h[5] in "01234567":
pos = int(h[5])
if result[pos] is None:
result[pos] = h[6]
if all(c is not None for c in result):
return "".join(result)
password2(doorId)
"""
Explanation: Part 2: More complex password algorithm
If the 5th character of an interesting hash is a digit between 0 and 7, it tells us a position in the password.
This position is assigned to the 6th character of the interesting hash.
An interesting hash is ignored if the position given by its 5th character has been seen already.
End of explanation
"""
mortcanty/SARDocker | mohammed.ipynb | mit
%matplotlib inline
"""
Explanation: Test for Mohammed
This container was started with
sudo docker run -d -p 433:8888 --name=sar -v /home/mort/imagery/mohammed/Data:/home/imagery mort/sardocker
End of explanation
"""
ls /home/imagery
"""
Explanation: Here are the RadarSat-2 quadpol coherency matrix image directories as created from the Sentinel-1 Toolbox:
End of explanation
"""
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721079_DK650144_FQ17W_20160403_230258_HH_VV_HV_VH_SLC/
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721080_DK650145_FQ17W_20160427_230257_HH_VV_HV_VH_SLC/
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/
"""
Explanation: To combine the matrix bands into a single GeoTiff image, we run the python script ingestrs2quad.py:
End of explanation
"""
run /home/dispms -f /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/polSAR.tif \
-p [1,6,9]
"""
Explanation: Here is an RGB display of the three diagonal matrix elements of the above image (bands 1,6 and 9):
End of explanation
"""
run /home/enlml /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/polSAR.tif
"""
Explanation: To estimate the equivalent number of looks, run the python script enlml.py:
End of explanation
"""
!/home/sar_seq_rs2quad.sh 20160403 20160427 20160614 [50,50,400,400] 5 0.01
"""
Explanation: So the ENL would appear to be about 5.
To run the sequential change detection on the three images, run the bash script sar_seq_rs2quad.sh. It gathers the three images together and calls the python script sar_seq.py, which does the change detection. By choosing a spatial subset (in this case 400x400), the images are clipped and co-registered to the first image. This might be unnecessary if the images are well registered anyway.
If you have a multicore processor you can enable parallel computation by opening a terminal window in the container (new terminal) and running
ipcluster start -n 4
End of explanation
"""
run /home/dispms \
-f /home/imagery/RS2_OK82571_PK721079_DK650144_FQ17W_20160403_230258_HH_VV_HV_VH_SLC/sarseq(20160403-1-20160614)_cmap.tif -c
"""
Explanation: Here is the change map for the most recent changes:
End of explanation
"""
MAKOSCAFEE/AllNotebooks | BasicMathReview.ipynb | mit
import numpy as np
y = np.array([1,2,3])
x = np.array([2,3,4])
"""
Explanation: 1. Linear Algebra
In the context of deep learning, linear algebra is a mathematical toolbox that offers helpful techniques for manipulating groups of numbers simultaneously. It provides structures like vectors and matrices (spreadsheets) to hold these numbers and new rules for how to add, subtract, multiply, and divide them.
1.1 Vector
A vector of n dimensions is an ordered collection of n coordinates, where each coordinate is a scalar of the
underlying field. An n-dimensional vector v with real coordinates is an element of R^n
End of explanation
"""
y + x
y-x
y/x
"""
Explanation: 1.1.1 Elementwise Operations
End of explanation
"""
np.dot(y,x)
"""
Explanation: 1.1.2 Dot productions
The dot product of two vectors is a scalar. Dot product of vectors and matrices (matrix multiplication) is one of the most important operations in deep learning.
End of explanation
"""
x * y
"""
Explanation: 1.1.3 Hadamard product
This is elementwise multiplication, which results in another vector.
End of explanation
"""
a = np.array([[1,2,3],[4,5,6]])
b = np.array([[1,2,3]])
"""
Explanation: 2. Matrices
A matrix is a rectangular array of scalars. Primarily, an n × m matrix A is used to describe a linear transformation from m dimensions to n dimensions, where the matrix acts as an operator. We describe the dimensions of a matrix as rows by columns.
\begin{split}\begin{bmatrix}
2 & 4 \\
5 & -7 \\
12 & 5 \\
\end{bmatrix}
\begin{bmatrix}
a^2 & 2a & 8 \\
18 & 7a-4 & 10 \\
\end{bmatrix}\end{split}
The first has dimensions (3,2). The second (2,3).
End of explanation
"""
a + 1
"""
Explanation: 2.1 Scalar Operations
Scalar operations with matrices work the same way as they do for vectors. Simply apply the scalar to every element in the matrix — add, subtract, divide, multiply, etc.
\begin{split}\begin{bmatrix}
2 & 3 \\
2 & 3 \\
2 & 3 \\
\end{bmatrix}
+
1
=
\begin{bmatrix}
3 & 4 \\
3 & 4 \\
3 & 4 \\
\end{bmatrix}\end{split}
End of explanation
"""
a = np.array([[1,2],[3,4]])
b = np.array([[3,4],[5,6]])
a + b
b-a
"""
Explanation: 2.2 Elementwise operations
In order to add, subtract, or divide two matrices they must have equal dimensions. We combine corresponding values in an elementwise fashion to produce a new matrix.
\begin{split}\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}
+
\begin{bmatrix}
1 & 2 \\
3 & 4 \\
\end{bmatrix}
=
\begin{bmatrix}
a+1 & b+2 \\
c+3 & d+4 \\
\end{bmatrix}\end{split}
End of explanation
"""
a*b
"""
Explanation: 2.3 Hadamard product
Hadamard product of matrices is an elementwise operation. Values that correspond positionally are multiplied to produce a new matrix.
\begin{split}\begin{bmatrix}
a_1 & a_2 \\
a_3 & a_4 \\
\end{bmatrix}
\odot
\begin{bmatrix}
b_1 & b_2 \\
b_3 & b_4 \\
\end{bmatrix}
=
\begin{bmatrix}
a_1 \cdot b_1 & a_2 \cdot b_2 \\
a_3 \cdot b_3 & a_4 \cdot b_4 \\
\end{bmatrix}\end{split}
AB is a valid matrix product if A is p × q and B is q × r (left matrix has same number of columns
as right matrix has rows).
NOTE: Not all matrices are eligible for multiplication. Here are the rules:
* The number of columns of the 1st matrix must equal the number of rows of the 2nd
* The product of an M x N matrix and an N x K matrix is an M x K matrix. The new matrix takes the rows of the 1st and columns of the 2nd
Matrix multiplication relies on dot product to multiply various combinations of rows and columns. In the image below, taken from Khan Academy’s excellent linear algebra course, each entry in Matrix C is the dot product of a row in matrix A and a column in matrix B
\begin{split}\begin{bmatrix}
a & b \\
c & d \\
e & f \\
\end{bmatrix}
\cdot
\begin{bmatrix}
1 & 2 \\
3 & 4 \\
\end{bmatrix}
=
\begin{bmatrix}
1a + 3b & 2a + 4b \\
1c + 3d & 2c + 4d \\
1e + 3f & 2e + 4f \\
\end{bmatrix}\end{split}
End of explanation
"""
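The symbolic (3,2)·(2,2) product above can be checked numerically with NumPy (the numbers here stand in for a..f):

```python
# Numeric version of the symbolic (3,2)x(2,2) product above.
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])   # 3 x 2
B = np.array([[1, 2],
              [3, 4]])   # 2 x 2: inner dimensions match (2 == 2)
C = A.dot(B)             # result takes A's rows and B's columns: 3 x 2

assert C.shape == (3, 2)
assert C.tolist() == [[7, 10], [15, 22], [23, 34]]
```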
def get_derivative(func, x):
"""Compute the derivative of `func` at the location `x`."""
h = 0.0001 # step size
return (func(x+h) - func(x)) / h # rise-over-run
def f(x): return x**2 # some test function f(x)=x^2
x = 3 # the location of interest
computed = get_derivative(f, x)
actual = 2*x
computed, actual # = 6.0001, 6 # pretty close if you ask me...
"""
Explanation: 2.4 Matrix transpose
Neural networks frequently process weights and inputs of different sizes where the dimensions do not meet the requirements of matrix multiplication. Matrix transpose provides a way to “rotate” one of the matrices so that the operation complies with multiplication requirements and can continue. There are two steps to transpose a matrix:
Rotate the matrix right 90°
Reverse the order of elements in each row (e.g. [a b c] becomes [c b a])
As an example, transpose matrix M into T:
\begin{split}\begin{bmatrix}
a & b \\
c & d \\
e & f \\
\end{bmatrix}
\quad \Rightarrow \quad
\begin{bmatrix}
a & c & e \\
b & d & f \\
\end{bmatrix}\end{split}
3. Calculus
You need to know some basic calculus in order to understand how functions change over time (derivatives), and to calculate the total amount of a quantity that accumulates over a time period (integrals). The language of calculus will allow you to speak precisely about the properties of functions and better understand their behaviour.
3.1 Derivatives
A derivative is an instantaneous rate of change.
End of explanation
"""
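Since the transpose section has no accompanying code cell, here is a minimal NumPy check of the a..f example; NumPy exposes the transpose as the .T attribute:

```python
# The symbolic transpose example above, checked with NumPy's .T attribute.
import numpy as np

M = np.array([["a", "b"],
              ["c", "d"],
              ["e", "f"]])
T = M.T  # rows become columns

assert T.shape == (2, 3)
assert T.tolist() == [["a", "c", "e"], ["b", "d", "f"]]
```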
lemieuxl/pyplink | demo/PyPlink Demo.ipynb | mit
from pyplink import PyPlink
"""
Explanation: PyPlink
PyPlink is a Python module to read and write binary Plink files. Here are small examples for PyPlink.
End of explanation
"""
import zipfile
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Downloading the demo data from the Plink website
urlretrieve(
"http://pngu.mgh.harvard.edu/~purcell/plink/dist/hapmap_r23a.zip",
"hapmap_r23a.zip",
)
# Extracting the archive content
with zipfile.ZipFile("hapmap_r23a.zip", "r") as z:
z.extractall(".")
"""
Explanation: Table of contents
Reading binary pedfile
Getting the demo data
Reading the binary file
Getting dataset information
Iterating over all markers
Additive format
Nucleotide format
Iterating over selected markers
Additive format
Nucleotide format
Extracting a single marker
Additive format
Nucleotide format
Misc example
Extracting a subset of markers and samples
Counting the allele frequency of markers
Writing binary pedfile
SNP-major format
INDIVIDUAL-major-format
Reading binary pedfile
Getting the demo data
The Plink software provides a testing dataset on the resources page. It contains the 270 samples from the HapMap project (release 23) on build NCBI36/hg18.
End of explanation
"""
pedfile = PyPlink("hapmap_r23a")
"""
Explanation: Reading the binary file
To read a binary file, PyPlink only requires the prefix of the files.
End of explanation
"""
print("{:,d} samples and {:,d} markers".format(
pedfile.get_nb_samples(),
pedfile.get_nb_markers(),
))
all_samples = pedfile.get_fam()
all_samples.head()
all_markers = pedfile.get_bim()
all_markers.head()
"""
Explanation: Getting dataset information
End of explanation
"""
for marker_id, genotypes in pedfile:
print(marker_id)
print(genotypes)
break
for marker_id, genotypes in pedfile.iter_geno():
print(marker_id)
print(genotypes)
break
"""
Explanation: Iterating over all markers
<a id="iterating_over_all_additive"></a>
Additive format
Cycling through genotypes as -1, 0, 1 and 2 values, where -1 is unknown, 0 is homozygous (major allele), 1 is heterozygous, and 2 is homozygous (minor allele).
End of explanation
"""
for marker_id, genotypes in pedfile.iter_acgt_geno():
print(marker_id)
print(genotypes)
break
"""
Explanation: <a id="iterating_over_all_nuc"></a>
Nucleotide format
Cycling through genotypes as A, C, G and T values (where 00 is unknown).
End of explanation
"""
markers = ["rs7092431", "rs9943770", "rs1587483"]
for marker_id, genotypes in pedfile.iter_geno_marker(markers):
print(marker_id)
print(genotypes, end="\n\n")
"""
Explanation: Iterating over selected markers
<a id="iterating_over_selected_additive"></a>
Additive format
Cycling through genotypes as -1, 0, 1 and 2 values, where -1 is unknown, 0 is homozygous (major allele), 1 is heterozygous, and 2 is homozygous (minor allele).
End of explanation
"""
markers = ["rs7092431", "rs9943770", "rs1587483"]
for marker_id, genotypes in pedfile.iter_acgt_geno_marker(markers):
print(marker_id)
print(genotypes, end="\n\n")
"""
Explanation: <a id="iterating_over_selected_nuc"></a>
Nucleotide format
Cycling through genotypes as A, C, G and T values (where 00 is unknown).
End of explanation
"""
pedfile.get_geno_marker("rs7619974")
"""
Explanation: Extracting a single marker
<a id="extracting_additive"></a>
Additive format
Cycling through genotypes as -1, 0, 1 and 2 values, where -1 is unknown, 0 is homozygous (major allele), 1 is heterozygous, and 2 is homozygous (minor allele).
End of explanation
"""
pedfile.get_acgt_geno_marker("rs7619974")
"""
Explanation: <a id="extracting_nuc"></a>
Nucleotide format
Cycling through genotypes as A, C, G and T values (where 00 is unknown).
End of explanation
"""
# Getting the Y markers
y_markers = all_markers[all_markers.chrom == 24].index.values
# Getting the males
males = all_samples.gender == 1
# Cycling through the Y markers
for marker_id, genotypes in pedfile.iter_geno_marker(y_markers):
male_genotypes = genotypes[males.values]
print("{:,d} total genotypes".format(len(genotypes)))
print("{:,d} genotypes for {:,d} males ({} on chr{} and position {:,d})".format(
len(male_genotypes),
males.sum(),
marker_id,
all_markers.loc[marker_id, "chrom"],
all_markers.loc[marker_id, "pos"],
))
break
"""
Explanation: Misc example
Extracting a subset of markers and samples
To get all markers on the Y chromosome for the males.
End of explanation
"""
# Getting the founders
founders = (all_samples.father == "0") & (all_samples.mother == "0")
# Computing the MAF
markers = ["rs7619974", "rs2949048", "rs16941434"]
for marker_id, genotypes in pedfile.iter_geno_marker(markers):
valid_genotypes = genotypes[founders.values & (genotypes != -1)]
maf = valid_genotypes.sum() / (len(valid_genotypes) * 2)
print(marker_id, round(maf, 6), sep="\t")
"""
Explanation: Counting the allele frequency of markers
To count the minor allele frequency of a subset of markers (only for founders).
End of explanation
"""
# The genotypes for 3 markers and 10 samples
all_genotypes = [
[0, 0, 0, 1, 0, 0, -1, 2, 1, 0],
[0, 0, 1, 1, 0, 0, 0, 1, 2, 0],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 1],
]
# Writing the BED file using PyPlink
with PyPlink("test_output", "w") as pedfile:
for genotypes in all_genotypes:
pedfile.write_genotypes(genotypes)
# Writing a dummy FAM file
with open("test_output.fam", "w") as fam_file:
for i in range(10):
print("family_{}".format(i+1), "sample_{}".format(i+1), "0", "0", "0", "-9",
sep=" ", file=fam_file)
# Writing a dummy BIM file
with open("test_output.bim", "w") as bim_file:
for i in range(3):
print("1", "marker_{}".format(i+1), "0", i+1, "A", "T",
sep="\t", file=bim_file)
# Checking the content of the newly created binary files
pedfile = PyPlink("test_output")
pedfile.get_fam()
pedfile.get_bim()
for marker, genotypes in pedfile:
print(marker, genotypes)
"""
Explanation: Writing binary pedfile
SNP-major format
The following example shows how to write a binary file using the PyPlink module. The SNP-major format is the default; it means that the binary file is written one marker at a time.
Note that PyPlink only writes the BED file. The user is required to create the FAM and BIM files.
End of explanation
"""
from subprocess import Popen, PIPE
# Computing frequencies
proc = Popen(["plink", "--noweb", "--bfile", "test_output", "--freq"],
stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
print(outs.decode(), end="")
with open("plink.frq", "r") as i_file:
print(i_file.read(), end="")
"""
Explanation: The newly created binary files are compatible with Plink.
End of explanation
"""
# The genotypes for 3 markers and 10 samples (INDIVIDUAL-major)
all_genotypes = [
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 1, 0],
[ 1, 1, 0],
[ 0, 0, 1],
[ 0, 0, 1],
[-1, 0, 0],
[ 2, 1, 0],
[ 1, 2, 0],
[ 0, 0, 1],
]
# Writing the BED file using PyPlink
with PyPlink("test_output_2", "w", bed_format="INDIVIDUAL-major") as pedfile:
for genotypes in all_genotypes:
pedfile.write_genotypes(genotypes)
# Writing a dummy FAM file
with open("test_output_2.fam", "w") as fam_file:
for i in range(10):
print("family_{}".format(i+1), "sample_{}".format(i+1), "0", "0", "0", "-9",
sep=" ", file=fam_file)
# Writing a dummy BIM file
with open("test_output_2.bim", "w") as bim_file:
for i in range(3):
print("1", "marker_{}".format(i+1), "0", i+1, "A", "T",
sep="\t", file=bim_file)
from subprocess import Popen, PIPE
# Computing frequencies
proc = Popen(["plink", "--noweb", "--bfile", "test_output_2", "--freq", "--out", "plink_2"],
stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
print(outs.decode(), end="")
with open("plink_2.frq", "r") as i_file:
print(i_file.read(), end="")
"""
Explanation: INDIVIDUAL-major format
The following examples shows how to write a binary file using the PyPlink module. The INDIVIDUAL-major format means that the binary file is written one sample at a time.
Files in INDIVIDUAL-major format is not readable by PyPlink. You need to convert it using Plink.
Note that PyPlink only writes the BED file. The user is required to create the FAM and BIM files.
End of explanation
"""
wesleybeckner/salty | scripts/molecular_dynamics/therm_cond.ipynb | mit
from keras.layers import Dense, Dropout, Input
from keras.models import Model, Sequential
from keras.optimizers import Adam
import salty
from sklearn import preprocessing
from keras import regularizers
import matplotlib.pyplot as plt
import numpy as np
from keras.callbacks import EarlyStopping
from sklearn.metrics import mean_squared_error
import pandas as pd
import time
import math
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model, model_from_json
from keras.layers import Dense, Dropout, SpatialDropout2D, Flatten, Activation, merge, Input, Masking, BatchNormalization
from keras.layers.core import Lambda
from keras.layers.convolutional import Convolution2D, MaxPooling2D, AveragePooling2D
from keras.layers.pooling import GlobalAveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import PReLU, ELU
from keras.optimizers import Adam, Nadam, RMSprop, SGD
from keras.callbacks import ModelCheckpoint, EarlyStopping, Callback, LearningRateScheduler
from keras.regularizers import l2, l1
from keras.utils import np_utils
from keras import backend as K
import tensorflow as tf
from math import sqrt
import scipy
from sklearn.model_selection import StratifiedKFold, KFold, StratifiedShuffleSplit, ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc, confusion_matrix,mean_squared_error,r2_score
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
"""
Explanation: James' Salty Tests
Using the Salty package and Keras to model molecular properties.
1. Setting up
1.1 Import necessary packages
End of explanation
"""
devmodel = salty.aggregate_data(['viscosity','thermal_conductivity','cpt']) # other option is viscosity
X_train, Y_train, X_test, Y_test = salty.devmodel_to_array\
(devmodel, train_fraction=0.8)
import pandas as pd
data = pd.read_csv("C://users/james/miniconda3/envs/research/lib/site-packages/salty/data/thermal_conductivity_premodel.csv")
data.head(5)
data = pd.read_csv("C://users/james/miniconda3/envs/research/lib/site-packages/salty/data/electrical_conductivity_premodel.csv")
data.head(5)
"""
Explanation: 1.2 Data Pre-processing
This step is performed using the salty package to aggregate property data and organize them into training and testing sets. Salty takes care of all pre-processing.
End of explanation
"""
print("X_train.shape: ", X_train.shape)
print("Y_train.shape: ", Y_train.shape)
print("X_test.shape: ", X_test.shape)
print("Y_test.shape: ", Y_test.shape)
"""
Explanation: Check to see that the dimensions make sense:
End of explanation
"""
early = EarlyStopping(monitor='loss', patience=50, verbose=2)
mlp_input = Input(shape=(int(X_train.shape[1]),)) #returns an input tensor. Special because you just specify the shape.
#Use l2 regularization instead of dropout. Both are methods of regularization (preventing overfitting)
#l2 just seems to work better.
x = Dense(100, kernel_initializer='glorot_normal', activation='relu',kernel_regularizer=regularizers.l2(0.01))(mlp_input)
#x = Dropout(0.5)(x)
x = Dense(3, activation='linear')(x)
model = Model(mlp_input, x) #input = mlp_input, output = x.
model.compile(optimizer="Adam", loss="mean_squared_error", metrics=['mse'])
history = model.fit(X_train,Y_train, validation_split = 0.2, epochs=1000, verbose=0, callbacks=[early])
scores = model.evaluate(X_test, Y_test, verbose=2)
print(model.summary())
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
"""
Explanation: 2. Build the model
Steps in building a working model:
Define the model. (In terms of layers using Dense).
Compile the model. (model.compile) (How to learn, what to learn).
Train the model. (model.fit)
Evaluate the model. (model.evaluate) (Calculates the loss).
Predict using the model. (model.predict)
2.1 Functional API or Sequential?
Which to choose?
Functional API is used over Sequential model for multi-input and/or multi-output models. Sequential is for single-input, single-output.
"With the functional API, it is easy to reuse trained models: you can treat any model as if it were a layer, by calling it on a tensor. Note that by calling a model you aren't just reusing the architecture of the model, you are also reusing its weights."
A layer instance is callable (on a tensor), and it returns a tensor.
Input tensor(s) and output tensor(s) can then be used to define a Model.
Such a model can be trained just like Keras Sequential models.
In the code below, there is one hidden layer with 100 nodes, and one output layer with 3 nodes. A fully connected layer is called Dense: it connects every node in its input to every node in its output, and you specify only the output dimension.
Dropout:
Dropout is a method of regularization to prevent overfitting. These neurons are only dropped during training. In the end they are all there. Specifies fraction of neurons to drop during each epoch. One way to consider this is that the neurons weights are set to 0 during the epoch.
Layers:
Layers are basically functions that contain an internal state called weights that can be trainable or not. When we fit (train) a model, we are changing the weights.
End of explanation
"""
print(history.history.keys())
plt.plot(history.history['mean_squared_error'])
plt.plot(history.history['val_mean_squared_error'])
plt.title('model mse')
plt.ylabel('mse')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
"""
Explanation: Learning curves to see how MSE changes over epochs. The 'model loss' and 'model mse' plots differ slightly because the loss includes the l2 regularization penalty on the weights, while the mean_squared_error metric does not.
End of explanation
"""
def create_model(optimizer = 'Adam', init = 'glorot_normal'):
    mlp_input = Input(shape=(int(X_train.shape[1]),)) #returns an input tensor.
    #layer instance is called on tensor, and returns tensor.
    #use the function arguments so GridSearchCV can actually vary them
    x = Dense(150, kernel_initializer=init, activation="relu")(mlp_input)
    x = Dropout(0.5)(x)
    x = Dense(3,activation = 'linear')(x)
    model = Model(mlp_input, x)
    model.compile(optimizer=optimizer,
                  loss="mean_squared_error",
                  metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model, batch_size=10, verbose=2)
# Write this to test different optimizers:
# optimizer = ['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam']
# param_grid = dict(optimizer=optimizer)
param_grid = dict(epochs=[10,20])
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(X_train, Y_train)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
"""
Explanation: 2.2 Selecting best model using GridSearch
GridSearch examines each combination in param_grid to find the best combination. Below is a simple test that tests 10 and 20 epochs. You can also test different models.
In addition to grid search, there is also random search and model-based search; scikit-learn provides RandomizedSearchCV for random search.
End of explanation
"""
early = EarlyStopping(monitor='loss', patience=50, verbose=2)
seed = 7
np.random.seed(seed)
kf = KFold(n_splits=5, shuffle=True, random_state=seed)
cvscores = []
for train, test in kf.split(X_train, Y_train):
# create model
# mlp_input = Input(shape=(int(X_train.shape[1]),))
# x = Dense(150, kernel_initializer='glorot_normal', activation="relu")(mlp_input)
# x = BatchNormalization()(x)
# x = Dropout(0.5)(x)
# x = Dense(3,activation = 'linear')(x)
# model = Model(mlp_input, x)
# model.compile(optimizer="Adam",
# loss="mean_squared_error",
# metrics=['accuracy', 'mse'])
# # Fit the model. Note that the train and test sets are different for each split. Each fraction will used as the validation
# #set eventually.
# model.fit(X_train[train], Y_train[train], validation_data=(X_train[test],Y_train[test]),epochs=100,
# callbacks = [early], batch_size=10, verbose=0)
# # evaluate the model
# scores = model.evaluate(X_train[test], Y_train[test], verbose=0)
# cvscores.append(scores[1] * 100)
mlp_input = Input(shape=(int(X_train.shape[1]),)) #returns an input tensor. Special because you just specify the shape.
x = Dense(100, kernel_initializer='glorot_normal', activation='relu', kernel_regularizer = regularizers.l2(0.01))(mlp_input)
#x = Dropout(0.5)(x)
x = Dense(3, activation='linear')(x)
model = Model(mlp_input, x) #input = mlp_input, output = x.
model.compile(optimizer="Adam", loss="mean_squared_error", metrics=['mse'])
model.fit(X_train[train],Y_train[train],validation_data=(X_train[test],Y_train[test]), callbacks=[early], epochs=100)
scores = model.evaluate(X_train[test], Y_train[test], verbose=0)
cvscores.append(scores[1] * 100)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
"""
Explanation: 2.3 K-Fold Cross-Validation on Keras
K-Fold CV splits the data set into K folds, trains on K-1 of them and tests on the remaining one, repeating K times so that every fold serves as the test set exactly once.
Below is an example using 5 splits (train on 4/5, test on 1/5 of the data in each round). This is a method of evaluating the accuracy of your model.
End of explanation
"""
print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))
cvscores
"""
Explanation: Here are the mean accuracy and standard deviation of accuracy across the k splits:
End of explanation
"""
import pandas as pd #import pandas so we can make a dataframe.
#Set X_train and Y_train to X and Y for simplicity when writing code for graphing and such.
X = X_train
Y = Y_train
#Define the RMSE and R^2 functions.
def rmse(y,y_pred):
rms=np.sqrt(mean_squared_error(y,y_pred))
return rms
def r2(y,y_pred):
r2 = r2_score(y,y_pred)
return r2
Y_pred = model.predict(X_test) #X_test is a subset of the original data saved for testing. Y as predicted from these vals.
#Creates a pandas dataframe to easily visualize R^2 and RMSE of each property fit.
df = pd.DataFrame({"RMSE": [rmse(Y_test[:,0],Y_pred[:,0]), rmse(Y_test[:,1], Y_pred[:,1]), rmse(Y_test[:,2], Y_pred[:,2])],
"$R^2$": [r2(Y_test[:,0],Y_pred[:,0]), r2(Y_test[:,1],Y_pred[:,1]), r2(Y_test[:,2],Y_pred[:,2])],
"Property": ['Viscosity', 'THERMAL CONDUCTIVITY', '$C_{pt}$ $(K/J/mol)$']})
#Make the 3 plots.
with plt.style.context('seaborn-whitegrid'):
fig = plt.figure(figsize=(5, 2.5), dpi=300)
ax = fig.add_subplot(131)
ax.plot([-20, 20], [-20, 20], linestyle="-", label=None, c="black", linewidth=1)
ax.plot(np.exp(Y)[:, 0], np.exp(model.predict(X))[:, 0], \
marker="*", linestyle="", alpha=0.4)
ax.set_ylabel("Predicted Viscosity")
ax.set_xlabel("Actual Viscosity")
#ax.text(0.1,.9,"R: {0:5.3f}".format(multi_model.score(X,Y)), transform = ax.transAxes)
plt.xlim(0, 10)
plt.ylim(0,10)
ax.grid()
ax = fig.add_subplot(132)
ax.plot([0, 0.5], [0, 0.5], linestyle="-", label=None, c="black", linewidth=1)
ax.plot(np.exp(Y)[:, 1], np.exp(model.predict(X))[:, 1], \
marker="*", linestyle="", alpha=0.4)
ax.set_ylabel("Predicted THERMAL CONDUCTIVITY")
ax.set_xlabel("Actual THERMAL CONDUCTIVITY")
plt.xlim(0,0.5)
plt.ylim(0,0.5)
ax.grid()
ax = fig.add_subplot(133)
ax.plot([0, 2000],[0,2000],linestyle="-",label=None,c="black",linewidth=1)
ax.plot(np.exp(Y)[:,2],np.exp(model.predict(X))[:,2],\
marker="*",linestyle="",alpha=0.4)
ax.set_ylabel("Predicted $C_{pt}$ $(K/J/mol)$")
ax.set_xlabel("Actual $C_{pt}$ $(K/J/mol)$")
plt.xlim(0,2000)
plt.ylim(0,2000)
ax.grid()
plt.tight_layout()
"""
Explanation: 3. Visualize predicted results
Note that the plots are exponential plots. This is why there are no negative values.
End of explanation
"""
df.set_index(['Property'])
"""
Explanation: $R^2$ and $RMSE$ values for each property are displayed below:
End of explanation
"""
|
akseshina/dl_course | seminar_3/AlexNet.ipynb | gpl-3.0 | import cifar10
# Supporting imports used throughout this notebook.
import os
import math
import time
from datetime import timedelta
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import prettytensor as pt
from sklearn.metrics import confusion_matrix
"""
Explanation: Load Data
End of explanation
"""
cifar10.maybe_download_and_extract()
"""
Explanation: The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
class_names = cifar10.load_class_names()
class_names
"""
Explanation: Load the class-names.
End of explanation
"""
images_train, cls_train, labels_train = cifar10.load_training_data()
"""
Explanation: Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
End of explanation
"""
images_test, cls_test, labels_test = cifar10.load_test_data()
"""
Explanation: Load the test-set.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))
"""
Explanation: The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
End of explanation
"""
from cifar10 import img_size, num_channels, num_classes
"""
Explanation: The data dimensions are used in several places in the source-code below. They have already been defined in the cifar10 module, so we just need to import them.
End of explanation
"""
img_size_cropped = 24
"""
Explanation: The images are 32 x 32 pixels, but we will crop the images to 24 x 24 pixels.
End of explanation
"""
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true) == 9
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing if we need to print ensemble and best-net.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
# Plot image.
ax.imshow(images[i, :, :, :],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
ax.set_xlabel(xlabel)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
"""
Explanation: Function used to plot 9 images in a 3x3 grid, writing the true and predicted classes below each image.
End of explanation
"""
images = images_test[0:9]
cls_true = cls_test[0:9]
plot_images(images=images, cls_true=cls_true, smooth=False)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
plot_images(images=images, cls_true=cls_true, smooth=True)
x = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, axis=1)
"""
Explanation: The pixelated images above are what the neural network will get as input. The images might be a bit easier for the human eye to recognize if we smooth the pixels.
End of explanation
"""
def pre_process_image(image, training):
# This function takes a single image as input,
# and a boolean whether to build the training or testing graph.
if training:
# Randomly crop the input image.
image = tf.random_crop(image, size=[img_size_cropped, img_size_cropped, num_channels])
# Randomly flip the image horizontally.
image = tf.image.random_flip_left_right(image)
# Randomly adjust hue, contrast and saturation.
image = tf.image.random_hue(image, max_delta=0.05)
image = tf.image.random_contrast(image, lower=0.3, upper=1.0)
image = tf.image.random_brightness(image, max_delta=0.2)
image = tf.image.random_saturation(image, lower=0.0, upper=2.0)
# Some of these functions may overflow and result in pixel
# values beyond the [0, 1] range. A simple solution is to limit the range.
image = tf.minimum(image, 1.0)
image = tf.maximum(image, 0.0)
else:
# Crop the input image around the centre so it is the same
# size as images that are randomly cropped during training.
image = tf.image.resize_image_with_crop_or_pad(image,
target_height=img_size_cropped,
target_width=img_size_cropped)
return image
"""
Explanation: Data augmentation for images
End of explanation
"""
def pre_process(images, training):
# Use TensorFlow to loop over all the input images and call
# the function above which takes a single image as input.
images = tf.map_fn(lambda image: pre_process_image(image, training), images)
return images
"""
Explanation: The function above is called for each image in the input batch using the following function.
End of explanation
"""
distorted_images = pre_process(images=x, training=True)
"""
Explanation: In order to plot the distorted images, we create the pre-processing graph for TensorFlow, so we may execute it later.
End of explanation
"""
def main_network(images, training):
images = tf.cast(images, tf.float32)
x_pretty = pt.wrap(images)
if training:
phase = pt.Phase.train
else:
phase = pt.Phase.infer
# Can't wrap it to pretty tensor because
# 'Layer' object has no attribute 'local_response_normalization'
normalize = lambda x: pt.wrap(
tf.nn.local_response_normalization(x, depth_radius=5.0, bias=2.0, alpha=1e-4, beta=0.75))
with pt.defaults_scope(activation_fn=tf.nn.relu, phase=phase):
layers = []
for i in ["left", "right"]:
first_conv = x_pretty.\
conv2d(kernel=5, depth=48, name='conv_1_' + i)
first_conv_norm = normalize(first_conv)
first_conv_norm_pool = first_conv_norm.\
max_pool(kernel=3, stride=2, edges='VALID', name='pool_1_' + i)
second_conv = first_conv_norm_pool.\
conv2d(kernel=3, depth=128, bias=tf.ones_initializer(), name='conv_2_' + i)
second_conv_norm = normalize(second_conv)
second_conv_norm_pooled = pt.wrap(second_conv_norm).\
max_pool(kernel=2, stride=2, edges='VALID', name='pool_2_' + i)
layers.append(second_conv_norm_pooled)
first_interlayer = pt.wrap(tf.concat([layers[-2], layers[-1]], axis=3))
for i in ["left", "right"]:
cur_layer = first_interlayer.\
conv2d(kernel=3, depth=192, name='conv_3_' + i).\
conv2d(kernel=3, depth=192, name='conv_4_' + i).\
conv2d(kernel=3, depth=128, name='conv_5_' + i).\
max_pool(kernel=3, stride=2, edges='VALID', name='pool_3_' + i)
layers.append(cur_layer)
second_interlayer = pt.wrap(tf.concat([layers[-2], layers[-1]], axis=3))
print(second_interlayer.shape)
y_pred, loss = second_interlayer.\
flatten().\
fully_connected(1024, name='fully_conn_1').\
dropout(0.2, name='dropout_1').\
fully_connected(512, name='fully_conn_2').\
dropout(0.2, name='dropout_2').\
fully_connected(10, name='fully_conn_3').\
softmax_classifier(num_classes=num_classes, labels=y_true)
return y_pred, loss
"""
Explanation: Creating Main Processing
https://github.com/google/prettytensor/blob/master/prettytensor/pretty_tensor_image_methods.py
End of explanation
"""
def create_network(training):
# Wrap the neural network in the scope named 'network'.
# Create new variables during training, and re-use during testing.
with tf.variable_scope('network', reuse=not training):
images = x
images = pre_process(images=images, training=training)
y_pred, loss = main_network(images=images, training=training)
return y_pred, loss
"""
Explanation: Creating Neural Network
Note that the neural network is enclosed in the variable-scope named 'network'. We actually create two neural networks in the TensorFlow graph; by sharing a variable-scope, the variables optimized for the training-network are re-used by the second network that is used for testing.
End of explanation
"""
global_step = tf.Variable(initial_value=0,
name='global_step', trainable=False)
"""
Explanation: Create Neural Network for Training Phase
Note that trainable=False which means that TensorFlow will not try to optimize this variable.
End of explanation
"""
_, loss = create_network(training=True)
"""
Explanation: Create the neural network to be used for training. The create_network() function returns both y_pred and loss, but we only need the loss-function during training.
End of explanation
"""
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step=global_step)
"""
Explanation: Create an optimizer which will minimize the loss-function. Also pass the global_step variable to the optimizer so it will be increased by one after each iteration.
End of explanation
"""
y_pred, _ = create_network(training=False)
"""
Explanation: Create Neural Network for Test Phase / Inference
Now create the neural network for the test-phase. Once again the create_network() function returns the predicted class-labels y_pred for the input images, as well as the loss-function to be used during optimization. During testing we only need y_pred.
End of explanation
"""
y_pred_cls = tf.argmax(y_pred, axis=1)
"""
Explanation: We then calculate the predicted class number as an integer. The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
End of explanation
"""
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
"""
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
"""
saver = tf.train.Saver()
"""
Explanation: Saver
In order to save the variables of the neural network, so they can be reloaded quickly without having to train the network again, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.
End of explanation
"""
def get_weights_variable(layer_name):
with tf.variable_scope("network/" + layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
"""
Explanation: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used names such as conv_1_left and conv_1_right for the convolutional layers. These are also called variable scopes. Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
End of explanation
"""
weights_conv1 = get_weights_variable(layer_name='conv_1_left')
weights_conv2 = get_weights_variable(layer_name='conv_1_right')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(weights_conv1).shape)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(weights_conv2).shape)
"""
Explanation: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
End of explanation
"""
def get_layer_output(layer_name):
# The name of the last operation of the convolutional layer.
# This assumes you are using Relu as the activation-function.
tensor_name = "network/" + layer_name + "/Relu:0"
# Get the tensor with this name.
tensor = tf.get_default_graph().get_tensor_by_name(tensor_name)
return tensor
"""
Explanation: Getting the Layer Outputs
Similarly we also need to retrieve the outputs of the convolutional layers. The function for doing this is slightly different from the one above for getting the weights: here we instead retrieve the last tensor output by the convolutional layer.
End of explanation
"""
output_conv1 = get_layer_output(layer_name='conv_1_left')
output_conv2 = get_layer_output(layer_name='conv_1_right')
"""
Explanation: Get the output of the convoluational layers so we can plot them later.
End of explanation
"""
# to prevent tensorflow from allocating the totality of a GPU memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
session = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
"""
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
save_dir = 'checkpoints_alex_net/'
"""
Explanation: Restore or initialize variables
End of explanation
"""
if not os.path.exists(save_dir):
os.makedirs(save_dir)
"""
Explanation: Create the directory if it does not exist.
End of explanation
"""
save_path = os.path.join(save_dir, 'cifar10_cnn')
"""
Explanation: This is the base-filename for the checkpoints, TensorFlow will append the iteration number, etc.
End of explanation
"""
try:
print("Trying to restore last checkpoint ...")
last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=save_dir)
saver.restore(session, save_path=last_chk_path)
print("Restored checkpoint from:", last_chk_path)
except:
print("Failed to restore checkpoint. Initializing variables instead.")
session.run(tf.global_variables_initializer())
"""
Explanation: First try to restore the latest checkpoint. This may fail and raise an exception e.g. if such a checkpoint does not exist, or if you have changed the TensorFlow graph.
End of explanation
"""
train_batch_size = 64
"""
Explanation: Helper-function to get a random training-batch
There are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
"""
def random_batch():
num_images = len(images_train)
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
x_batch = images_train[idx, :, :, :]
y_batch = labels_train[idx, :]
return x_batch, y_batch
"""
Explanation: Function for selecting a random batch of images from the training-set.
End of explanation
"""
def optimize(num_iterations):
start_time = time.time()
for i in range(num_iterations):
x_batch, y_true_batch = random_batch()
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
i_global, _ = session.run([global_step, optimizer],
feed_dict=feed_dict_train)
if (i_global % 200 == 0) or (i == num_iterations - 1):
batch_acc = session.run(accuracy,
feed_dict=feed_dict_train)
msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
print(msg.format(i_global, batch_acc))
# Save a checkpoint to disk every 1000 iterations (and last).
if (i_global % 1000 == 0) or (i == num_iterations - 1):
saver.save(session,
save_path=save_path,
global_step=global_step)
print("Saved checkpoint.")
end_time = time.time()
time_dif = end_time - start_time
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
"""
Explanation: Optimization
The progress is printed every 200 iterations. A checkpoint is saved every 1000 iterations and also after the last iteration.
End of explanation
"""
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
incorrect = (correct == False)
images = images_test[incorrect]
cls_pred = cls_pred[incorrect]
cls_true = cls_test[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
"""
Explanation: Plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
"""
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
# Print the confusion matrix as text.
for i in range(num_classes):
class_name = "({}) {}".format(i, class_names[i])
print(cm[i, :], class_name)
# Print the class-numbers for easy reference.
class_numbers = [" ({0})".format(i) for i in range(num_classes)]
print("".join(class_numbers))
"""
Explanation: Plot confusion matrix
End of explanation
"""
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
num_images = len(images)
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
"""
Explanation: Calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
"""
def predict_cls_test():
return predict_cls(images = images_test,
labels = labels_test,
cls_true = cls_test)
"""
Explanation: Calculate the predicted class for the test-set.
End of explanation
"""
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
# Return the classification accuracy
# and the number of correct classifications.
return correct.mean(), correct.sum()
"""
Explanation: Helper-functions for the classification accuracy
End of explanation
"""
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = classification_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
"""
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
Computing the classifications for all the images in the test-set takes a while, so they are calculated once here and passed to the plotting functions above, rather than being recalculated by each function separately.
End of explanation
"""
def plot_conv_weights(weights, input_channel=0):
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
print("Min: {0:.5f}, Max: {1:.5f}".format(w.min(), w.max()))
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
abs_max = max(abs(w_min), abs(w_max))
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=-abs_max, vmax=abs_max,
interpolation='nearest', cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
"""
Explanation: Helper-function for plotting convolutional weights
End of explanation
"""
def plot_layer_output(layer_output, image):
feed_dict = {x: [image]}
# Retrieve the output of the layer after inputting this image.
values = session.run(layer_output, feed_dict=feed_dict)
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
values_min = np.min(values)
values_max = np.max(values)
# Number of image channels output by the conv. layer.
num_images = values.shape[3]
# Number of grid-cells to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_images))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid image-channels.
if i<num_images:
# Get the images for the i'th output channel.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, vmin=values_min, vmax=values_max,
interpolation='nearest', cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
"""
Explanation: Helper-function for plotting the output of convolutional layers
End of explanation
"""
def plot_distorted_image(image, cls_true):
# Repeat the input image 9 times.
image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0)
feed_dict = {x: image_duplicates}
# Calculate only the pre-processing of the TensorFlow graph
# which distorts the images in the feed-dict.
result = session.run(distorted_images, feed_dict=feed_dict)
plot_images(images=result, cls_true=np.repeat(cls_true, 9))
"""
Explanation: Examples of distorted input images
In order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images.
This is a helper-function for plotting distorted input images.
End of explanation
"""
def get_test_image(i):
return images_test[i, :, :, :], cls_test[i]
"""
Explanation: Helper-function for getting an image and its class-number from the test-set.
End of explanation
"""
img, cls = get_test_image(16)
"""
Explanation: Get an image and its true class from the test-set.
End of explanation
"""
plot_distorted_image(img, cls)
"""
Explanation: Plot 9 random distortions of the image. If you re-run this code you will get slightly different results.
End of explanation
"""
tf.summary.FileWriter('graphs', session.graph)
# if False:
optimize(num_iterations=100000)
"""
Explanation: Perform optimization
End of explanation
"""
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
"""
Explanation: Results
Examples of mis-classifications are plotted below.
End of explanation
"""
plot_conv_weights(weights=weights_conv1, input_channel=0)
"""
Explanation: Convolutional Weights
The following shows some of the weights (or filters) for the first convolutional layer. There are 3 input channels so there are 3 of these sets, which you may plot by changing the input_channel.
Note that positive weights are red and negative weights are blue.
End of explanation
"""
def plot_image(image):
fig, axes = plt.subplots(1, 2)
ax0 = axes.flat[0]
ax1 = axes.flat[1]
ax0.imshow(image, interpolation='nearest')
ax1.imshow(image, interpolation='spline16')
ax0.set_xlabel('Raw')
ax1.set_xlabel('Smooth')
plt.show()
"""
Explanation: Output of convolutional layers
Helper-function for plotting an image.
End of explanation
"""
img, cls = get_test_image(16)
plot_image(img)
"""
Explanation: Plot an image from the test-set. The raw pixelated image is used as input to the neural network.
End of explanation
"""
plot_layer_output(output_conv1, image=img)
"""
Explanation: Use the raw image as input to the neural network and plot the output of the first convolutional layer.
End of explanation
"""
plot_layer_output(output_conv2, image=img)
"""
Explanation: Using the same image as input to the neural network, now plot the output of the other first-layer convolutional tower (conv_1_right).
End of explanation
"""
label_pred, cls_pred = session.run([y_pred, y_pred_cls],
feed_dict={x: [img]})
"""
Explanation: Predicted class-labels
Get the predicted class-label and class-number for this image.
End of explanation
"""
# Set the rounding options for numpy.
np.set_printoptions(precision=3, suppress=True)
# Print the predicted label.
print(label_pred[0])
class_names[3]
class_names[5]
"""
Explanation: Print the predicted class-label.
End of explanation
"""
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
"""
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation
"""
|
neoscreenager/JupyterNotebookWhirlwindTourOfPython | .ipynb_checkpoints/indic_nlp_examples-checkpoint.ipynb | gpl-3.0 | # The path to the local git repo for Indic NLP library
INDIC_NLP_LIB_HOME="/home/development/anoop/installs/indic_nlp_library"
# The path to the local git repo for Indic NLP Resources
INDIC_NLP_RESOURCES="/usr/local/bin/indicnlp/indic_nlp_resources"
"""
Explanation: Indic NLP Library
The goal of the Indic NLP Library is to build Python-based libraries for common text-processing and Natural Language Processing tasks in Indian languages. Indian languages share a lot of similarity in terms of script, phonology, and syntax, and this library is an attempt to provide a general solution to commonly required toolsets for Indian-language text.
The library provides the following functionalities:
Text Normalization
Script Conversion
Romanization
Indicization
Script Information
Phonetic Similarity
Syllabification
Tokenization
Word Segmenation
Transliteration
Translation
The data resources required by the Indic NLP Library are hosted in a different repository. These resources are required for some modules. You can download from the Indic NLP Resources project.
Pre-requisites
Python 2.7+
Morfessor 2.0 Python Library
Getting Started
----- Set these variables -----
End of explanation
"""
import sys
sys.path.append('{}/src'.format(INDIC_NLP_LIB_HOME))
"""
Explanation: Add Library to Python path
End of explanation
"""
from indicnlp import common
common.set_resources_path(INDIC_NLP_RESOURCES)
"""
Explanation: Export environment variable
export INDIC_RESOURCES_PATH=<path>
OR
set it programmatically
We will use that method for this demo
End of explanation
"""
from indicnlp import loader
loader.load()
"""
Explanation: Initialize the Indic NLP library
End of explanation
"""
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
input_text=u"\u0958 \u0915\u093c"
remove_nuktas=False
factory=IndicNormalizerFactory()
normalizer=factory.get_normalizer("hi",remove_nuktas)
output_text=normalizer.normalize(input_text)
print output_text
print 'Length before normalization: {}'.format(len(input_text))
print 'Length after normalization: {}'.format(len(output_text))
"""
Explanation: Let's actually try out some of the API methods in the Indic NLP library
Many of the API functions require a language code. We use 2-letter ISO 639-1 codes. Some languages do not have assigned 2-letter codes. We use the following two-letter codes for such languages:
Konkani: kK
Manipuri: mP
Bodo: bD
Text Normalization
Text written in Indic scripts display a lot of quirky behaviour on account of varying input methods, multiple representations for the same character, etc.
There is a need to canonicalize the representation of text so that NLP applications can handle the data in a consistent manner. The canonicalization primarily handles the following issues:
- Non-spacing characters like ZWJ/ZWNL
- Multiple representations of Nukta based characters
- Multiple representations of two part dependent vowel signs
- Typing inconsistencies: e.g. use of pipe (|) for poorna virama
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator
input_text=u'राजस्थान'
print UnicodeIndicTransliterator.transliterate(input_text,"hi","pa")
"""
Explanation: Script Conversion
Convert from one Indic script to another. This is a simple script which exploits the fact that Unicode points of various Indic scripts are at corresponding offsets from the base codepoint for that script. The following scripts are supported:
Devanagari (Hindi,Marathi,Sanskrit,Konkani,Sindhi,Nepali), Assamese, Bengali, Oriya, Gujarati, Gurumukhi (Punjabi), Sindhi, Tamil, Telugu, Kannada, Malayalam
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
input_text=u'राजस्थान'
lang='hi'
print ItransTransliterator.to_itrans(input_text,lang)
"""
Explanation: Romanization
Convert script text to Roman text in the ITRANS notation
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
# input_text=u'rajasthAna'
input_text=u'pitL^In'
lang='hi'
x=ItransTransliterator.from_itrans(input_text,lang)
print x
for y in x:
print '{:x}'.format(ord(y))
"""
Explanation: Indicization (ITRANS to Indic Script)
Let's call the conversion of an ITRANS transliteration to an Indic script "Indicization"!
End of explanation
"""
from indicnlp.script import indic_scripts as isc
c=u'क'
lang='hi'
isc.get_phonetic_feature_vector(c,lang)
"""
Explanation: Script Information
Indic scripts have been designed on phonetic principles, and the design and organization of the scripts make it easy to obtain phonetic information about the characters.
Get Phonetic Feature Vector
Each script character is associated with a phonetic feature vector, which encodes the phonetic properties of the character. This is a bit vector which can be obtained as shown below:
End of explanation
"""
sorted(isc.PV_PROP_RANGES.iteritems(),key=lambda x:x[1][0])
"""
Explanation: This fields in this bit vector are (from left to right):
End of explanation
"""
from indicnlp.langinfo import *
c=u'क'
lang='hi'
print 'Is vowel?: {}'.format(is_vowel(c,lang))
print 'Is consonant?: {}'.format(is_consonant(c,lang))
print 'Is velar?: {}'.format(is_velar(c,lang))
print 'Is palatal?: {}'.format(is_palatal(c,lang))
print 'Is aspirated?: {}'.format(is_aspirated(c,lang))
print 'Is unvoiced?: {}'.format(is_unvoiced(c,lang))
print 'Is nasal?: {}'.format(is_nasal(c,lang))
"""
Explanation: You can check the phonetic information database files in Indic NLP resources to know the definition of each of the bits.
For Tamil Script: database
For other Indic Scripts: database
Query Phonetic Properties
Note: The interface below will be deprecated soon and replaced by a new one.
End of explanation
"""
from indicnlp.script import indic_scripts as isc
from indicnlp.script import phonetic_sim as psim
c1=u'क'
c2=u'ख'
c3=u'भ'
lang='hi'
print u'Similarity between {} and {}'.format(c1,c2)
print psim.cosine(
isc.get_phonetic_feature_vector(c1,lang),
isc.get_phonetic_feature_vector(c2,lang)
)
print
print u'Similarity between {} and {}'.format(c1,c3)
print psim.cosine(
isc.get_phonetic_feature_vector(c1,lang),
isc.get_phonetic_feature_vector(c3,lang)
)
"""
Explanation: Get Phonetic Similarity
Using the phonetic feature vectors, we can define phonetic similarity between characters (and the underlying phonemes). The library implements several such similarity measures. Since they are defined over the phonetic feature vectors discussed earlier, users can also implement additional measures.
The implemented similarity measures are:
cosine
dice
jaccard
dot_product
sim1 (Kunchukuttan et al., 2016)
softmax
References
Anoop Kunchukuttan, Pushpak Bhattacharyya, Mitesh Khapra. Substring-based unsupervised transliteration with phonetic and contextual knowledge. SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016) . 2016.
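As a rough illustration of how such measures can be defined over binary feature vectors, here is a minimal numpy sketch (generic textbook definitions for illustration only — the library's psim module has its own implementations and conventions):

```python
import numpy as np

def cosine(v1, v2):
    # angle-based similarity between two feature vectors
    return np.dot(v1, v2) / np.sqrt(np.dot(v1, v1) * np.dot(v2, v2))

def dice(v1, v2):
    # twice the overlap divided by the total number of set bits
    return 2.0 * np.dot(v1, v2) / (np.sum(v1) + np.sum(v2))

def jaccard(v1, v2):
    # overlap divided by the size of the union
    return np.dot(v1, v2) / float(np.sum(np.maximum(v1, v2)))
```

All three measures equal 1 for identical vectors and 0 for vectors with no overlap.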
End of explanation
"""
from indicnlp.script import indic_scripts as isc
from indicnlp.script import phonetic_sim as psim
slang='hi'
tlang='ml'
sim_mat=psim.create_similarity_matrix(psim.cosine,slang,tlang,normalize=False)
c1=u'क'
c2=u'ഖ'
print u'Similarity between {} and {}'.format(c1,c2)
print sim_mat[isc.get_offset(c1,slang),isc.get_offset(c2,tlang)]
"""
Explanation: You may have figured out that you can also compute similarities of characters belonging to different scripts.
You can also get a similarity matrix which contains the similarities between all pairs of characters (within the same script or across scripts).
Let's see how we can compare the characters across Devanagari and Malayalam scripts
End of explanation
"""
slang='hi'
tlang='ml'
sim_mat=psim.create_similarity_matrix(psim.sim1,slang,tlang,normalize=True)
c1=u'क'
c2=u'ഖ'
print u'Similarity between {} and {}'.format(c1,c2)
print sim_mat[isc.get_offset(c1,slang),isc.get_offset(c2,tlang)]
"""
Explanation: Some similarity functions like sim do not generate values in the range [0,1] and it may be more convenient to have the similarity values in the range [0,1]. This can be achieved by setting the normalize paramter to True
End of explanation
"""
from indicnlp.syllable import syllabifier
w=u'जगदीशचंद्र'
lang='hi'
print u' '.join(syllabifier.orthographic_syllabify(w,lang))
"""
Explanation: Orthographic Syllabification
Orthographic Syllabification is an approximate syllabification process for Indic scripts, where CV+ units are defined to be orthographic syllables.
See the following paper for details:
Anoop Kunchukuttan, Pushpak Bhattacharyya. Orthographic Syllable as basic unit for SMT between Related Languages. Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). 2016.
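To make the CV+ idea concrete, here is a toy sketch on romanized (Latin-script) input — a hypothetical helper for illustration, not the library's orthographic_syllabify, which works directly on Indic scripts:

```python
import re

def syllabify_sketch(word, vowels='aeiou'):
    # group each maximal consonant run with the vowel run that follows it (C*V+)
    sylls = re.findall('[^%s]*[%s]+' % (vowels, vowels), word)
    tail = word[len(''.join(sylls)):]  # word-final consonants, if any
    if tail and sylls:
        sylls[-1] += tail
    elif tail:
        sylls = [tail]
    return sylls
```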
End of explanation
"""
from indicnlp.tokenize import indic_tokenize
indic_string=u'अनूप,अनूप?।फोन'
print u'Input String: {}'.format(indic_string)
print u'Tokens: '
for t in indic_tokenize.trivial_tokenize(indic_string):
print t
"""
Explanation: Tokenization
A trivial tokenizer which just tokenizes on the punctuation boundaries. This also includes punctuations for the Indian language scripts (the purna virama and the deergha virama). It returns a list of tokens.
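The idea can be sketched with a single regular expression (an illustrative toy, not the library's actual trivial_tokenize implementation, which covers a wider set of punctuation and scripts):

```python
import re

def tokenize_sketch(text):
    # a word is a run of non-space, non-punctuation characters;
    # each punctuation mark (including the Devanagari danda) is its own token
    return re.findall(r'[^\s,;:?.!।॥]+|[,;:?.!।॥]', text)
```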
End of explanation
"""
from indicnlp.morph import unsupervised_morph
from indicnlp import common
analyzer=unsupervised_morph.UnsupervisedMorphAnalyzer('mr')
indic_string=u'आपल्या हिरड्यांच्या आणि दातांच्यामध्ये जीवाणू असतात .'
analyzed_tokens=analyzer.morph_analyze_document(indic_string.split(' '))
for w in analyzed_tokens:
print w
"""
Explanation: Word Segmentation
Unsupervised morphological analyzers for various Indian languages. Given a word, the analyzer returns its component morphemes.
The analyzer can recognize inflectional and derivational morphemes.
The following languages are supported:
Hindi, Punjabi, Marathi, Konkani, Gujarati, Bengali, Kannada, Tamil, Telugu, Malayalam
Support for more languages will be added soon.
End of explanation
"""
import urllib2
from django.utils.encoding import *
from django.utils.http import *
text=iri_to_uri(urlquote('anoop, ratish kal fone par baat karenge'))
url=u'http://www.cfilt.iitb.ac.in/indicnlpweb/indicnlpws/transliterate_bulk/en/hi/{}/statistical'.format(text)
response=urllib2.urlopen(url).read()
print response
"""
Explanation: Transliteration
We use the BrahmiNet REST API for transliteration.
End of explanation
"""
|
GustavoRP/IA369Z | dev/Apresentação JUPYTER/Apresentacao_JUPYTER.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import matplotlib.pylab as plt
from ipywidgets import *
# Noise variance
var = 0.3
# Training set
train_size = 10
x_train = np.linspace(0,1,train_size)
y_train = np.sin(2*np.pi*x_train) + np.random.normal(0,var,train_size) # signal + noise
# Test set
test_size = 100
x_test= np.linspace(0,1,test_size)
y = np.sin(2*np.pi*x_test)
y_test = y + np.random.normal(0,var,test_size) # signal + noise
# Plot of the noiseless signal and the generated training set
plt.figure()
plt.plot(x_test,y,linewidth = 2.0,label = r'Model: $sin(2 \pi x)$')
plt.scatter(x_train,y_train,color='red',label = "Model + noise")
plt.legend(loc = (0.02, 0.18))
plt.xlabel("x")
plt.ylabel("y")
plt.show()
"""
Explanation: JUPYTER NOTEBOOK
http://jupyter.org/
Project Jupyter is an open source project was born out of the IPython Project in 2014 as it evolved to support interactive data science and scientific computing across all programming languages. Jupyter will always be 100% open source software, free for all to use and released under the liberal terms of the modified BSD license
Features
- Browser-based interface
- Notebook sharing (including remote)
- nbviewer - http://nbviewer.jupyter.org/
- jupyterhub
- GitHub
- Docker
- Support for more than 40 programming languages
- Python
- R
- Julia
- Scala
- etc
- Big data integration
- Apache Spark
- from Python
- R
- Scala
- scikit-learn
- ggplot2
- dplyr
- etc
- Support for $\LaTeX$, videos and images
- Supporting documentation - https://jupyter.readthedocs.io/en/latest/index.html
- Interactivity and widgets - http://jupyter.org/widgets.html
- Exports to - https://ipython.org/ipython-doc/3/notebook/nbconvert.html
- latex
- html
- py/ipynb
- PDF
- Imports .py and .ipynb modules
- Tables
Installation
http://jupyter.readthedocs.io/en/latest/install.html
- Linux
- pip
- pip3 install --upgrade pip
- pip3 install jupyter
- Anaconda
- Windows/ macOS
- Anaconda - https://www.continuum.io/downloads
Use Python 2.7 because it is compatible with the vast majority of packages.
If you want to install more than one version of Python, it is better to create multiple environments.
- To be able to export PDF
- http://pandoc.org/installing.html
USAGE EXAMPLE
Polynomial Curve Fitting
This tutorial explains the concepts of overfitting and regularization through a polynomial curve-fitting example using the least-squares method. Overfitting occurs when the model memorizes the input data, so that it becomes unable to generalize to new data. Regularization is a technique for avoiding overfitting.
The tutorial is an adaptation of the example presented in chapter 1 of the book:
"Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA."
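Before building the least-squares machinery by hand, the phenomenon can be previewed in a few lines with numpy's built-in polynomial fit (a quick sketch; the noise level and random seed are our own choices):

```python
import numpy as np

rng = np.random.RandomState(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

test_mse = {}
for M in (3, 9):
    coefs = np.polyfit(x_train, y_train, M)  # degree-M least-squares fit
    test_mse[M] = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
# the degree-9 polynomial passes (almost) exactly through every noisy training
# point, yet generalizes far worse to the test grid than the degree-3 fit
```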
End of explanation
"""
# Implementation of the least-squares solution
def polynomial_fit(X,T,M):
A = np.power(X.reshape(-1,1),np.arange(0,M+1).reshape(1,-1))
T = T.reshape(-1,1)
W = np.dot(np.linalg.pinv(A),T)
return W.ravel()
"""
Explanation: Data
Observations: $$\boldsymbol{X} =(x_1,x_2,...,x_N)^T$$
Target: $$\boldsymbol{T} =(t_1,t_2,...,t_N)^T$$
Model
$$y(x,\boldsymbol{W})= w_0 + w_1x +w_2x^2+...+w_Mx^M = \sum^M_{j=0}w_jx^j$$
Cost function
Quadratic cost function: $$E(\boldsymbol{W})=\frac{1}{2}\sum_{n=1}^N\left(y(x_n,\boldsymbol{W})-t_n\right)^2$$
Differentiating the cost function and setting the derivative to zero yields the vector W that minimizes the error:
$$ \boldsymbol{W}^* = (\boldsymbol{A}^T\boldsymbol{A})^{-1}\boldsymbol{A}^T\boldsymbol{T}$$
Where A is defined by:
$$\boldsymbol{A} = \begin{bmatrix}
1 & x_{1} & x_{1}^2 & \dots & x_{1}^M \\
1 & x_{2} & x_{2}^2 & \dots & x_{2}^M \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{N} & x_{N}^2 & \dots & x_{N}^M
\end{bmatrix}$$
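The closed-form solution can be sanity-checked against numpy's built-in fit (a small sketch; the variable names are ours):

```python
import numpy as np

x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x)
M = 3

# design matrix A with columns 1, x, x^2, ..., x^M
A = np.power(x.reshape(-1, 1), np.arange(M + 1).reshape(1, -1))
w = np.dot(np.linalg.pinv(A), t)    # W* = (A^T A)^{-1} A^T T via the pseudo-inverse
w_ref = np.polyfit(x, t, M)[::-1]   # numpy returns the highest-degree coefficient first
```

Both fits minimize the same quadratic cost, so the coefficient vectors agree up to numerical precision.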
End of explanation
"""
def plotmodel(M):
coefs = polynomial_fit(x_train, y_train, M)[::-1]
p = np.poly1d(coefs)
plt.figure()
plt.plot(x_test,y,linewidth = 2.0,label = 'Real')
plt.scatter(x_train,y_train,color='red',label= "Train")
plt.xlabel("x")
plt.ylabel(r'y')
y_fit = p(x_test)
plt.plot(x_test,y_fit,linewidth = 2.0,label ="Estimated")
plt.plot(x_test,y_test,'x',color='black',label = "Test")
plt.legend(loc=(0.02,0.02))
interact(plotmodel,M=(0,9,1))
"""
Explanation: Plotting the least-squares result for polynomials of degree 0 to 9. Which one is a good model?
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.22/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb | bsd-3-clause | # Authors: Denis A. Engemann <denis.engemann@gmail.com>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import somato
from mne.baseline import rescale
from mne.stats import bootstrap_confidence_interval
"""
Explanation: Explore event-related dynamics for specific frequency bands
The objective is to show you how to explore spectrally localized
effects. For this purpose we adapt the method described in [1]_ and use it on
the somato dataset. The idea is to track the band-limited temporal evolution
of spatial patterns by using the :term:Global Field Power (GFP) <GFP>.
We first bandpass filter the signals and then apply a Hilbert transform. To
reveal oscillatory activity the evoked response is then subtracted from every
single trial. Finally, we rectify the signals prior to averaging across trials
by taking the magnitude of the Hilbert transform.
Then the :term:GFP is computed as described in [2], using the sum of the
squares but without normalization by the rank.
Baselining is subsequently applied to make the :term:GFPs <GFP> comparable
between frequencies.
The procedure is then repeated for each frequency band of interest and
all :term:GFPs <GFP> are visualized. To estimate uncertainty, non-parametric
confidence intervals are computed as described in [3] across channels.
The advantage of this method over summarizing the Space x Time x Frequency
output of a Morlet Wavelet in frequency bands is relative speed and, more
importantly, the clear-cut comparability of the spectral decomposition (the
same type of filter is used across all bands).
We will use this dataset: somato-dataset
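The channel-wise bootstrap can be sketched with plain numpy — resample channels with replacement and take percentiles of the statistic (a simplified stand-in for mne's bootstrap_confidence_interval, run here on random data):

```python
import numpy as np

rng = np.random.RandomState(0)
n_channels, n_times = 50, 200
data = rng.randn(n_channels, n_times)  # stand-in for band-limited envelopes

def stat_fun(x):
    return np.sum(x ** 2, axis=0)      # sum of squares across channels (GFP)

# percentile bootstrap across channels
boots = np.array([stat_fun(data[rng.randint(0, n_channels, n_channels)])
                  for _ in range(500)])
ci_low, ci_up = np.percentile(boots, [2.5, 97.5], axis=0)
```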
References
.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic
view through the skull (1997). Trends in Neuroscience 20 (1),
pp. 44-49.
.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals,
vol. 108, 328-342, NeuroImage.
.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).
Cambridge University Press, Chapter 11.2.
End of explanation
"""
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# let's explore some frequency bands
iter_freqs = [
('Theta', 4, 7),
('Alpha', 8, 12),
('Beta', 13, 25),
('Gamma', 30, 45)
]
"""
Explanation: Set parameters
End of explanation
"""
# set epoching parameters
event_id, tmin, tmax = 1, -1., 3.
baseline = None
# get the header to extract events
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
frequency_map = list()
for band, fmin, fmax in iter_freqs:
# (re)load the data to save memory
raw = mne.io.read_raw_fif(raw_fname)
raw.pick_types(meg='grad', eog=True) # we just look at gradiometers
raw.load_data()
# bandpass filter
raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.
l_trans_bandwidth=1, # make sure filter params are the same
h_trans_bandwidth=1) # in each band and skip "auto" option.
# epoch
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,
reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
# remove evoked response
epochs.subtract_evoked()
# get analytic signal (envelope)
epochs.apply_hilbert(envelope=True)
frequency_map.append(((band, fmin, fmax), epochs.average()))
del epochs
del raw
"""
Explanation: We create average power time courses for each frequency band
End of explanation
"""
# Helper function for plotting spread
def stat_fun(x):
"""Return sum of squares."""
return np.sum(x ** 2, axis=0)
# Plot
fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)
colors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))
for ((freq_name, fmin, fmax), average), color, ax in zip(
frequency_map, colors, axes.ravel()[::-1]):
times = average.times * 1e3
gfp = np.sum(average.data ** 2, axis=0)
gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))
ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)
ax.axhline(0, linestyle='--', color='grey', linewidth=2)
ci_low, ci_up = bootstrap_confidence_interval(average.data, random_state=0,
stat_fun=stat_fun)
ci_low = rescale(ci_low, average.times, baseline=(None, 0))
ci_up = rescale(ci_up, average.times, baseline=(None, 0))
ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)
ax.grid(True)
ax.set_ylabel('GFP')
ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),
xy=(0.95, 0.8),
horizontalalignment='right',
xycoords='axes fraction')
ax.set_xlim(-1000, 3000)
axes.ravel()[-1].set_xlabel('Time [ms]')
"""
Explanation: Now we can compute the Global Field Power
We can track the emergence of spatial patterns compared to baseline
for each frequency band, with a bootstrapped confidence interval.
We see dominant responses in the Alpha and Beta bands.
End of explanation
"""
|
sbailey/knltest | doc/redrock/scaling_plots_orig.ipynb | bsd-3-clause | %pylab inline
import numpy as np
import io
data = b'''
# cpu ncpu ntarg time
hsw 4 64 123
hsw 8 64 85
hsw 16 64 68
hsw 32 64 68
hsw 64 64 76
hsw 16 32 50
hsw 16 128 102
hsw 16 256 169
knl 16 64 380
knl 64 64 315
knl 128 64 337
knl 64 128 408
knl 64 256 598
'''
x = np.loadtxt(io.BytesIO(data), dtype=[('cpu', 'S3'), ('ncpu', '<i8'), ('ntarg', '<i8'), ('time', '<f8')])
"""
Explanation: Redrock scaling tests
Starting point prior to Austin dungeon 2017
End of explanation
"""
x = x[np.argsort(x['ncpu'])]
ii = x['ntarg'] == 64
hsw = ii & (x['cpu'] == b'hsw')
knl = ii & (x['cpu'] == b'knl')
hsw_rate = 64/x['time'][hsw]
knl_rate = 64/x['time'][knl]
plot(x['ncpu'][hsw], hsw_rate, 's-', label='Haswell')
plot(x['ncpu'][knl], knl_rate, 'd-', label='KNL')
legend(loc='upper right')
xticks([4, 8, 16, 32, 64, 128])
ylim(0, 1)
xlabel('Number of cores used')
ylabel('Targets/second')
title('redshift fitting rate for 64 targets')
savefig('orig_rate_vs_cores.png')
print('HSW/KNL node rate ratio for 64 targets: {:.2f}'.format(np.max(hsw_rate) / np.max(knl_rate)))
"""
Explanation: Scaling with number of processes for fixed number of targets
End of explanation
"""
x = x[np.argsort(x['ntarg'])]
hsw = (x['ncpu'] == 16) & (x['cpu'] == b'hsw')
knl = (x['ncpu'] == 64) & (x['cpu'] == b'knl')
hsw_rate = x['ntarg'][hsw]/x['time'][hsw]
knl_rate = x['ntarg'][knl]/x['time'][knl]
subplot(211)
plot(x['ntarg'][hsw], hsw_rate, 's-', label='Haswell')
plot(x['ntarg'][knl], knl_rate, 's-', label='KNL')
legend(loc='upper left')
xlim(0, 260)
ylim(0, 1.6)
xticks([32,64,128,256])
title('Redshift fitting rate vs. sample size')
ylabel('Targets/second')
subplot(212)
plot(x['ntarg'][hsw][1:], hsw_rate[1:]/knl_rate, 'o-')
ylim(0, 5)
xlim(0, 260)
xticks([32,64,128,256])
xlabel('Number of targets')
ylabel('HSW/KNL rate')
savefig('orig_rate_vs_ntargets.png')
"""
Explanation: Scaling with number of targets
End of explanation
"""
|
sylvchev/coursMLpython | 2-AnalyseDuTitanic.ipynb | unlicense | import csv
import numpy as np
fichier_csv = csv.reader(open('train.csv', 'r'))
entetes = fichier_csv.__next__() # read the first line, which contains the headers
donnees = list() # create the list that will hold the data
for ligne in fichier_csv: # for each line read from the csv file
donnees.append(ligne) # append the values read to the donnees list
#entete = donnees[0]
#donnees[0] = []
donnees = np.array(donnees) # convert the donnees list into a numpy array
"""
Explanation: Classic data processing
In this example, we will see how to use the numpy library to read values from a csv file and start processing them.
We will use the csv module, which reads a csv file and extracts its values line by line.
We will work on the Titanic training data file. The goal is to predict the chances of surviving on board. You need to get the train.csv file (see the first lecture or download it from https://www.kaggle.com/c/titanic-gettingStarted/data ) and save it in the directory in which the notebook runs. You can use the pwd command to find out this directory; otherwise, you can move to the place where you saved your file with the cd command.
End of explanation
"""
print (donnees)
"""
Explanation: Let's look at how the data is stored in memory:
End of explanation
"""
print (donnees[1:15, 5])
"""
Explanation: Now let's look at the age column, displaying only the first 15 values:
End of explanation
"""
donnees[1:15, 5].astype(np.int)
"""
Explanation: We can see that the ages are stored as strings. Let's convert them to numbers:
End of explanation
"""
import pandas as pd
import numpy as np
"""
Explanation: Numpy does not know how to convert the empty string '' (in 6th position in our list) to a number. To handle this data, we would have to write a small algorithm. We will now see how pandas makes this kind of processing much easier.
Processing and manipulating data with pandas
End of explanation
"""
df = pd.read_csv('train.csv')
"""
Explanation: To read the csv file, we will use the read_csv function
End of explanation
"""
df.head(6)
"""
Explanation: To check that it worked, let's display the first values. We can see the passenger id, whether they survived, their class, name, sex, age, the number of siblings/spouses aboard, the number of parents or children, the ticket number, the fare, the cabin number and the port of embarkation.
End of explanation
"""
type(donnees)
type(df)
"""
Explanation: Let's compare with the type of donnees, obtained earlier: it is a numpy array. The type of df is a pandas-specific object.
End of explanation
"""
df.dtypes
"""
Explanation: We saw that with numpy, all imported values were strings. Let's check what happens with pandas
End of explanation
"""
df.info()
"""
Explanation: We can see that pandas automatically detected the types of the data in our csv file: integers, floats, or objects (strings). There are two important commands to know: df.info() and df.describe()
End of explanation
"""
df.describe()
"""
Explanation: The age is given only for 714 passengers out of 891. The same goes for the cabin number and the port of embarkation. We can also use describe() to compute several useful statistical indicators.
End of explanation
"""
df['Age'][0:15]
"""
Explanation: We can see that pandas automatically computed the statistical indicators taking only the available data into account. For example, it computed the mean age using only the 714 known values. pandas left aside the non-numeric values (name, sex, ticket, cabin, port of embarkation).
Going a bit further with pandas
Indexing and filtering
To display only the first 15 values of the age column:
End of explanation
"""
df.Age[0:15]
"""
Explanation: We can also use the syntax
End of explanation
"""
df.Age.mean()
"""
Explanation: We can compute statistical indicators directly on the columns
End of explanation
"""
colonnes_interessantes = ['Sex', 'Pclass', 'Age']
df[ colonnes_interessantes ]
"""
Explanation: We can see that it is the same value as the one displayed by describe. This syntax makes it easy to use the mean value in computations or algorithms.
To filter the data, we pass the list of desired columns:
End of explanation
"""
df[df['Age'] > 60]
"""
Explanation: In data analysis, we are often interested in filtering the data according to certain criteria. For example, the maximum age is 80. We can examine the information about the elderly passengers:
End of explanation
"""
df[df['Age'] > 60][['Pclass', 'Sex', 'Age', 'Survived']]
"""
Explanation: Since there is too much information, we can filter it:
End of explanation
"""
df[df.Age.isnull()][['Sex', 'Pclass', 'Age']]
"""
Explanation: We can see that the elderly passengers are mostly men, while the people who survived were mostly women.
We will now see how to handle the missing age values. Let's filter the data to display only the missing values
End of explanation
"""
for i in range(1, 4):
print ("In class", i, "there are", len( df[ (df['Sex'] == 'male') & (df['Pclass'] == i) ]), "men")
print ("In class", i, "there are", len( df[ (df['Sex'] == 'female') & (df['Pclass'] == i) ]), "women")
"""
Explanation: To combine filters, we can use '&'. Let's display the number of men and women in each class
End of explanation
"""
df.Age.hist(bins=20, range=(0,80))
"""
Explanation: Let's now visualize the histogram of the age distribution.
End of explanation
"""
df['Gender'] = 4 # add a new column where every value is 4
df.head()
df['Gender'] = df['Sex'].map( {'female': 0, 'male': 1} ) # the Gender column takes 0 for women and 1 for men
df.head()
"""
Explanation: Creating and modifying columns
To exploit the information about the passengers' sex, we will add a new column, called Gender, which will be 1 for men and 0 for women.
End of explanation
"""
df['FamilySize'] = df.SibSp + df.Parch
df.head()
"""
Explanation: To create and rename new columns, we can also aggregate information from different columns. For example, let's create a column storing the number of family members aboard the Titanic.
End of explanation
"""
ages_medians = np.zeros((2, 3))
ages_medians
for i in range(0,2):
for j in range(0,3):
ages_medians[i,j] = df[ (df['Gender'] == i) & (df['Pclass'] == j+1) ]['Age'].median()
ages_medians
"""
Explanation: We will fill in the missing age values with the median value, depending on class and sex.
End of explanation
"""
for i in range(0, 2):
for j in range (0, 3):
df.loc[ (df.Age.isnull()) & (df.Gender == i) & (df.Pclass == j+1), 'AgeFill'] = ages_medians[i,j]
# display the first 10 values that have been filled in
df [df.Age.isnull()][['Gender', 'Pclass', 'Age', 'AgeFill']].head(10)
"""
Explanation: Let's create a new AgeFill column that uses these median ages
End of explanation
"""
import pickle
f = open('masauvegarde.pck', 'wb')
pickle.dump(df, f)
f.close()
"""
Explanation: To save your work, you can use the pickle module, which compresses and saves your data:
End of explanation
"""
with open('masauvegarde.pck', 'rb') as f:
dff = pickle.load(f)
"""
Explanation: To recover your work, use the reverse operation, again with pickle
End of explanation
"""
ex = df[ ['Gender', 'Pclass'] ] # keep only a few features.
X = ex.as_matrix() # convert to a numpy array
print(ex.head(5))
print(X[:5,:])
"""
Explanation: Back to numpy for learning
To do machine learning and predict the survival of Titanic passengers, we can use scikit-learn. It takes numpy arrays as input, and the conversion is straightforward:
End of explanation
"""
y = df['Survived'].as_matrix()
print (y[:5])
from sklearn import svm
clf = svm.SVC()
clf.fit(X,y)
"""
Explanation: We want to predict survival, so we extract the useful information:
End of explanation
"""
print(clf.predict(X[:10,:]))
print (y[:10])
"""
Explanation: The classifier training is done, i.e. we trained an SVM on our data $X$ so that it can predict the survival $y$. To check that our SVM has indeed learned to predict the passengers' survival, we can use the predict() method and visually compare, for the first ten values, the SVM prediction and the passengers' actual survival.
End of explanation
"""
from sklearn import cross_validation
scores = cross_validation.cross_val_score(clf, X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
"""
Explanation: The SVM has learned to predict what we showed it. However, this does not assess its ability to generalize to cases it has not seen. A classic approach for that is cross-validation, i.e. training the classifier on one part of the data and testing it on another. Scikit-learn provides a very easy-to-use implementation.
End of explanation
"""
df['AgeFilled'] = df.Age # copy the Age column
df.loc[df.AgeFilled.isnull(), 'AgeFilled'] = df[df.Age.isnull()]['AgeFill'] # use the median age for the missing values
"""
Explanation: Over the 7 partitions of our data, the SVM predicts passenger survival in 77% of cases, with a standard deviation of 0.04.
To improve the results, we can add the age to the features. However, we must be careful with the missing NaN values, so we will use a new AgeFilled column containing the age or the median.
End of explanation
"""
X = df[['Gender', 'Pclass', 'AgeFilled']].as_matrix()
scores = cross_validation.cross_val_score(svm.SVC(), X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
"""
Explanation: We can now create a new $X$ including age, in addition to sex and class, and check whether this improves the SVM's performance.
End of explanation
"""
|
nesterione/problem-solving-and-algorithms | problems/Calculus/mkr.ipynb | apache-2.0 | # Thermal conductivity
lam = 401
# Specific heat capacity
c = 385
# Material density
ro = 8900
# Compute the coefficient that captures the material's physical properties (thermal diffusivity)
alpha = lam/(c*ro)
"""
Explanation: Finite Difference Method
1. Preparation
Define the physical parameters of the material
End of explanation
"""
# Rod length in meters
stick_length = 1
# Simulation time in seconds
time_span = 200
# Spatial step along the rod
dx = 0.02
# Time step
dt = 5
"""
Explanation: Now the problem parameters
End of explanation
"""
# Initial rod temperature
Tinit = 300
# Temperature at the right end
Tright = 350
# Temperature at the left end
Tleft = 320
"""
Explanation: Define the boundary and initial conditions
End of explanation
"""
# Compute the number of time steps
time_step_count = int(time_span / dt) + 1
# Compute the number of spatial steps
len_step_count = int(stick_length / dx) + 1
# Dimension of the global matrix
dim_global_matrix = time_step_count*len_step_count
"""
Explanation: Introduce auxiliary variables for convenience
End of explanation
"""
def get_gidx(time_line, x):
return time_line*len_step_count+x
"""
Explanation: Define a helper function that converts local coordinates (time layer, spatial node index) into an index in the global matrix
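For example, with the grid used here (51 nodes per layer is an assumed value, from dx = 0.02 over a 1 m rod), the mapping is a plain row-major flattening:

```python
# Row-major flattening: node m = 3 on time layer p = 2 lands at
# global index 2*51 + 3.
len_step_count = 51
def get_gidx(time_line, x):
    return time_line * len_step_count + x

print(get_gidx(2, 3))  # 105
```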
End of explanation
"""
# Import the numerical library
import numpy as np
"""
Explanation: 2. Solution
At this stage we assemble the matrices, apply the stencil, and solve the system of equations
End of explanation
"""
sigma = 1
c = 2*alpha*dt*(1-sigma) - dx**2 # coefficient of u[p][m] (note: reuses the name c; the specific heat is no longer needed)
v = 2*dt * alpha*sigma + dx**2 # coefficient of u[p+1][m]
h = -alpha*dt*sigma # coefficient of u[p+1][m+1]
j = -alpha*dt*sigma # coefficient of u[p+1][m-1]
k = -alpha*dt*(1- sigma) # coefficient of u[p][m-1]
p = -alpha*dt*(1- sigma) # coefficient of u[p][m+1]
# Assemble the matrices
A = np.diag(np.ones(dim_global_matrix))
B = np.zeros([dim_global_matrix])
"""
Explanation: Introduce the stencil coefficients
These expressions were derived for the one-dimensional heat-conduction equation using a two-layer six-point stencil; depending on the value of sigma we get 3 different schemes:
* 0 - explicit scheme (remember the stability conditions)
* 0.5 - Crank-Nicolson scheme
* 1 - implicit four-point scheme
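Written out, the σ-weighted two-layer six-point scheme for the 1-D heat equation $u_t = \alpha u_{xx}$ is:

$$\frac{u_m^{p+1} - u_m^{p}}{\Delta t} = \alpha\left[\sigma\,\frac{u_{m+1}^{p+1} - 2u_m^{p+1} + u_{m-1}^{p+1}}{\Delta x^2} + (1-\sigma)\,\frac{u_{m+1}^{p} - 2u_m^{p} + u_{m-1}^{p}}{\Delta x^2}\right]$$

Multiplying through by $\Delta t\,\Delta x^2$ and collecting the six nodal values reproduces the coefficients `v, h, j, c, k, p` defined in the cell above.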
End of explanation
"""
# Initial condition
B[0:len_step_count]=Tinit
# Boundary conditions
B[len_step_count::len_step_count] = Tleft
right_column = len_step_count+len_step_count-1
B[right_column::len_step_count] = Tright
"""
Explanation: Fill the vector B with the initial and boundary conditions
End of explanation
"""
for time_line in range(1, time_step_count):
for x_index in range(1, len_step_count-1):
active_row = get_gidx(time_line, x_index)
A[active_row, get_gidx(time_line, x_index)] = v
A[active_row, get_gidx(time_line, x_index+1)] = h
A[active_row, get_gidx(time_line, x_index-1)] = j
A[active_row, get_gidx(time_line-1, x_index)] = c
A[active_row, get_gidx(time_line-1, x_index-1)] = k
A[active_row, get_gidx(time_line-1, x_index+1)] = p
"""
Explanation: Assemble the global matrix, filling it with our stencil
End of explanation
"""
X = np.linalg.solve(A, B)
print(X)
"""
Explanation: Solve the system of linear equations
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
x_axis_step = np.arange(0, stick_length+dx/2.0, dx)
for n in range(1, time_step_count):
plt.plot(x_axis_step, X[len_step_count*n:len_step_count*n+len_step_count])
plt.show()
"""
Explanation: 3. Results
Plot 2D temperature profiles for each time layer
End of explanation
"""
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
XA = np.arange(0, stick_length+dx/2.0, dx)
YA = np.arange(0, time_span+dt/2.0, dt)
XA, YA = np.meshgrid(XA, YA)
ZA = X.reshape((time_step_count,len_step_count)).tolist()
surf = ax.plot_surface(XA, YA, ZA, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
# Show the plot interactively
%matplotlib qt
from matplotlib import interactive
interactive(True)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
XA = np.arange(0, stick_length+dx/2.0, dx)
YA = np.arange(0, time_span+dt/2.0, dt)
XA, YA = np.meshgrid(XA, YA)
ZA = X.reshape((time_step_count,len_step_count)).tolist()
surf = ax.plot_surface(XA, YA, ZA, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
### Save the results to a file
f=open('results.csv','wb')
header = ";".join([ 'X='+ str(x) for x in x_axis_step ])
f.write( bytes(header+'\n', 'UTF-8'))
np.savetxt(f,ZA, delimiter=";")
f.close()
"""
Explanation: Build a 3D surface plot
End of explanation
"""
|
georgetown-analytics/classroom-occupancy | models/Sensor Data Ingestion & Cleaning_KM.ipynb | mit | %matplotlib inline
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.dates as md
import matplotlib.pyplot as plt
import matplotlib.ticker as tkr
import seaborn as sns
sns.set_palette('RdBu', 10)
"""
Explanation: Import & Clean Sensor Data
Dataset Features:
<strong>Instances:</strong> 53752 (5-second interval)
<strong>Attributes:</strong> 10
Table of Contents<a name='table of contents'></a>
Data Ingestion
Data Wrangling
a. Temperature Data
b. Humidity Data
c. CO₂ Data
d. Light Data
e. Light Status
f. Noise Data
g. Bluetooth Devices Data
h. Images
i. Door Status
j. Occupancy Count
Concatenate Data
Dummy Variables
Resample Data
Category Variable
Save Data
End of explanation
"""
URL = 'https://raw.githubusercontent.com/georgetown-analytics/classroom-occupancy/master/dataset_builder/dataset-5sec.csv'
def fetch_data(fname='dataset-5sec.csv'):
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'wb') as f:
f.write(response.content)
return outpath
# Fetch the data from the URL
DATA = fetch_data()
# Read csv file in as a pandas dataframe with a DateTimeIndex: df
df = pd.read_csv('dataset-5sec.csv', index_col='datetime', parse_dates=True)
# Delete temperature_f and occupancy_category columns
df.drop(['temperature_f', 'occupancy_category'], axis=1, inplace=True)
# Slice data from Nick's office
df_nick = df.drop(df.loc['2017-03-25':'2017-05-16'].index)
df_nick = df_nick.drop(df_nick.loc['2017-06-03':'2017-06-10'].index)
# Slice Georgetown data
df.drop(df.loc['2017-05-17':'2017-05-24'].index, inplace=True)
# View updated list of days captured in the data: updated_byday
updated_byday = df.groupby(df.index.strftime('%D')).count()
print(updated_byday)
# Export data from Nick's office to a csv file: df_nick.csv
df_nick.to_csv('nick_updated.csv')
df.info()
"""
Explanation: Data Ingestion<a name='data ingestion'></a>
End of explanation
"""
# Create temperature dataframe with DateTimeIndex
temperature_data = df[['temperature']].copy()
# Summary statistics of temperature data
temperature_data.info()
temperature_data.describe()
# Histogram of temperature data
plt.hist(temperature_data['temperature'])
plt.title('Temperature Data', size=18)
plt.xlabel('Temperature °C', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
# Distribution of temperature data
sns.distplot(temperature_data['temperature'])
plt.title('Distribution of Temperature Data', size=18)
plt.xlabel('Temperature °C', size=14)
plt.tick_params(labelsize=12)
# Create temperature dataframe with a daily PeriodIndex: temp_period
temp_period = temperature_data.to_period(freq='D')
# Box-and-whisker plots for daily temperature data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=temp_period.index, y='temperature', data=temp_period,
ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29',
'May 5', 'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Temperature Data', fontsize= 24)
ax.set_xlabel('Date', fontsize=18)
ax.set_xticklabels(labels)
ax.set_ylabel('Temperature °C', fontsize=16)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_temperature_boxplots.png')
"""
Explanation: Data Wrangling<a name='data wrangling'></a>
Temperature Data <a name='temperature'></a>
For Reference: The temperature sensor has a range of -40 to 125°C. OSHA recommends temperature control in the range of 20-24.4°C (68-76°F).
End of explanation
"""
# Create humidity dataframe with DateTimeIndex: humidity_data
humidity_data = df[['humidity']].copy()
humidity_data.info()
humidity_data.describe()
# Histogram of humidity data
plt.hist(humidity_data['humidity'])
plt.title('Humidity Data', size=18)
plt.xlabel('Humidity %', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
# Distribution of humidity data
sns.distplot(humidity_data['humidity'])
plt.title('Distribution of Humidity Data', size=18)
plt.xlabel('Humidity %', size=14)
plt.tick_params(labelsize=12)
# Create humidity dataframe with a daily PeriodIndex: humidity_period
humidity_period = humidity_data.to_period(freq='D')
# Box-and-whisker plots for daily humidity data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=humidity_period.index, y='humidity', data=humidity_period,
ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Humidity Data',fontsize=24)
ax.set_xlabel('Date', fontsize=18)
ax.set_xticklabels(labels)
ax.set_ylabel('Humidity %', fontsize=18)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_humidity_boxplots.png')
"""
Explanation: Humidity <a name='humidity'></a>
For Reference: The sensor has a range of 0-100% relative humidity (RH). OSHA recommends humidity control in the range of 20%-60%.
End of explanation
"""
# Create CO2 dataframe with DateTimeIndex: co2_data
co2_data = df[['co2']].copy()
co2_data.info()
co2_data.describe()
# Histogram of CO2 data
plt.hist(co2_data['co2'])
plt.title('CO2 Data', size=18)
plt.xlabel('CO2 Level (ppm)', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
# Distribution of CO2 data
sns.distplot(co2_data['co2'])
plt.title('Distribution of CO2 Data', size=18)
plt.xlabel('CO2 Level (ppm)', size=14)
plt.tick_params(labelsize=12)
# Create CO2 dataframe with a daily PeriodIndex: co2_period
co2_period = co2_data.to_period(freq='D')
# Box-and-whisker plots of daily CO2 data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=co2_period.index, y='co2', data=co2_period,
ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily CO2 Data', fontsize=26)
ax.set_xlabel('Date', fontsize=20)
ax.set_ylabel('CO2 Level (ppm)', fontsize=16)
ax.set_xticklabels(labels)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_co2.png')
# Plot spike in CO2 level on April 1, 2017
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(co2_data.loc['April 1, 2017'])
ax.set_title('Spike in CO2 Level on April 1, 2017', fontsize=18)
ax.set_ylabel('CO2 Level (ppm)', fontsize=16, weight='bold')
ax.set_xlabel('Time of Day', fontsize=16)
plt.tick_params(labelsize=12)
ax.xaxis.set_major_formatter(md.DateFormatter('%I:%M %p'))
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
ax = plt.axes([.58, .55, .3, .3], facecolor='w')
ax.plot(co2_data['co2'].loc['2017-04-01 09:55:00':'2017-04-01 10:38:00'].index,
co2_data['co2'].loc['2017-04-01 09:55:00':'2017-04-01 10:38:00'], 'g', linewidth=2.0)
ax.xaxis.set_major_formatter(md.DateFormatter('%I:%M'))
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.savefig('exploratory_data_analysis/co2_spike.png')
"""
Explanation: CO₂ Data <a name='co2'></a>
For Reference: The CO₂ sensor has a range of 0-2000 parts per million (ppm). OSHA recommends keeping indoor CO₂ levels below 1000 ppm.
* 250-350 ppm: background (normal) outdoor air level
* 350-1000 ppm: typical level found in occupied spaces with good air exchange
* 1000-2000 ppm: level associated with complaints of drowsiness and poor air
End of explanation
"""
# Delete error co2 values
co2_data = co2_data[co2_data['co2'] <= 1628]
"""
Explanation: To remove the error values caused by the spike, I decided to delete the values above 1628. I chose that value by looking at the max value for the other days, since they did not have any spikes, and the highest value was 1628 on June 10th.
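The cutoff can be checked by inspecting each day's maximum reading; a sketch on hypothetical readings (the notebook would apply the same groupby to `co2_data`):

```python
import pandas as pd

# Hypothetical 5-second readings across a spike day and a normal day.
idx = pd.date_range('2017-04-01 10:00', periods=4, freq='5s').append(
      pd.date_range('2017-06-10 10:00', periods=4, freq='5s'))
co2 = pd.DataFrame({'co2': [600, 1900, 1950, 640, 700, 1628, 900, 800]}, index=idx)

# Maximum CO2 per day -- the spike day stands out against the rest.
daily_max = co2.groupby(co2.index.date)['co2'].max()
print(daily_max)
```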
End of explanation
"""
# Create noise dataframe with DateTimeIndex: noise_data
noise_data = df[['noise']].copy()
noise_data.info()
noise_data.describe()
# Histogram of noise data
n_data = len(noise_data.noise)
n_bins = np.sqrt(n_data)
n_bins = int(n_bins)
fig, ax = plt.subplots()
ax.hist(noise_data['noise'], bins=n_bins, range=(noise_data['noise'].min(), noise_data['noise'].max()))
plt.title('Noise Data', size=18)
plt.xlabel('Noise Level (Hz)', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.savefig('exploratory_data_analysis/noise_histogram.png')
# Distribution of noise of data
sns.distplot(noise_data['noise'])
plt.title('Distribution of Noise Data', size=18)
plt.xlabel('Noise Level (Hz)', size=14)
plt.tick_params(labelsize=12)
# Create noise dataframe with a daily PeriodIndex: noise_period
noise_period = noise_data.to_period(freq='D')
# Box-and-whisker plots of daily noise data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=noise_period.index, y='noise', data=noise_period,
ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Noise Data',fontsize= 24)
ax.set_xlabel('Date', fontsize=18)
ax.set_xticklabels(labels)
ax.set_ylabel('Noise (Hz)', fontsize=16)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_noise_boxplots.png')
"""
Explanation: Noise Data<a name='noise'></a>
Human speech frequencies are in the range of 500 Hz to 4,000 Hz. A young person with normal hearing can hear frequencies between approximately 20 Hz and 20,000 Hz.
End of explanation
"""
# Create light dataframe with DateTimeIndex: light_data
light_data = df[['light']].copy()
light_data.info()
light_data.describe()
# Histogram of light data
n_data = len(light_data.light)
n_bins = np.sqrt(n_data)
n_bins = int(n_bins)
fig, ax = plt.subplots()
ax.hist(light_data['light'], bins=n_bins, range=(light_data['light'].min(), light_data['light'].max()))
plt.title('Light Data', size=18)
plt.xlabel('Light Level (lux)', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
ax.xaxis.set_major_formatter(tkr.FuncFormatter(lambda x, p: format(int(x), ',')))
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
#plt.savefig('exploratory_data_analysis/light_histogram.png')
# Create light dataframe with a daily PeriodIndex: light_period
light_period = light_data.to_period(freq='D')
# Box-and-whiskers plots of daily light data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=light_period.index, y='light', data=light_period, ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Light Data', fontsize=24)
ax.set_xlabel('Date', fontsize=20)
ax.set_xticklabels(labels)
ax.set_ylabel('Light (lux)', fontsize=20)
plt.tick_params(labelsize=14)
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.savefig('exploratory_data_analysis/daily_light_boxplots.png')
"""
Explanation: Light Data <a name='light'></a>
Illuminance is measured in foot candles or lux (in the metric SI system). GSA recommends a nominal illumination level (Lumens/Square Meter lux) of 300 for conference rooms, or 500 Lux in work station space, open and closed offices, and in training rooms.
End of explanation
"""
# Plot light data for May 5, 2017
fig, ax = plt.subplots()
ax.plot(light_data.loc['May 5, 2017 21:20:00':'May 5, 2017 22:00:00'])
plt.title('Light Data: May 5, 2017', fontsize=16)
plt.xlabel('Time of Day', fontsize=14)
plt.ylabel('Light, lux', fontsize=14)
plt.tick_params(labelsize=12)
ax.xaxis.set_major_formatter(md.DateFormatter('%I:%M %p'))
plt.savefig('exploratory_data_analysis/light_may5.png')
# Delete error light values
light_data = light_data[light_data['light'] < 4000]
light_data.info()
"""
Explanation: The light sensor generated several large error values or 0 readings when it would restart. In addition, it also generated error values at the end of the day when it was turned off. In the following plot of light data on May 5th, the light values never went over 400 lux until the final seconds of the day.
End of explanation
"""
light_data.light['March 25, 2017 11:48:20':'March 25, 2017 11:49:00']
# Delete error 0 light values
light_data = light_data[light_data['light'] != 0]
# Updated histogram of light data
fig, ax = plt.subplots()
ax.hist(light_data['light'], bins=15)
plt.title('Updated Light Data', size=18)
plt.xlabel('Light Level (lux)', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.savefig('exploratory_data_analysis/updated_light_hist.png')
# Distribution of light data
sns.distplot(light_data['light'])
plt.title('Distribution of Light Data: Updated', size=18)
plt.xlabel('Light Level (lux)', size=14)
plt.tick_params(labelsize=12)
# Create light dataframe with a daily PeriodIndex: light_period
light_period = light_data.to_period(freq='D')
# Box-and-whiskers plots of daily light data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=light_period.index, y='light', data=light_period, ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Updated Daily Light Data', fontsize=24)
ax.set_xlabel('Date', fontsize=20)
ax.set_xticklabels(labels)
ax.set_ylabel('Light (lux)', fontsize=20)
plt.tick_params(labelsize=14)
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.savefig('exploratory_data_analysis/light_boxplots_updated.png')
"""
Explanation: While a light value of 0 is possible, it's unlikely since even with the classroom lights turned off, there still would have been light from the hallway. In addition, I concluded that these 0 values were errors since they were isolated readings, as can be seen below.
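One way to confirm that a zero is an isolated reading (a hypothetical series stands in for `light_data['light']` here):

```python
import pandas as pd

# A zero flanked by nonzero readings is likely a sensor glitch rather than
# a genuinely dark room.
s = pd.Series([310, 0, 305, 300, 0, 0, 0, 290])
isolated = (s == 0) & (s.shift(1) != 0) & (s.shift(-1) != 0)
print(isolated.tolist())  # only index 1 is an isolated zero
```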
End of explanation
"""
# Create light_status dataframe with DateTimeIndex: light_status
light_status = df[['light_status']].copy()
light_status.info()
# Count the number of each category value
light_status['light_status'].value_counts()
"""
Explanation: Light Status<a name='light status'></a>
Categorical variable with two possible values: 'light-on' and 'light-off'
End of explanation
"""
# Create bluetooth devices dataframe with DateTimeIndex: bluetooth_data
bluetooth_data = df[['bluetooth_devices']].copy()
bluetooth_data.info()
bluetooth_data.describe()
# Histogram of bluetooth data
fig, ax = plt.subplots()
ax.hist(bluetooth_data['bluetooth_devices'], bins=20)
plt.title('Bluetooth Devices', size=18)
plt.xlabel('No. of Bluetooth Devices', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
# Distribution of bluetooth data
sns.distplot(bluetooth_data['bluetooth_devices'])
plt.title('Distribution of Bluetooth Data', size=18)
plt.xlabel('Bluetooth Devices', size=14)
plt.tick_params(labelsize=12)
# Create bluetooth dataframe with a daily PeriodIndex
bluetooth_period = bluetooth_data.to_period(freq='D')
# Box-and-whiskers plots of daily bluetooth data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=bluetooth_period.index, y='bluetooth_devices', data=bluetooth_period,
ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Bluetooth Devices Data', fontsize=24)
ax.set_xlabel('Date', fontsize=20)
ax.set_xticklabels(labels)
ax.set_ylabel('Bluetooth Devices', fontsize=20)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_bluetooth.png')
# Plot bluetooth data for May 5, 2017
fig, ax = plt.subplots()
ax.plot(bluetooth_data.loc['May 5, 2017'])
plt.title('Bluetooth Devices Data: May 5, 2017', fontsize=16)
plt.xlabel('Time of Day', fontsize=14)
plt.ylabel('No. of Bluetooth Devices', fontsize=14)
plt.tick_params(labelsize=12)
plt.savefig('exploratory_data_analysis/bluetooth_may5.png')
"""
Explanation: Bluetooth Devices<a name='bluetooth devices'></a>
End of explanation
"""
# Create image dataframe with DateTimeIndex: image_data
image_data = df[['image_hist_change']].copy()
image_data.info()
image_data.describe()
# Histogram of image data
plt.hist(image_data['image_hist_change'], bins=20)
plt.title('Image Data', size=18)
plt.xlabel('% Change in Hist', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
# Create image dataframe with a daily PeriodIndex: image_period
image_period = image_data.to_period(freq='D')
# Box-and-whiskers plots of daily image data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=image_period.index, y='image_hist_change',
data=image_period, ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Image Data', fontsize=24)
ax.set_xlabel('Date', fontsize=16)
ax.set_xticklabels(labels)
ax.set_ylabel('% Change Hist', fontsize=16)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_image_boxplots.png')
# Plot image data for May 6, 2017
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(image_data.loc['May 6, 2017'])
plt.title('Image Data: May 6, 2017', fontsize=22)
plt.xlabel('Time of Day', fontsize=16)
plt.ylabel('% Change in Hist', fontsize=16)
plt.tick_params(labelsize=12)
ax.xaxis.set_major_formatter(md.DateFormatter('%I:%M %p'))
plt.savefig('images_may6.png')
# Plot spike in image data on May 6, 2017
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(image_data.loc['May 6, 2017'])
ax.set_title('Image Data: May 6, 2017', fontsize=20)
ax.set_ylabel('% Change Hist', fontsize=16, weight='bold')
ax.set_xlabel('Time of Day', fontsize=16)
plt.tick_params(labelsize=12)
ax.xaxis.set_major_formatter(md.DateFormatter('%I:%M %p'))
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
ax = plt.axes([.58, .56, .3, .3], facecolor='w')
ax.plot(image_data['image_hist_change'].loc['2017-05-06 12:00:00':'2017-05-06 13:05:00'].index,
image_data['image_hist_change'].loc['2017-05-06 12:00:00':'2017-05-06 13:05:00'], 'g', linewidth=2.0)
ax.xaxis.set_major_formatter(md.DateFormatter('%I:%M'))
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.savefig('exploratory_data_analysis/image_lunch_may6.png')
"""
Explanation: Images<a name='images'></a>
End of explanation
"""
# Create door status dataframe with DateTimeIndex: door_status
door_status = df[['door_status']].copy()
# Count the number of each category value
door_status['door_status'].value_counts()
"""
Explanation: Door Status<a name='door status'></a>
Categorical variable with two possible values: 'closed' and 'opened'
End of explanation
"""
# Create occupancy count dataframe with DateTimeIndex: occupancy_count
occupancy_count = df[['occupancy_count']].copy()
occupancy_count.info()
occupancy_count.describe()
# Histogram of occupancy data
fig, ax = plt.subplots()
ax.hist(occupancy_count['occupancy_count'], bins=20)
plt.title('Room Occupancy', size=18)
plt.xlabel('Total People', size=14)
plt.ylabel('Count', size=14)
plt.tick_params(labelsize=12)
ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: format(int(y), ',')))
# Create 'occupancy_count' dataframe with a daily PeriodIndex: occupancy_period
occupancy_period = occupancy_count.to_period(freq='D')
# Box-and-whiskers plots of daily 'occupancy_count' data
fig, ax = plt.subplots(figsize=(16,8))
sns.boxplot(x=occupancy_period.index, y='occupancy_count',
data=occupancy_period, ax=ax, palette=sns.color_palette('RdBu', 10))
labels = (['March 25', 'April 1', 'April 8', 'April 22', 'April 29', 'May 5',
'May 6', 'May 12', 'May 13', 'June 3', 'June 10'])
ax.set_title('Daily Occupancy Count', fontsize=24)
ax.set_xlabel('Date', fontsize=16)
ax.set_xticklabels(labels)
ax.set_ylabel('Total People in Classroom', fontsize=16)
plt.tick_params(labelsize=14)
plt.savefig('exploratory_data_analysis/daily_occupancy.png')
"""
Explanation: Occupancy Count<a name='occupancy count'></a>
End of explanation
"""
# Concatenate cleaned sensor data in a new dataframe: sensor_all
# Backward fill missing data
sensor_all = pd.concat([temperature_data, humidity_data, co2_data, noise_data,
light_data, light_status, bluetooth_data, image_data,
door_status, occupancy_count], axis=1).fillna(method='bfill')
"""
Explanation: Concatenate Sensor Data<a name='concatenate data'></a>
Concatenate temperature, humidity, CO₂, light, light status, noise, bluetooth, image, door_status, and occupancy count data into a new pandas dataframe
End of explanation
"""
# Create dummy variables with drop_first=True: sensor_data
sensor_data = pd.get_dummies(sensor_all, drop_first=True)
# Print the new columns of df
sensor_data.columns
# Rearrange columns
sensor_data = sensor_data[['temperature', 'humidity', 'co2', 'light', 'light_status_light-on', 'noise',
'bluetooth_devices', 'image_hist_change', 'door_status_opened', 'occupancy_count']]
sensor_data.columns = ['temp', 'humidity', 'co2', 'light', 'light_status', 'noise',
                       'bluetooth_devices', 'images', 'door_status', 'occupancy_count']
sensor_data.info()
"""
Explanation: Dummy Variables<a name='dummy variables'></a>
End of explanation
"""
# Resample data by taking the mean per minute
sensor_data = sensor_data.resample('T').mean().dropna()
"""
Explanation: Resample Data<a name='resample data'></a>
End of explanation
"""
# Create a categorical target column by binning 'occupancy_count': occupancy_level
sensor_data['occupancy_level'] = pd.cut(sensor_data['occupancy_count'], [0, 1, 16, 27, 45],
labels=['empty', 'low', 'mid-level', 'high'], include_lowest=True)
# Breakdown of classroom occupancy levels
sensor_data.occupancy_level.value_counts()
"""
Explanation: Create Category Variable<a name='occupancy level'></a>
End of explanation
"""
# Export updated sensor data to a CSV file: sensor_data_ml.csv
sensor_data.to_csv('sensor_data_ml.csv')
"""
Explanation: Save Data <a name='save data'></a>
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/tutorials/l3.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: "Third" Light
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.filter(qualifier='l3_mode')
"""
Explanation: Relevant Parameters
NEW in PHOEBE 2.2: an l3_mode parameter exists for each LC dataset, which determines whether third light will be provided in flux units, or as a fraction of the total flux.
Since this is passband dependent and only used for flux measurments - it does not yet exist for a new empty Bundle.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: So let's add an LC dataset
End of explanation
"""
print(b.filter(qualifier='l3*'))
"""
Explanation: We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.
End of explanation
"""
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3'))
"""
Explanation: l3_mode = 'flux'
When l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset.
End of explanation
"""
print(b.compute_l3s())
"""
Explanation: To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
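The conversion can be sketched numerically. The relation below (`l3_frac = l3 / (l3 + F_system)`, with `F_system` the sum of the extrinsic passband luminosities divided by $4\pi$) is an assumption for illustration, not a call into PHOEBE itself:

```python
import numpy as np

# Hedged sketch of the flux -> fraction conversion described above.
def l3_frac_from_flux(l3_flux, pblums_ext):
    f_system = sum(pblums_ext) / (4 * np.pi)
    return l3_flux / (l3_flux + f_system)

# Two components each with extrinsic pblum = 4*pi -> F_system = 2
print(l3_frac_from_flux(2.0, [4 * np.pi, 4 * np.pi]))  # 0.5
```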
End of explanation
"""
b.set_value('l3_mode', 'fraction')
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3_frac'))
"""
Explanation: l3_mode = 'fraction'
When l3_mode is set to 'fraction', the l3 parameter is now replaced by an l3_frac parameter.
End of explanation
"""
print(b.compute_l3s())
"""
Explanation: Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
End of explanation
"""
b.run_compute(irrad_method='none', model='no_third_light')
b.set_value('l3_mode', 'flux')
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light')
"""
Explanation: Influence on Light Curves (Fluxes)
"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.
To see this we'll compare a light curve with and without "third" light.
End of explanation
"""
afig, mplfig = b['lc01'].plot(model='no_third_light')
afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)
"""
Explanation: As expected, adding 5 W/m^2 of third light simply shifts the light curve up by that exact same amount.
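The additive behavior can be sketched directly (a toy flux array, not PHOEBE output):

```python
import numpy as np

# Third light in 'flux' mode is a constant additive offset on the fluxes.
flux = np.array([1.00, 0.82, 1.00, 0.95])
l3 = 5.0
shifted = flux + l3
print(shifted - flux)  # [5. 5. 5. 5.]
```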
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b.set_value('l3', 0.0)
b.run_compute(irrad_method='none', model='no_third_light', overwrite=True)
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light', overwrite=True)
print("no_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light')))
print("no_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light')))
"""
Explanation: Influence on Meshes (Intensities)
"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our models again and look at the values of the intensities in the mesh.
End of explanation
"""
|
jesseklein406/bikeshare | Bikeshare.ipynb | mit | from pandas import DataFrame, Series
import pandas as pd
import numpy as np
weather_data = pd.read_table('data/daily_weather.tsv')
season_mapping = {'Spring': 'Winter', 'Winter': 'Fall', 'Fall': 'Summer', 'Summer': 'Spring'}
def fix_seasons(x):
return season_mapping[x]
weather_data['season_desc'] = weather_data['season_desc'].apply(fix_seasons)
weather_data.pivot_table(index='season_desc', values='temp', aggfunc=np.mean)
"""
Explanation: Question 1
Compute the average temperature by season ('season_desc'). (The temperatures are numbers between 0 and 1, but don't worry about that. Let's say that's the Shellman temperature scale.)
I get
season_desc
Fall 0.711445
Spring 0.321700
Summer 0.554557
Winter 0.419368
Which clearly looks wrong. Figure out what's wrong with the original data and fix it.
End of explanation
"""
weather_data.groupby('season_desc')['temp'].mean()
"""
Explanation: In this case, a pivot table is not really required, so a simple use of groupby and mean() will do the job.
End of explanation
"""
weather_data['Month'] = pd.DatetimeIndex(weather_data.date).month
weather_data.groupby('Month')['total_riders'].sum()
"""
Explanation: Question 2
Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.
End of explanation
"""
pd.concat([weather_data['temp'], weather_data['total_riders']], axis=1).corr()
"""
Explanation: Question 3
Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?
End of explanation
"""
weather_data[['total_riders', 'temp', 'Month']].groupby('Month').corr()
"""
Explanation: Check how correlation between temp and total riders varies across months.
End of explanation
"""
weather_data[['total_riders', 'temp', 'season_desc']].groupby('season_desc').corr()
"""
Explanation: Check how correlation between temp and total riders varies across seasons.
End of explanation
"""
month_riders = weather_data.groupby('Month')['total_riders'].sum()
month_avg_temp = weather_data.groupby('Month')['temp'].mean()
pd.concat([month_riders, month_avg_temp], axis=1)
"""
Explanation: Investigate total riders by month versus average monthly temp.
End of explanation
"""
season_riders = weather_data.groupby('season_desc')['total_riders'].sum()
season_temp = weather_data.groupby('season_desc')['temp'].mean()
pd.concat([season_riders, season_temp], axis=1)
"""
Explanation: Investigate total riders by season versus average seasonal temp.
End of explanation
"""
weather_data[['no_casual_riders', 'no_reg_riders', 'is_work_day', 'is_holiday']].corr()
"""
Explanation: Question 4
There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
Investigate correlations between casual and reg riders on work days and holidays.
End of explanation
"""
weather_data[['no_casual_riders', 'no_reg_riders', 'windspeed']].corr()
usage = pd.read_table('data/usage_2012.tsv')
"""
Explanation: Investigate correlations between casual and reg riders and windspeed.
End of explanation
"""
usage.groupby('cust_type')['duration_mins'].mean()
"""
Explanation: Compare average rental duration between customer types.
End of explanation
"""
|
telescopeuser/workshop_blog | wechat_tool/lesson_2.ipynb | mit | # Copyright 2016 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# !pip install --upgrade google-api-python-client
"""
Explanation: 如何使用和开发微信聊天机器人的系列教程
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='http://www.kudosdata.com/wp-content/uploads/2016/11/cropped-KudosLogo1.png' width=30% style="float: right;">
<img src='reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
http://www.KudosData.com
by: Sam.Gu@KudosData.com
May 2017 ========== Scan the QR code to become trainer's friend in WeChat ========>>
第二课:图像识别和处理
Lesson 2: Image Recognition & Processing
识别图片消息中的物体名字 (Recognize objects in image)
[1] 物体名 (General Object)
[2] 地标名 (Landmark Object)
[3] 商标名 (Logo Object)
识别图片消息中的文字 (OCR: Extract text from image)
包含简单文本翻译 (Call text translation API)
识别人脸 (Recognize human face)
基于人脸的表情来识别喜怒哀乐等情绪 (Identify sentiment and emotion from human face)
不良内容识别 (Explicit Content Detection)
Using Google Cloud Platform's Machine Learning APIs
From the same API console, choose "Dashboard" on the left-hand menu and "Enable API".
Enable the following APIs for your project (search for them) if they are not already enabled:
<ol>
<li> Google Translate API </li>
<li> Google Cloud Vision API </li>
<li> Google Natural Language API </li>
<li> Google Cloud Speech API </li>
</ol>
Finally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)
End of explanation
"""
import io, os, subprocess, sys, time, datetime, requests, itchat
from itchat.content import *
from googleapiclient.discovery import build
"""
Explanation: 导入需要用到的一些功能程序库 (Import the libraries we will need):
End of explanation
"""
# Here I read in my own API_KEY from a file, which is not shared in Github repository:
# with io.open('../../API_KEY.txt') as fp:
# for line in fp: APIKEY = line
# You need to un-comment below line and replace 'APIKEY' variable with your own GCP API key:
APIKEY='AIzaSyCvxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# Below is for GCP Language Tranlation API
service = build('translate', 'v2', developerKey=APIKEY)
"""
Explanation: Using Google Cloud Platform's Machine Learning APIs
First, visit <a href="http://console.cloud.google.com/apis">API console</a>, choose "Credentials" on the left-hand menu. Choose "Create Credentials" and generate an API key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that field blank and delete the API key after trying out this demo.
Copy-paste your API Key here:
End of explanation
"""
# Import the base64 encoding library.
import base64
# Pass the image data to an encoding function.
def encode_image(image_file):
    with open(image_file, "rb") as f:
        image_content = f.read()
    # b64encode returns bytes in Python 3; decode so the value can be embedded in a JSON body
    return base64.b64encode(image_content).decode('utf-8')
"""
Explanation: 图片二进制base64码转换 (Define image pre-processing functions)
End of explanation
"""
# control parameter for Image API:
parm_image_maxResults = 10 # max objects or faces to be extracted from image analysis
# control parameter for Language Translation API:
parm_translation_origin_language = '' # original language in text: to be overwritten by TEXT_DETECTION
parm_translation_target_language = 'zh' # target language for translation: Chinese
"""
Explanation: 机器智能API接口控制参数 (Define control parameters for API)
End of explanation
"""
# Running Vision API
# 'LABEL_DETECTION'
def KudosData_LABEL_DETECTION(image_base64, API_type, maxResults):
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
# 'source': {
# 'gcs_image_uri': IMAGE
# }
"content": image_base64
},
'features': [{
'type': API_type,
'maxResults': maxResults,
}]
}],
})
responses = request.execute(num_retries=3)
image_analysis_reply = u'\n[ ' + API_type + u' 物体识别 ]\n'
# 'LABEL_DETECTION'
if responses['responses'][0] != {}:
for i in range(len(responses['responses'][0]['labelAnnotations'])):
image_analysis_reply += responses['responses'][0]['labelAnnotations'][i]['description'] \
+ '\n( confidence ' + str(responses['responses'][0]['labelAnnotations'][i]['score']) + ' )\n'
else:
image_analysis_reply += u'[ Nill 无结果 ]\n'
return image_analysis_reply
"""
Explanation: * 识别图片消息中的物体名字 (Recognize objects in image)
[1] 物体名 (General Object)
End of explanation
"""
# Running Vision API
# 'LANDMARK_DETECTION'
def KudosData_LANDMARK_DETECTION(image_base64, API_type, maxResults):
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
# 'source': {
# 'gcs_image_uri': IMAGE
# }
"content": image_base64
},
'features': [{
'type': API_type,
'maxResults': maxResults,
}]
}],
})
responses = request.execute(num_retries=3)
image_analysis_reply = u'\n[ ' + API_type + u' 地标识别 ]\n'
# 'LANDMARK_DETECTION'
if responses['responses'][0] != {}:
for i in range(len(responses['responses'][0]['landmarkAnnotations'])):
image_analysis_reply += responses['responses'][0]['landmarkAnnotations'][i]['description'] \
+ '\n( confidence ' + str(responses['responses'][0]['landmarkAnnotations'][i]['score']) + ' )\n'
else:
image_analysis_reply += u'[ Nill 无结果 ]\n'
return image_analysis_reply
"""
Explanation: * 识别图片消息中的物体名字 (Recognize objects in image)
[2] 地标名 (Landmark Object)
End of explanation
"""
# Running Vision API
# 'LOGO_DETECTION'
def KudosData_LOGO_DETECTION(image_base64, API_type, maxResults):
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
# 'source': {
# 'gcs_image_uri': IMAGE
# }
"content": image_base64
},
'features': [{
'type': API_type,
'maxResults': maxResults,
}]
}],
})
responses = request.execute(num_retries=3)
image_analysis_reply = u'\n[ ' + API_type + u' 商标识别 ]\n'
# 'LOGO_DETECTION'
if responses['responses'][0] != {}:
for i in range(len(responses['responses'][0]['logoAnnotations'])):
image_analysis_reply += responses['responses'][0]['logoAnnotations'][i]['description'] \
+ '\n( confidence ' + str(responses['responses'][0]['logoAnnotations'][i]['score']) + ' )\n'
else:
image_analysis_reply += u'[ Nill 无结果 ]\n'
return image_analysis_reply
"""
Explanation: * 识别图片消息中的物体名字 (Recognize objects in image)
[3] 商标名 (Logo Object)
End of explanation
"""
# Running Vision API
# 'TEXT_DETECTION'
def KudosData_TEXT_DETECTION(image_base64, API_type, maxResults):
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
# 'source': {
# 'gcs_image_uri': IMAGE
# }
"content": image_base64
},
'features': [{
'type': API_type,
'maxResults': maxResults,
}]
}],
})
responses = request.execute(num_retries=3)
image_analysis_reply = u'\n[ ' + API_type + u' 文字提取 ]\n'
# 'TEXT_DETECTION'
if responses['responses'][0] != {}:
image_analysis_reply += u'----- Start Original Text -----\n'
image_analysis_reply += u'( Original Language 原文: ' + responses['responses'][0]['textAnnotations'][0]['locale'] \
+ ' )\n'
image_analysis_reply += responses['responses'][0]['textAnnotations'][0]['description'] + '----- End Original Text -----\n'
##############################################################################################################
# translation of detected text #
##############################################################################################################
parm_translation_origin_language = responses['responses'][0]['textAnnotations'][0]['locale']
# Call translation if parm_translation_origin_language is not parm_translation_target_language
if parm_translation_origin_language != parm_translation_target_language:
inputs=[responses['responses'][0]['textAnnotations'][0]['description']] # TEXT_DETECTION OCR results only
outputs = service.translations().list(source=parm_translation_origin_language,
target=parm_translation_target_language, q=inputs).execute()
image_analysis_reply += u'\n----- Start Translation -----\n'
image_analysis_reply += u'( Target Language 译文: ' + parm_translation_target_language + ' )\n'
image_analysis_reply += outputs['translations'][0]['translatedText'] + '\n' + '----- End Translation -----\n'
            print('Completed: Translation API ...')
##############################################################################################################
else:
image_analysis_reply += u'[ Nill 无结果 ]\n'
return image_analysis_reply
"""
Explanation: * 识别图片消息中的文字 (OCR: Extract text from image)
End of explanation
"""
# Running Vision API
# 'FACE_DETECTION'
def KudosData_FACE_DETECTION(image_base64, API_type, maxResults):
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
# 'source': {
# 'gcs_image_uri': IMAGE
# }
"content": image_base64
},
'features': [{
'type': API_type,
'maxResults': maxResults,
}]
}],
})
responses = request.execute(num_retries=3)
image_analysis_reply = u'\n[ ' + API_type + u' 面部表情 ]\n'
# 'FACE_DETECTION'
if responses['responses'][0] != {}:
for i in range(len(responses['responses'][0]['faceAnnotations'])):
image_analysis_reply += u'----- No.' + str(i+1) + ' Face -----\n'
image_analysis_reply += u'>>> Joy 喜悦: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'joyLikelihood'] + '\n'
image_analysis_reply += u'>>> Anger 愤怒: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'angerLikelihood'] + '\n'
image_analysis_reply += u'>>> Sorrow 悲伤: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'sorrowLikelihood'] + '\n'
image_analysis_reply += u'>>> Surprise 惊奇: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'surpriseLikelihood'] + '\n'
image_analysis_reply += u'>>> Headwear 头饰: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'headwearLikelihood'] + '\n'
image_analysis_reply += u'>>> Blurred 模糊: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'blurredLikelihood'] + '\n'
image_analysis_reply += u'>>> UnderExposed 欠曝光: \n' \
+ responses['responses'][0]['faceAnnotations'][i][u'underExposedLikelihood'] + '\n'
else:
image_analysis_reply += u'[ Nill 无结果 ]\n'
return image_analysis_reply
"""
Explanation: * 识别人脸 (Recognize human face)
* 基于人脸的表情来识别喜怒哀乐等情绪 (Identify sentiment and emotion from human face)
End of explanation
"""
# Running Vision API
# 'SAFE_SEARCH_DETECTION'
def KudosData_SAFE_SEARCH_DETECTION(image_base64, API_type, maxResults):
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
# 'source': {
# 'gcs_image_uri': IMAGE
# }
"content": image_base64
},
'features': [{
'type': API_type,
'maxResults': maxResults,
}]
}],
})
responses = request.execute(num_retries=3)
image_analysis_reply = u'\n[ ' + API_type + u' 不良内容 ]\n'
# 'SAFE_SEARCH_DETECTION'
if responses['responses'][0] != {}:
image_analysis_reply += u'>>> Adult 成人: \n' + responses['responses'][0]['safeSearchAnnotation'][u'adult'] + '\n'
image_analysis_reply += u'>>> Violence 暴力: \n' + responses['responses'][0]['safeSearchAnnotation'][u'violence'] + '\n'
image_analysis_reply += u'>>> Spoof 欺骗: \n' + responses['responses'][0]['safeSearchAnnotation'][u'spoof'] + '\n'
image_analysis_reply += u'>>> Medical 医疗: \n' + responses['responses'][0]['safeSearchAnnotation'][u'medical'] + '\n'
else:
image_analysis_reply += u'[ Nill 无结果 ]\n'
return image_analysis_reply
"""
Explanation: * 不良内容识别 (Explicit Content Detection)
Detect explicit content like adult content or violent content within an image.
End of explanation
"""
itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。(Persist the login state after exit: re-launching within a short time skips re-scanning the QR code.)
# itchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: 命令行显示QR图片 (render the QR code in the terminal)
# @itchat.msg_register([PICTURE], isGroupChat=True)
@itchat.msg_register([PICTURE])
def download_files(msg):
    parm_translation_origin_language = 'zh' # will be overwritten by TEXT_DETECTION
msg.download(msg.fileName)
print('\nDownloaded image file name is: %s' % msg['FileName'])
image_base64 = encode_image(msg['FileName'])
##############################################################################################################
# call image analysis APIs #
##############################################################################################################
image_analysis_reply = u'[ Image Analysis 图像分析结果 ]\n'
# 1. LABEL_DETECTION:
image_analysis_reply += KudosData_LABEL_DETECTION(image_base64, 'LABEL_DETECTION', parm_image_maxResults)
# 2. LANDMARK_DETECTION:
image_analysis_reply += KudosData_LANDMARK_DETECTION(image_base64, 'LANDMARK_DETECTION', parm_image_maxResults)
# 3. LOGO_DETECTION:
image_analysis_reply += KudosData_LOGO_DETECTION(image_base64, 'LOGO_DETECTION', parm_image_maxResults)
# 4. TEXT_DETECTION:
image_analysis_reply += KudosData_TEXT_DETECTION(image_base64, 'TEXT_DETECTION', parm_image_maxResults)
# 5. FACE_DETECTION:
image_analysis_reply += KudosData_FACE_DETECTION(image_base64, 'FACE_DETECTION', parm_image_maxResults)
# 6. SAFE_SEARCH_DETECTION:
image_analysis_reply += KudosData_SAFE_SEARCH_DETECTION(image_base64, 'SAFE_SEARCH_DETECTION', parm_image_maxResults)
    print('Completed: Image Analysis API ...')
return image_analysis_reply
itchat.run()
# interrupt kernel, then logout
itchat.logout() # 安全退出 (safe logout)
"""
Explanation: 用微信App扫QR码图片来自动登录 (Scan the QR code with the WeChat app to log in automatically)
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nasa-giss/cmip6/models/sandbox-2/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation (rheology) formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected *Thermal Fixed Salinity*, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
import espressomd.magnetostatics
import espressomd.magnetostatic_extensions
import espressomd.cluster_analysis
import espressomd.pair_criteria
espressomd.assert_features('DIPOLES', 'LENNARD_JONES')
import numpy as np
"""
Explanation: Ferrofluid - Part 1
Table of Contents
Introduction
The Model
Structure of this tutorial
Compiling ESPResSo for this Tutorial
A Monolayer-Ferrofluid System in ESPResSo
Setup
Sampling
Sampling with animation
Sampling without animation
Cluster distribution
Introduction
Ferrofluids are colloidal suspensions of ferromagnetic single-domain particles in a liquid carrier. As the single particles contain only one magnetic domain, they can be seen as small permanent magnets. To prevent agglomeration of the particles due to van-der-Waals or magnetic attraction, they are usually sterically or electrostatically stabilized (see <a href='#fig_1'>figure 1</a>). The former is achieved by adsorption of long chain molecules onto the particle surface, the latter by adsorption of charged coating particles. The size of the ferromagnetic particles is in the region of 10 nm. With the surfactant layer added they can reach a size of a few hundred nanometers. Keep in mind that if we refer to the particle diameter $\sigma$ we mean the diameter of the magnetic core plus two times the thickness of the surfactant layer.
Some of the liquid properties, like the viscosity, the phase behavior or the optical birefringence, can be altered via an external magnetic field, or the fluid can simply be guided by such a field. Thus ferrofluids possess a wide range of biomedical applications like magnetic drug targeting or magnetic thermoablation, and technical applications like fine positioning systems or adaptive bearings and dampers.
In <a href='#fig_2'>figure 2</a> the picture of a ferrofluid exposed to the magnetic field of a permanent magnet is shown. The famous energy minimizing thorn-like surface is clearly visible.
<a id='fig_1'></a><figure>
<img src="figures/Electro-Steric_Stabilization.jpg" style="float: center; width: 49%">
<center>
<figcaption>Figure 1: Schematic representation of electrostatic stabilization (picture top) and steric stabilization (picture bottom) <a href='#[3]'>[3]</a></figcaption>
</center>
</figure>
<a id='fig_2'></a><figure>
<img src='figures/Ferrofluid_Magnet_under_glass_edit.jpg' alt='ferrofluid on glass plate under which a strong magnet is placed' style='width: 600px;'/>
<center>
<figcaption>Figure 2: Real Ferrofluid exposed to an external magnetic field (neodymium magnet) <a href='#[4]'>[4]</a></figcaption>
</center>
</figure>
The Model
For simplicity in this tutorial we simulate spherical particles in a monodisperse ferrofluid system which means all particles have the same diameter $\sigma$ and dipole moment $\mu$. The point dipole moment is placed at the center of the particles and is constant both in magnitude and direction (in the coordinate system of the particle). This can be justified as the Néel relaxation times are usually negligible for the usual sizes of ferrofluid particles.
Thus the magnetic interaction potential between two single particles is the dipole-dipole interaction potential which reads
\begin{equation}
u_{\text{DD}}(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j) = \gamma \left(\frac{\vec{\mu}_i \cdot \vec{\mu}_j}{r_{ij}^3} - 3\frac{(\vec{\mu}_i \cdot \vec{r}_{ij}) \cdot (\vec{\mu}_j \cdot \vec{r}_{ij})}{r_{ij}^5}\right)
\end{equation}
with $\gamma = \frac{\mu_0}{4 \pi}$ and $\mu_0$ the vacuum permeability.
For the steric interaction in this tutorial we use the purely repulsive Weeks-Chandler-Andersen (WCA) potential which is a Lennard-Jones potential with cut-off radius $r_{\text{cut}}$ at the minimum of the potential $r_{\text{cut}} = r_{\text{min}} = 2^{\frac{1}{6}}\cdot \sigma$ and shifted by $\varepsilon_{ij}$ such that the potential is continuous at the cut-off radius. Thus the potential has the shape
\begin{equation}
u_{\text{sr}}^{\text{WCA}}(r_{ij}) = \left\{
\begin{array}{ll}
4\varepsilon_{ij}\left[ \left( \frac{\sigma}{r_{ij}} \right)^{12} - \left( \frac{\sigma}{r_{ij}} \right)^6 \right] + \varepsilon_{ij} & r_{ij} < r_{\text{cut}} \\
0 & r_{ij} \geq r_{\text{cut}} \\
\end{array}
\right.
\end{equation}
where $r_{ij}$ are the distances between two particles.
The purely repulsive character of the potential can be justified by the fact that the particles in real ferrofluids are sterically or electrostatically stabilized against agglomeration.
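The WCA potential above can be sketched in a few lines of plain Python (reduced units with $\varepsilon = \sigma = 1$, an assumption for illustration) to verify that it is continuous and zero at the cut-off and purely repulsive below it:

```python
EPSILON = 1.0
SIGMA = 1.0
R_CUT = 2**(1. / 6.) * SIGMA  # position of the Lennard-Jones minimum

def u_wca(r):
    """Truncated and shifted Lennard-Jones (WCA) pair potential."""
    if r >= R_CUT:
        return 0.0
    sr6 = (SIGMA / r)**6
    return 4 * EPSILON * (sr6**2 - sr6) + EPSILON

print(u_wca(R_CUT))              # zero at and beyond the cut-off
print(u_wca(0.9 * SIGMA) > 0.0)  # repulsive at close distances
```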
The whole interaction potential reads
\begin{equation}
u(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j) = u_{\text{sr}}(\vec{r}_{ij}) + u_{\text{DD}}(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j)
\end{equation}
The liquid carrier of the system is simulated through a Langevin thermostat.
For ferrofluid systems there are three important parameters. The first is the volume fraction in three dimensions or the area fraction in two dimensions or quasi two dimensions. The second is the dipolar interaction parameter $\lambda$
\begin{equation}
\lambda = \frac{\tilde{u}_{\text{DD}}}{u_T} = \gamma \frac{\mu^2}{k_{\text{B}}T\sigma^3}
\end{equation}
where $u_\mathrm{T} = k_{\text{B}}T$ is the thermal energy and $\tilde{u}_{DD}$ is the absolute value of the dipole-dipole interaction energy at close contact (cc) and head-to-tail configuration (htt) (see <a href='#fig_4'>figure 4</a>) per particle, i.e. in formulas it reads
\begin{equation}
\tilde{u}_{\text{DD}} = \frac{ \left| u_{\text{DD}}^{\text{htt, cc}} \right| }{2}
\end{equation}
The third parameter takes a possible external magnetic field into account and is called Langevin parameter $\alpha$. It is the ratio between the energy of a dipole moment in the external magnetic field $B$ and the thermal energy
\begin{equation}
\alpha = \frac{\mu_0 \mu}{k_{\text{B}} T}B
\end{equation}
<a id='fig_4'></a><figure>
<img src='figures/headtotailconf.png' alt='schematic representation of head to tail configuration' style='width: 200px;'/>
<center>
<figcaption>Figure 4: Schematic representation of the head-to-tail configuration of two magnetic particles at close contact.</figcaption>
</center>
</figure>
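To make the definition of $\lambda$ concrete, here is a minimal sketch in reduced units ($\gamma = k_{\text{B}}T = \sigma = 1$, an assumption for illustration; `GAMMA_DD` below is the dipolar prefactor $\gamma$, not the friction coefficient). It inverts the definition of $\lambda$ for the dipole moment magnitude and recovers $\lambda$ from the head-to-tail close-contact energy:

```python
import numpy as np

KT = 1.0
SIGMA = 1.0
GAMMA_DD = 1.0   # mu_0 / (4 pi) in reduced units
DIP_LAMBDA = 4.0

# invert lambda = gamma * mu^2 / (kT * sigma^3) for the moment magnitude
mu = np.sqrt(DIP_LAMBDA * KT * SIGMA**3 / GAMMA_DD)

def u_dd(r_ij, mu_i, mu_j):
    """Dipole-dipole pair energy for moments mu_i, mu_j at separation r_ij."""
    r = np.linalg.norm(r_ij)
    return GAMMA_DD * (np.dot(mu_i, mu_j) / r**3
                       - 3.0 * np.dot(mu_i, r_ij) * np.dot(mu_j, r_ij) / r**5)

# head-to-tail at close contact: both moments parallel to the connecting vector
m = np.array([mu, 0.0, 0.0])
r = np.array([SIGMA, 0.0, 0.0])
print(abs(u_dd(r, m, m)) / 2.0 / KT)  # recovers DIP_LAMBDA
```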
Structure of this tutorial
The aim of this tutorial is to introduce the basic features of ESPResSo for ferrofluids or dipolar fluids in general. In part I and part II we will do this for a monolayer-ferrofluid, in part III for a three dimensional system. In part I we will examine the clusters which are present in all interesting ferrofluid systems. In part II we will examine the influence of the dipole-dipole-interaction on the magnetization curve of a ferrofluid. In part III we calculate estimators for the initial susceptibility using fluctuation formulas and sample the magnetization curve.
We assume the reader is familiar with the basic concepts of Python and MD simulations.
Remark: The equilibration and sampling times used in this tutorial would not be sufficient for scientific purposes, but they are long enough to get at least a qualitative insight into the behaviour of ferrofluids. They have been shortened so we achieve reasonable computation times for the purpose of a tutorial.
Compiling ESPResSo for this Tutorial
For this tutorial the following features of ESPResSo are needed
```c++
#define EXTERNAL_FORCES
#define ROTATION
#define DIPOLES
#define LENNARD_JONES
```
Please uncomment them in the <tt>myconfig.hpp</tt> and compile ESPResSo using this <tt>myconfig.hpp</tt>.
A Monolayer-Ferrofluid System in ESPResSo
For interesting ferrofluid systems, where the fraction of ferromagnetic particles in the liquid carrier and their dipole moment are not vanishingly small, the ferromagnetic particles form clusters of different shapes and sizes. If the fraction and/or dipole moments are big enough the clusters can interconnect with each other and form a whole space occupying network.
In this part we want to investigate the number of clusters as well as their shape and size in our simulated monolayer ferrofluid system. It should be noted that a monolayer is a quasi three dimensional system (q2D), i.e. two dimensional for the positions and three dimensional for the orientation of the dipole moments.
Setup
We start with checking for the presence of ESPResSo features and importing all necessary packages.
End of explanation
"""
# Lennard-Jones parameters
LJ_SIGMA = 1
LJ_EPSILON = 1
LJ_CUT = 2**(1. / 6.) * LJ_SIGMA
# Particles
N_PART = 1200
# Area fraction of the mono-layer
PHI = 0.1
# Dipolar interaction parameter lambda = mu_0 m^2 /(4 pi sigma^3 KT)
DIP_LAMBDA = 4
# Temperature
KT = 1.0
# Friction coefficient
GAMMA = 1.0
# Time step
TIME_STEP = 0.01
"""
Explanation: Now we set up all simulation parameters.
End of explanation
"""
# System setup
# BOX_SIZE = ...
print("Box size", BOX_SIZE)
# Note that the dipolar P3M and dipolar layer correction need a cubic
# simulation box for technical reasons.
system = espressomd.System(box_l=(BOX_SIZE, BOX_SIZE, BOX_SIZE))
system.time_step = TIME_STEP
"""
Explanation: Note that we declared a cut-off radius <tt>LJ_CUT</tt>. This will be used as the cut-off radius of the Lennard-Jones potential to obtain a purely repulsive WCA potential.
Now we set up the system. The length of the simulation box is calculated using the desired area fraction and the area all particles occupy. Then we create the ESPResSo system and pass the simulation time step. For the Verlet list skin parameter we use the built-in tuning algorithm of ESPResSo.
Exercise:
How large does BOX_SIZE have to be for a system of N_PART particles with a volume (area) fraction PHI?
Define BOX_SIZE.
$$
L_{\text{box}} = \sqrt{\frac{N A_{\text{sphere}}}{\varphi}}
$$
```python
BOX_SIZE = (N_PART * np.pi * (LJ_SIGMA / 2.)**2 / PHI)**0.5
```
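As a quick sanity check on this solution (standalone, with the parameter values from this tutorial), the resulting box indeed reproduces the requested area fraction:

```python
import numpy as np

N_PART = 1200
PHI = 0.1
LJ_SIGMA = 1.0

BOX_SIZE = (N_PART * np.pi * (LJ_SIGMA / 2.)**2 / PHI)**0.5
# total area covered by the particle disks divided by the box area gives back PHI
print(N_PART * np.pi * (LJ_SIGMA / 2.)**2 / BOX_SIZE**2)
```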
End of explanation
"""
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=LJ_EPSILON, sigma=LJ_SIGMA, cutoff=LJ_CUT, shift="auto")
"""
Explanation: Now we set up the interaction between the particles as a non-bonded interaction and use the Lennard-Jones potential as the interaction potential. Here we use the above mentioned cut-off radius to get a purely repulsive interaction.
End of explanation
"""
# Random dipole moments
# ...
# dip = ...
# Random positions in the monolayer
pos = BOX_SIZE * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1))))
"""
Explanation: Now we generate random positions and orientations of the particles and their dipole moments.
Hint:
It should be noted that we seed the random number generator of numpy. Thus the initial configuration of our system is the same every time this script will be executed. You can change it to another one to simulate with a different initial configuration.
Exercise:
How does one set up randomly oriented dipole moments?
Hint: Think of the way that different methods could introduce a bias in the distribution of the orientations.
Create a variable dip as a N_PART x 3 numpy array, which contains the randomly distributed dipole moments.
```python
Random dipole moments
np.random.seed(seed=1)
dip_phi = 2. * np.pi * np.random.random((N_PART, 1))
dip_cos_theta = 2 * np.random.random((N_PART, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
```
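As a sanity check of this recipe (our own addition, not part of the exercise): for an unbiased distribution on the unit sphere the $z$-component must be uniform on $[-1, 1]$, so its mean vanishes and its second moment is $1/3$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
phi = 2. * np.pi * rng.random(n)
cos_theta = 2. * rng.random(n) - 1.          # uniform in [-1, 1] -> no polar bias
sin_theta = np.sin(np.arccos(cos_theta))
dip = np.column_stack((sin_theta * np.sin(phi),
                       sin_theta * np.cos(phi),
                       cos_theta))

# all vectors have unit length, and the z-moments match the uniform sphere
print(np.abs(np.mean(dip[:, 2])), np.mean(dip[:, 2]**2))
```

Sampling the polar angle uniformly instead of its cosine would pile orientations up at the poles; the moments above would then come out wrong.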
End of explanation
"""
# Add particles
system.part.add(pos=pos, rotation=N_PART * [(1, 1, 1)], dip=dip, fix=N_PART * [(0, 0, 1)])
"""
Explanation: Now we add the particles with their positions and orientations to our system. Thereby we activate all degrees of freedom for the orientation of the dipole moments. As we want a two dimensional system we only allow the particles to translate in $x$- and $y$-direction and not in $z$-direction by using the <tt>fix</tt> argument.
End of explanation
"""
# Set integrator to steepest descent method
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.05)
"""
Explanation: Be aware that we do not set the magnitude of the magnetic dipole moments to the particles. As in our case all particles have the same dipole moment it is possible to rewrite the dipole-dipole interaction potential to
\begin{equation}
u_{\text{DD}}(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j) = \gamma \mu^2 \left(\frac{\vec{\hat{\mu}}_i \cdot \vec{\hat{\mu}}_j}{r_{ij}^3} - 3\frac{(\vec{\hat{\mu}}_i \cdot \vec{r}_{ij}) \cdot (\vec{\hat{\mu}}_j \cdot \vec{r}_{ij})}{r_{ij}^5}\right)
\end{equation}
where $\vec{\hat{\mu}}_i$ is the unit vector of the dipole moment $i$ and $\mu$ is the magnitude of the dipole moments.
Thus we can only prescribe the initial orientation of the dipole moment to the particles and take the magnitude of the moments into account when calculating the dipole-dipole interaction with Dipolar P3M, by modifying the original Dipolar P3M prefactor $\gamma$ such that
\begin{equation}
\tilde{\gamma} = \gamma \mu^2 = \frac{\mu_0}{4\pi}\mu^2 = \lambda \sigma^3 k_{\text{B}}T
\end{equation}
Of course it would also be possible to prescribe the whole dipole moment vectors to every particle and leave the prefactor of Dipolar P3M unchanged ($\gamma$). In fact we have to do this if we want to simulate polydisperse systems.
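Since all dipole magnitudes live in the prefactor, a tiny stand-alone sketch (plain NumPy, independent of ESPResSo; <tt>dd_potential</tt> is our own name) makes the interaction concrete:

```python
import numpy as np

def dd_potential(mu_hat_i, mu_hat_j, r_vec, prefactor=1.0):
    """Dipole-dipole pair energy for unit orientation vectors mu_hat_i, mu_hat_j
    separated by r_vec; all magnitudes are absorbed into `prefactor`
    (i.e. prefactor = gamma * mu**2)."""
    mu_hat_i = np.asarray(mu_hat_i, dtype=float)
    mu_hat_j = np.asarray(mu_hat_j, dtype=float)
    r_vec = np.asarray(r_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    return prefactor * (np.dot(mu_hat_i, mu_hat_j) / r**3
                        - 3. * np.dot(mu_hat_i, r_vec) * np.dot(mu_hat_j, r_vec) / r**5)

# parallel dipoles side by side repel (+1), head-to-tail they attract (-2),
# and the whole energy scales linearly with the prefactor
print(dd_potential([0, 0, 1], [0, 0, 1], [1, 0, 0]),
      dd_potential([0, 0, 1], [0, 0, 1], [0, 0, 1]))
```

The head-to-tail attraction is what drives the chain formation seen later in the tutorial.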
Now we choose the steepest descent integrator to remove possible overlaps of the particles.
End of explanation
"""
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1)
"""
Explanation: Exercise:
Perform a steepest descent energy minimization.
Track the relative energy change $E_{\text{rel}}$ per minimization loop (where the integrator is run for 10 steps) and terminate once $E_{\text{rel}} \le 0.05$, i.e. once the energy changes by less than 5% between iterations.
```python
import sys
energy = system.analysis.energy()['total']
relative_energy_change = 1.0
while relative_energy_change > 0.05:
system.integrator.run(10)
energy_new = system.analysis.energy()['total']
# Prevent division by zero errors:
if energy < sys.float_info.epsilon:
break
relative_energy_change = (energy - energy_new) / energy
print(f"Minimization, relative change in energy: {relative_energy_change}")
energy = energy_new
```
For the simulation of our system we choose the velocity Verlet integrator.
After that we set up the thermostat which is, in our case, a Langevin thermostat to simulate in an NVT ensemble.
Hint:
It should be noted that we seed the Langevin thermostat, so the time evolution of the system is partly predefined. It is only partly predefined because of finite numeric accuracy and because the automatic tuning algorithms of Dipolar P3M and DLC yield slightly different parameters on every run. You can change the seed to get a guaranteed different time evolution.
End of explanation
"""
CI_DP3M_PARAMS = {} # debug variable for continuous integration, can be left empty
# Setup dipolar P3M and dipolar layer correction
dp3m = espressomd.magnetostatics.DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT, **CI_DP3M_PARAMS)
dlc = espressomd.magnetostatic_extensions.DLC(maxPWerror=1E-4, gap_size=BOX_SIZE - LJ_SIGMA)
system.actors.add(dp3m)
system.actors.add(dlc)
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# print skin value
print('tuned skin = {}'.format(system.cell_system.skin))
"""
Explanation: To calculate the dipole-dipole interaction we use the Dipolar P3M method (see Ref. <a href='#[1]'>[1]</a>) which is based on the Ewald summation. By default the boundary conditions of the system are set to conducting which means the dielectric constant is set to infinity for the surrounding medium. As we want to simulate a two dimensional system we additionally use the dipolar layer correction (DLC) (see Ref. <a href='#[2]'>[2]</a>). As we add <tt>DipolarP3M</tt> to our system as an actor, a tuning function is started automatically which tries to find the optimal parameters for Dipolar P3M and prints them to the screen. The last line of the output is the value of the tuned skin.
End of explanation
"""
# Equilibrate
print("Equilibration...")
EQUIL_ROUNDS = 20
EQUIL_STEPS = 1000
for i in range(EQUIL_ROUNDS):
system.integrator.run(EQUIL_STEPS)
print(
f"progress: {(i + 1) * 100. / EQUIL_ROUNDS}%, dipolar energy: {system.analysis.energy()['dipolar']}",
end="\r")
print("\nEquilibration done")
"""
Explanation: Now we equilibrate the dipole-dipole interaction for some time
End of explanation
"""
LOOPS = 100
"""
Explanation: Sampling
The system will be sampled over 100 loops.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import tempfile
import base64
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with tempfile.NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
with open(f.name, "rb") as g:
video = g.read()
anim._encoded_video = base64.b64encode(video).decode('ascii')
plt.close(anim._fig)
return VIDEO_TAG.format(anim._encoded_video)
animation.Animation._repr_html_ = anim_to_html
def init():
# Set x and y range
ax.set_ylim(0, BOX_SIZE)
ax.set_xlim(0, BOX_SIZE)
x_data, y_data = [], []
part.set_data(x_data, y_data)
return part,
"""
Explanation: As the system is two dimensional, we can simply do a scatter plot to get a visual representation of a system state. To get a better insight into how a ferrofluid system develops over time, we will create a video of the development of our system during the sampling. If you only want to sample the system, simply go to Sampling without animation.
Sampling with animation
To get an animation of the system development we have to create a function which will save the video and embed it in an html string.
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 10))
part, = ax.plot([], [], 'o')
animation.FuncAnimation(fig, run, frames=LOOPS, blit=True, interval=0, repeat=False, init_func=init)
"""
Explanation: Exercise:
In the following an animation loop is defined, however it is incomplete.
Extend the code such that in every loop the system is integrated for 100 steps.
Afterwards x_data and y_data have to be populated by the folded $x$- and $y$- positions of the particles.
(You may copy and paste the incomplete code template to the empty cell below.)
```python
def run(i):
# < exercise >
# Save current system state as a plot
x_data, y_data = # < exercise >
ax.figure.canvas.draw()
part.set_data(x_data, y_data)
print("progress: {:3.0f}%".format((i + 1) * 100. / LOOPS), end="\r")
return part,
```
```python
def run(i):
system.integrator.run(100)
# Save current system state as a plot
x_data, y_data = system.part.all().pos_folded[:, 0], system.part.all().pos_folded[:, 1]
ax.figure.canvas.draw()
part.set_data(x_data, y_data)
print("progress: {:3.0f}%".format((i + 1) * 100. / LOOPS), end="\r")
return part,
```
Now we use the <tt>animation</tt> class of <tt>matplotlib</tt> to save snapshots of the system as frames of a video which is then displayed after the sampling is finished. Between two frames are 100 integration steps.
In the video chain-like and ring-like clusters should be visible, as well as some isolated monomers.
End of explanation
"""
n_clusters = []
cluster_sizes = []
"""
Explanation: Cluster analysis
To quantify the number of clusters and their respective sizes, we now want to perform a cluster analysis.
For that we can use ESPResSo's cluster analysis class.
Exercise:
Setup a cluster analysis object (ClusterStructure class) and assign its instance to the variable cluster_structure.
As criterion for the cluster analysis use a distance criterion: two particles are assumed to be part of the same cluster if they are closer to each other than $1.3\sigma_{\text{LJ}}$.
```python
Setup cluster analysis
cluster_structure = espressomd.cluster_analysis.ClusterStructure(pair_criterion=espressomd.pair_criteria.DistanceCriterion(cut_off=1.3 * LJ_SIGMA))
```
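Under the hood such a distance criterion defines a graph: particles are linked whenever they are closer than the cut-off, and a cluster is a connected component of that graph. A minimal, ESPResSo-independent sketch (our own helper name, brute-force over pairs, no periodic boundaries):

```python
import numpy as np

def cluster_sizes_by_distance(pos, cutoff):
    """Return the sizes (largest first) of connected components under a
    pairwise distance criterion, via a small union-find."""
    n = len(pos)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                parent[find(i)] = find(j)  # merge the two components

    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)

pos = np.array([[0., 0.], [1., 0.], [2., 0.],   # a chain of three
                [10., 0.], [10.9, 0.],          # a dimer
                [20., 20.]])                    # a monomer
print(cluster_sizes_by_distance(pos, cutoff=1.3))
```

ESPResSo's <tt>ClusterStructure</tt> does the same grouping far more efficiently and with periodic boundaries taken into account.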
Now we sample our system for some time and do a cluster analysis in order to get an estimator of the cluster observables.
For the cluster analysis we create two empty lists. The first for the number of clusters and the second for their respective sizes.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.xlim(0, BOX_SIZE)
plt.ylim(0, BOX_SIZE)
plt.xlabel('x-position', fontsize=20)
plt.ylabel('y-position', fontsize=20)
plt.plot(system.part.all().pos_folded[:, 0], system.part.all().pos_folded[:, 1], 'o')
plt.show()
"""
Explanation: Sampling without animation
The following code just samples the system and runs a cluster analysis every 100 integration steps, for <tt>LOOPS</tt> (100 by default) iterations.
Exercise:
Write an integration loop which runs a cluster analysis on the system, saving the number of clusters n_clusters and the size distribution cluster_sizes.
Take the following as a starting point:
```python
for i in range(LOOPS):
# Run cluster analysis
cluster_structure.run_for_all_pairs()
# Gather statistics:
n_clusters.append(# < exercise >)
for c in cluster_structure.clusters:
cluster_sizes.append(# < exercise >)
system.integrator.run(100)
print("progress: {:3.0f}%".format((float(i)+1)/LOOPS * 100), end="\r")
```
```python
for i in range(LOOPS):
# Run cluster analysis
cluster_structure.run_for_all_pairs()
# Gather statistics:
n_clusters.append(len(cluster_structure.clusters))
for c in cluster_structure.clusters:
cluster_sizes.append(c[1].size())
system.integrator.run(100)
print("progress: {:3.0f}%".format((float(i) + 1) / LOOPS * 100), end="\r")
```
You may want to get a visualization of the current state of the system. For that we plot the particle positions folded to the simulation box using <tt>matplotlib</tt>.
End of explanation
"""
plt.figure(figsize=(10, 10))
plt.grid()
plt.xticks(range(0, 20))
plt.plot(size_dist[1][:-2], size_dist[0][:-1] / float(LOOPS))
plt.xlabel('size of clusters', fontsize=20)
plt.ylabel('distribution', fontsize=20)
plt.show()
"""
Explanation: In the plot chain-like and ring-like clusters should be visible. Some of them are connected via Y- or X-links to each other. Also some monomers should be present.
Cluster distribution
After having sampled our system we now can calculate estimators for the expectation value of the cluster sizes and their distribution.
Exercise:
Use numpy to calculate a histogram of the cluster sizes and assign it to the variable size_dist.
Take only clusters up to a size of 19 particles into account.
Hint: In order not to count clusters with size 20 or more, one may include an additional bin containing these.
The reason for that is that numpy defines the histogram bins as half-open intervals with the open border at the upper bin edge; only the very last bin is closed on both sides.
Consequently clusters of size 20 are attributed to the last bin, while sizes beyond the histogram range are ignored altogether.
By not using the last bin in the plot below, these clusters can effectively be neglected.
python
size_dist = np.histogram(cluster_sizes, range=(2, 21), bins=19)
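A quick standalone check of the bin convention described in the hint (our own, not part of the exercise):

```python
import numpy as np

# 19 bins of width 1 spanning [2, 21]; only the last bin [20, 21] is closed
counts, edges = np.histogram([5, 20, 25], range=(2, 21), bins=19)

# size 5 lands in bin [5, 6); size 20 lands in the last bin; size 25 falls
# outside the range and is silently dropped, so excluding the last bin in
# the plot removes all clusters of size >= 20
print(counts[3], counts[-1], counts.sum())
```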
Now we can plot this histogram. We should see an exponentially decreasing number of clusters as a function of the cluster size, i.e. of the number of monomers a cluster contains.
End of explanation
"""
# repo: KMFleischer/PyEarthScience — Visualization/miscellaneous/create_street_maps_from_geolocations.ipynb (MIT license)
from geopy.geocoders import Nominatim
import folium
"""
Explanation: Retrieve geo-locations, create maps with markers and popups
Use OpenStreetMap data and the DKRZ logo.
<br>
geopy - Python client for several popular geocoding web services
folium - visualization tool for maps
<br>
End of explanation
"""
geolocator = Nominatim(user_agent='any_agent')
"""
Explanation: <br>
Use Nominatim geocoder for OpenStreetMap data.
<br>
End of explanation
"""
location = geolocator.geocode('Hamburg')
print(location.address)
print((location.latitude, location.longitude))
print(location.raw)
"""
Explanation: <br>
Retrieve the geo-location of the given address.
<br>
End of explanation
"""
m = folium.Map(location=[location.latitude, location.longitude])
"""
Explanation: <br>
Create the map with the retrieved location.
<br>
End of explanation
"""
display(m)
"""
Explanation: <br>
Display the map in the notebook.
<br>
End of explanation
"""
tooltip = location.latitude, location.longitude
folium.Marker([location.latitude, location.longitude], tooltip=tooltip).add_to(m)
display(m)
"""
Explanation: <br>
Set marker at the center of the city.
<br>
End of explanation
"""
m = folium.Map(location=[location.latitude, location.longitude], zoom_start=12, zoom_control=False)
display(m)
"""
Explanation: <br>
Zoom in.
<br>
End of explanation
"""
dkrz_location = geolocator.geocode('Bundesstrasse 45a, Hamburg, Germany', language='en')
print(dkrz_location.address)
"""
Explanation: <br>
Retrieve the location data of the DKRZ. Set Marker type.
<br>
End of explanation
"""
dkrz_map = folium.Map(location=[dkrz_location.latitude, dkrz_location.longitude], zoom_start=16, zoom_control=False)
tooltip = dkrz_location.latitude, dkrz_location.longitude
popup_name = 'Deutsches Klimarechenzentrum GmbH'
folium.Marker([dkrz_location.latitude, dkrz_location.longitude], popup=popup_name, icon=folium.Icon(icon="cloud"),).add_to(dkrz_map)
display(dkrz_map)
"""
Explanation: <br>
Locate DKRZ on map
<br>
End of explanation
"""
from folium import IFrame
import base64
width, height = 700, 700
f = folium.Figure(width=width, height=height)
dkrz_map = folium.Map(location=[dkrz_location.latitude, dkrz_location.longitude],
zoom_start=16, zoom_control=False,
width=width, height=height).add_to(f)
png = 'DKRZ_Logo_plus_text_small.png'
encoded = base64.b64encode(open(png, 'rb').read())
html = '<img src="data:image/png;base64,{}">'.format
iframe = IFrame(html(encoded.decode('UTF-8')), width=200+20, height=100+20)
popup = folium.Popup(iframe, max_width=2650)
icon = folium.Icon(color='blue', icon='cloud')
marker = folium.Marker(location=[dkrz_location.latitude, dkrz_location.longitude], popup=popup, icon=icon)
marker.add_to(dkrz_map)
display(dkrz_map)
"""
Explanation: <br>
Display DKRZ logo as marker popup.
<br>
End of explanation
"""
from functools import partial
geocode = partial(geolocator.geocode, language='es')
print(geocode('london'))
reverse = partial(geolocator.reverse, language='es')
print(reverse('52.509669, 13.376294'))
"""
Explanation: <br>
Retrieve location information in a different language.
<br>
End of explanation
"""
from geopy import distance
newport_ri = (41.49008, -71.312796)
cleveland_oh = (41.499498, -81.695391)
print(distance.distance(newport_ri, cleveland_oh).miles)
wellington = (-41.32, 174.81)
salamanca = (40.96, -5.50)
print(distance.distance(wellington, salamanca).km)
"""
Explanation: <br>
Calculate distances
<br>
End of explanation
"""
print(distance.great_circle(newport_ri, cleveland_oh).km)
"""
Explanation: <br>
Using great circle distance
<br>
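For illustration, the great-circle distance can be written out by hand with the haversine formula. This is our own sketch (not part of geopy) using the mean Earth radius that we believe geopy's <tt>great_circle</tt> also uses, and it closely reproduces the value above:

```python
import math

EARTH_RADIUS_KM = 6371.009  # mean Earth radius (assumed to match geopy's default)

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

newport_ri = (41.49008, -71.312796)
cleveland_oh = (41.499498, -81.695391)
print(haversine_km(newport_ri, cleveland_oh))  # roughly 864 km
```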
End of explanation
"""
ne, cl = newport_ri, cleveland_oh
print(distance.geodesic(ne, cl, ellipsoid='GRS-80').km)
"""
Explanation: Change the ellispoid
<pre>
model major (km) minor (km) flattening
ELLIPSOIDS = {'WGS-84': (6378.137, 6356.7523142, 1 / 298.257223563),
'GRS-80': (6378.137, 6356.7523141, 1 / 298.257222101),
'Airy (1830)': (6377.563396, 6356.256909, 1 / 299.3249646),
'Intl 1924': (6378.388, 6356.911946, 1 / 297.0),
'Clarke (1880)': (6378.249145, 6356.51486955, 1 / 293.465),
'GRS-67': (6378.1600, 6356.774719, 1 / 298.25),
}
</pre>
End of explanation
"""
# repo: AhmetHamzaEmra/Deep-Learning-Specialization-Coursera — Improving Deep Neural Networks/Regularization.ipynb (MIT license)
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
"""
Explanation: Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to: Use regularization in your deep learning models.
Let's first import the packages you are going to use.
End of explanation
"""
train_X, train_Y, test_X, test_Y = load_2D_dataset()
"""
Explanation: Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
End of explanation
"""
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
"""
Explanation: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
- in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement:
- L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
- Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
End of explanation
"""
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
"""
Explanation: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
End of explanation
"""
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
"""
Explanation: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
End of explanation
"""
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = sum([np.sum(np.square(W1)),np.sum(np.square(W2)),np.sum(np.square(W3))])*(1/m)*(lambd/2)
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
"""
Explanation: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
End of explanation
"""
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
End of explanation
"""
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The model() function will call:
- compute_cost_with_regularization instead of compute_cost
- backward_propagation_with_regularization instead of backward_propagation
End of explanation
"""
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
"""
Explanation: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
End of explanation
"""
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1 # Step 3: shut down some neurons of A1
A1 /= keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 /= keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
"""
Explanation: Observations:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
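This "weight decay" effect can be verified numerically: a gradient descent step on the regularized cost is identical to first shrinking $W$ by a factor $(1 - \frac{\alpha\lambda}{m})$ and then taking the plain gradient step (a standalone sketch with made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
dW = rng.standard_normal((3, 4))   # gradient of the unregularized cost
lr, lambd, m = 0.1, 0.7, 50

# step on the regularized cost: total gradient is dW + (lambd / m) * W
W_reg_step = W - lr * (dW + (lambd / m) * W)

# equivalent view: first decay W by a factor slightly below 1, then plain step
W_decay_step = (1 - lr * lambd / m) * W - lr * dW

print(np.allclose(W_reg_step, W_decay_step))  # True
```

The decay factor here is $1 - 0.1 \cdot 0.7 / 50 \approx 0.9986$, so every iteration nudges the weights toward zero on top of the usual update.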
<font color='blue'>
What you should remember -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
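To make the cost computation concrete, here is a rough numpy sketch (the helper name `cost_with_l2` and the example weight matrices and values of $\lambda$ and $m$ below are made up for illustration; this is not the assignment's graded function):

```python
import numpy as np

def cost_with_l2(cross_entropy_cost, weight_matrices, lambd, m):
    # L2 term: lambda / (2m) * sum of squared entries over all weight matrices
    l2_term = (lambd / (2.0 * m)) * sum(np.sum(np.square(W)) for W in weight_matrices)
    return cross_entropy_cost + l2_term

# Hypothetical example: 8 weights of 1.0, lambda = 0.7, m = 5 examples
W1, W2 = np.ones((2, 3)), np.ones((1, 2))
print(cost_with_l2(1.0, [W1, W2], lambd=0.7, m=5))  # 1.0 + 0.7 / (2 * 5) * 8 = 1.56
```

Because the penalty grows with the squared weights, gradient descent on this cost pushes every $W^{[l]}$ toward smaller values, which is exactly the "weight decay" effect.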
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
Instructions:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if the entry is less than 0.5) or 0 (otherwise), you would do: X = (X < 0.5). Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
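The 4 steps above can be checked in a small standalone sketch (the activation matrix A below is made up, not the assignment's data) to see why Step 4, the inverted dropout scaling, keeps the expected value of the activations roughly unchanged:

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8
A = np.ones((4, 100000))                    # hypothetical activations, mean 1.0

D = np.random.rand(A.shape[0], A.shape[1])  # Step 1: random values in [0, 1)
D = D < keep_prob                           # Step 2: threshold to 0/1 with keep_prob
A_dropped = A * D                           # Step 3: shut down ~20% of the neurons
A_dropped = A_dropped / keep_prob           # Step 4: inverted dropout scaling

print(A.mean())                    # 1.0
print(round(A_dropped.mean(), 2))  # stays close to 1.0: expected value preserved
```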
End of explanation
"""
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
3.2 - Backward propagation with dropout
Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
Instruction:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
End of explanation
"""
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (keep_prob = 0.86). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function model() will now call:
- forward_propagation_with_dropout instead of forward_propagation.
- backward_propagation_with_dropout instead of backward_propagation.
End of explanation
"""
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
"""
Explanation: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
End of explanation
"""
|
feststelltaste/software-analytics | notebooks/Calculating the Structural Similarity of Test Cases.ipynb | gpl-3.0 | import pandas as pd
invocations = pd.read_csv("datasets/test_code_invocations.csv", sep=";")
invocations.head()
"""
Explanation: Introduction
This blog is a three-part series. See part 1 for retrieving the dataset and part 3 (upcoming) for visualization.
In big, old legacy systems, tests are often a mess. Especially end-to-end tests with UI testing frameworks like Selenium quickly become a PITA aka unmaintainable. They run slowly, and you quickly get overwhelmed by plenty of tests that partly do the same thing.
In this data analysis, I want to illustrate a way that can take us out of this misery. We want to spot test cases that are structurally very similar and thus can be seen as duplicates. We'll calculate the similarity between tests based on their invocations of production code. We can achieve this by treating our software data as observations of linear features. This opens up ways for us to leverage existing mathematical techniques like vector distance calculation (as we'll see in this post) as well as machine learning techniques like multidimensional scaling or clustering (in a follow-up post).
As the software data under analysis, we'll use the JUnit tests of a Java application to demonstrate the approach. We want to figure out if there are test cases that exercise production code that other, more dedicated tests already cover. With the result, we might be able to delete some superfluous test cases (and always remember: less code is good code, no code is best :-)).
Reality check
The real use case originates from a software system with a massive amount of Selenium end-to-end-tests that uses the Page Object pattern. Each page object represents one HTML site of a web application. Technically, a page object exposes methods in the programming language you use that enables the interaction with websites programmatically. In such a scenario, you can infer which tests are calling the same websites and are triggering the same set of UI components (like buttons). This is a good estimator for test cases that test the same use cases in the application. We can use the results of such an analysis to find repeating test scenarios.
Dataset
I'm using a dataset that I've created in a previous blog post with jQAssistant. It shows which test methods call which code in the application (the "production code"). It's a pure static and structural view of our code, but can be very helpful as we'll see shortly.
Note: There are also other ways to get these kinds of information e. g. by mining the log file of a test execution (this would even add real runtime information as well). But for the demonstration of the general approach, the pure static and structural information between the test code and our production code is sufficient.
First, we read in the data with Pandas – my favorite data analysis framework for getting things easily done.
End of explanation
"""
invocation_matrix = invocations.pivot_table(
index=['test_type', 'test_method'],
columns=['prod_type', 'prod_method'],
values='invocations',
fill_value=0
)
# show interesting parts of results
invocation_matrix.iloc[4:8,4:6]
"""
Explanation: What we've got here are
* all names of our test types (test_type) and production types (prod_type)
* the signatures of the test methods (test_method) and production methods (prod_method)
* the number of calls from the test methods to the production methods (invocations).
Analysis
OK, let's do some actual work! We want
* to calculate the structural similarity of test cases
* to spot possible duplications of tests
to figure out which test cases are superfluous (and can be deleted).
What we have are all tests cases (aka test methods) and their calls to the production code base (= the production methods). We can transform this data to a matrix representation that shows which test method triggers which production method by using Pandas' pivot_table function on our invocations DataFrame.
End of explanation
"""
from sklearn.metrics.pairwise import cosine_distances
distance_matrix = cosine_distances(invocation_matrix)
# show some interesting parts of results
distance_matrix[81:85,60:62]
"""
Explanation: What we've got now is the information for each invocation (or non-invocation) of test methods to production methods. In mathematical words, we've now got an n-dimensional vector for each test method, where n is the number of tested production methods in our code base. That means we've just transformed our software data into a representation that lets us work with standard Data Science tooling :-D! All further problem-solving techniques in this area can now be reused by us.
And this is exactly what we do now in our further analysis: we've reduced our problem to a distance calculation between vectors (we use distance instead of similarity because the visualization techniques used later work with distances). For this, we can use the cosine_distances function (see this article for the mathematical background) of the machine learning library scikit-learn to calculate a pair-wise distance matrix between the test methods aka linear features.
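To build some intuition for what cosine_distances returns, here is a tiny sketch with made-up two-dimensional invocation vectors (imagine a code base with only two production methods); it computes the same pair-wise quantity, 1 minus the cosine of the angle between two vectors:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - (u . v) / (||u|| * ||v||), i.e. 1 - cosine similarity
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 0.0])  # test calls only production method 1
b = np.array([0.0, 1.0])  # test calls only production method 2
c = np.array([2.0, 0.0])  # same call profile as a, just more invocations

print(cosine_distance(a, b))  # 1.0 -> completely dissimilar
print(cosine_distance(a, c))  # 0.0 -> identical profile (scale-invariant)
```

The scale-invariance is handy here: a test that calls the same production methods twice as often still counts as structurally identical.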
End of explanation
"""
distance_df = pd.DataFrame(distance_matrix, index=invocation_matrix.index, columns=invocation_matrix.index)
# show some interesting parts of results
distance_df.iloc[81:85,60:62]
"""
Explanation: From this result, we create a DataFrame to get a better visual representation of the data.
End of explanation
"""
invocations[
(invocations.test_method == "void readRoundtripWorksWithFullData()") |
(invocations.test_method == "void postCommentActuallyCreatesComment()")]
"""
Explanation: You can find the complete DataFrame as an Excel file as well (~0.5 MB). It shows all dissimilarities between test cases based on the static calls to production code and looks something like this:
Can you already spot some clusters? We'll have a detailed look at that in the next blog post ;-)!
Discussion
Let's have a look at what we've achieved by discussing some of the results. We compare the actual source code of the test method readRoundtripWorksWithFullData() from the test class CommentGatewayTest
java
@Test
public void readRoundtripWorksWithFullData() {
createDefaultComment();
assertEquals(1, commentGateway.read(SITE_NAME).size());
checkDefaultComment(commentGateway.read(SITE_NAME).get(0));
}
with the test method postCommentActuallyCreatesComment() of another test class, CommentsResourceTest
java
@Test
public void postCommentActuallyCreatesComment() {
this.client.path("sites/sitewith3comments/comments").accept(...
Assert.assertEquals(4L, (long)this.commentGateway.read("sitewith3comments").size());
Assert.assertEquals("comment3", ((Comment)this.commentGateway.read("sitewith3comments").get(3)).getContent());
}
Although both classes represent different test levels (unit vs. integration test), they share some similarities (with ~0.1 dissimilarity aka ~90% similar calls to production methods). We can see exactly which invoked production methods are part of both test cases by filtering the original invocations DataFrame for those methods.
End of explanation
"""
invocations[
(invocations.test_method == "void readRoundtripWorksWithFullData()") |
(invocations.test_method == "void postTwiceCreatesTwoElements()")]
"""
Explanation: We see that both test methods share calls to the production method read(...), but differ in the call to getContent() in the class Comment, because only the test method postCommentActuallyCreatesComment() of CommentsResourceTest invokes it.
We can repeat this discussion for another method named postTwiceCreatesTwoElements() in the test class CommentsResourceTest:
java
public void postTwiceCreatesTwoElements() {
this.client.path("sites/sitewith3comments/comments").accept(...
this.client.path("sites/sitewith3comments/comments").accept(...
Assert.assertEquals(5L, (long)comments.size());
Assert.assertEquals("comment1", ((Comment)comments.get(0)).getContent());
Assert.assertEquals("comment2", ((Comment)comments.get(1)).getContent());
Assert.assertEquals("comment3", ((Comment)comments.get(2)).getContent());
Assert.assertEquals("comment4", ((Comment)comments.get(3)).getContent());
Assert.assertEquals("comment5", ((Comment)comments.get(4)).getContent());
Although the test method is a little awkward (with all those subsequent getContent() calls), we can see a slight similarity of ~20%. Here are details on the production method calls as well:
End of explanation
"""
invocations[
(invocations.test_method == "void readRoundtripWorksWithFullData()") |
(invocations.test_method == "void keyWorks()")]
"""
Explanation: Both test methods invoke the read(...) method, but only postTwiceCreatesTwoElements() calls getContent(), and it does so five times. This explains the dissimilarity between the two test methods.
In contrast, we can have a look at the method void keyWorks() from the test class ConfigurationFileTest, which has absolutely nothing to do (= dissimilarity 1.0) with the method readRoundtripWorksWithFullData() or the underlying calls to the production code.
java
@Test
public void keyWorks() {
assertEquals("InMemory", config.get("gateway.type"));
}
Looking at the corresponding invocation data, we see that there are no common uses of production methods.
End of explanation
"""
|
ReactiveX/RxPY | notebooks/reactivex.io/Part VIII - Hot & Cold.ipynb | mit | rst(O.publish)
def emit(obs):
log('.........EMITTING........')
sleep(0.1)
obs.on_next(rand())
obs.on_completed()
rst(title='Reminder: 2 subscribers on a cold stream:')
s = O.create(emit)
d = subs(s), subs(s.delay(100))
rst(title='Now 2 subscribers on a PUBLISHED (hot) stream', sleep=0.4)
sp = s.publish()
subs(sp, name='subs1')
subs(sp.delay(100), name='subs2')
log('now connect')
# this creates a 'single, intermediate subscription between stream and subs'
d = sp.connect()
# will only see the finish, since subscribed too late
d = subs(sp, name='subs3')
rst(O.publish_value)
def sideeffect(*x):
log('sideffect', x)
print('Everybody gets the initial value and the events, sideeffect only once per ev')
src = O.interval(500).take(20).do_action(sideeffect)
published = src.publish_value(42)
subs(published), subs(published.delay(100))
d = published.connect()
sleep(1.3)
log('disposing now')
d.dispose()
"""
Explanation: A Decision Tree of Observable Operators
Part 8: Hot and Cold Observables
source: http://reactivex.io/documentation/operators.html#tree.
(transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros)
This tree can help you find the ReactiveX Observable operator you’re looking for.
See Part 1 for Usage and Output Instructions.
We also require acquaintance with the marble diagrams feature of RxPy.
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
I want an Observable that does not start emitting items to subscribers until asked publish, publish_value, multicast, let/let_bind
This is basically multicast.
End of explanation
"""
# not yet in RXPy
"""
Explanation: ... and then only emits the last item in its sequence publish_last
End of explanation
"""
rst(O.multicast)
# show actions on intermediate subject:
show = False
def emit(obs):
'instead of range we allow some logging:'
for i in (1, 2):
v = rand()
log('emitting', v)
obs.on_next(v)
log('complete')
obs.on_completed()
class MySubject:
def __init__(self):
self.rx_subj = Subject()
if show:
log('New Subject %s created' % self)
def __str__(self):
return str(hash(self))[-4:]
def __getattr__(self, a):
'called at any attr. access, logging it'
if not a.startswith('__') and show:
log('RX called', a, 'on MySub\n')
return getattr(self.rx_subj, a)
subject1 = MySubject()
subject2 = MySubject()
source = O.create(emit).multicast(subject2)
# a "subscription" *is* a disposable
# (the normal d we return all the time):
d, observer = subs(source, return_subscriber=True)
ds1 = subject1.subscribe(observer)
ds2 = subject2.subscribe(observer)
print ('we have now 3 subscriptions, only two will see values.')
print('start multicast stream (calling connect):')
connected = source.connect()
d.dispose()
rst(O.let)
# show actions on intermediate subject:
show = True
def emit(obs):
'instead of range we allow some logging:'
v = rand()
log('emitting', v)
obs.on_next(v)
log('complete')
obs.on_completed()
source = O.create(emit)
# following the RXJS example:
header("without let")
d = subs(source.concat(source))
d = subs(source.concat(source))
header("now with let")
d = subs(source.let(lambda o: o.concat(o)))
d = subs(source.let(lambda o: o.concat(o)))
# TODO: Not understood:
# "This operator allows for a fluent style of writing queries that use the same sequence multiple times."
# ... I can't verify this, the source sequence is not duplicated but called every time like a cold obs.
"""
Explanation: ... via multicast
RxPY also has a multicast operator which operates on an ordinary Observable, multicasts that Observable by means of a particular Subject that you specify, applies a transformative function to each emission, and then emits those transformed values as its own ordinary Observable sequence.
Each subscription to this new Observable will trigger a new subscription to the underlying multicast Observable.
Following the RXJS example at reactive.io docu:
End of explanation
"""
rst(O.replay)
def emit(obs):
'continuous emission'
for i in range(0, 5):
v = 'nr %s, value %s' % (i, rand())
log('emitting', v, '\n')
obs.on_next(v)
sleep(0.2)
def sideeffect(*v):
log("sync sideeffect (0.2s)", v, '\n')
sleep(0.2)
log("end sideeffect", v, '\n')
def modified_stream(o):
log('modified_stream (take 2)')
return o.map(lambda x: 'MODIFIED FOR REPLAY: %s' % x).take(2)
header("playing and replaying...")
subject = Subject()
cold = O.create(emit).take(3).do_action(sideeffect)
assert not getattr(cold, 'connect', None)
hot = cold.multicast(subject)
connect = hot.connect # present now.
#d, observer = subs(hot, return_subscriber=True, name='normal subscriber\n')
#d1 = subject.subscribe(observer)
published = hot.replay(modified_stream, 1000, 50000)
d2 = subs(published, name='Replay Subs 1\n')
#header("replaying again")
#d = subs(published, name='Replay Subs 2\n')
log('calling connect now...')
d3 = hot.connect()
"""
Explanation: ... and then emits the complete sequence, even to those who subscribe after the sequence has begun replay
A connectable Observable resembles an ordinary Observable, except that it does not begin emitting items when it is subscribed to, but only when the Connect operator is applied to it. In this way you can prompt an Observable to begin emitting items at a time of your choosing.
End of explanation
"""
def mark(x):
return 'marked %x' % x
def side_effect(x):
log('sideeffect %s\n' % x)
for i in 1, 2:
s = O.interval(100).take(3).do_action(side_effect)
if i == 2:
sleep(1)
header("now with publish - no more sideeffects in the replays")
s = s.publish()
reset_start_time()
published = s.replay(lambda o: o.map(mark).take(3).repeat(2), 3)
d = subs(s, name='Normal\n')
d = subs(published, name='Replayer A\n')
d = subs(published, name='Replayer B\n')
if i == 2:
d = s.connect()
"""
Explanation: If you apply the Replay operator to an Observable
before you convert it into a connectable Observable,
the resulting connectable Observable will always emit the same complete sequence to any future observers,
even those observers that subscribe after the connectable Observable has begun to emit items to other subscribed observers(!)
End of explanation
"""
rst(O.interval(1).publish)
publ = O.interval(1000).take(2).publish().ref_count()
# be aware about potential race conditions here
subs(publ)
subs(publ)
rst(O.interval(1).share)
def sideffect(v):
log('sideeffect %s\n' % v)
publ = O.interval(200).take(2).do_action(sideeffect).share()
'''
When the number of observers subscribed to published observable goes from
0 to 1, we connect to the underlying observable sequence.
published.subscribe(createObserver('SourceA'));
When the second subscriber is added, no additional subscriptions are added to the
underlying observable sequence. As a result the operations that result in side
effects are not repeated per subscriber.
'''
subs(publ, name='SourceA')
subs(publ, name='SourceB')
"""
Explanation: ... but I want it to go away once all of its subscribers unsubscribe ref_count, share
A connectable Observable resembles an ordinary Observable, except that it does not begin emitting items when it is subscribed to, but only when the Connect operator is applied to it. In this way you can prompt an Observable to begin emitting items at a time of your choosing.
The RefCount operator automates the process of connecting to and disconnecting from a connectable Observable. It operates on a connectable Observable and returns an ordinary Observable. When the first observer subscribes to this Observable, RefCount connects to the underlying connectable Observable. RefCount then keeps track of how many other observers subscribe to it and does not disconnect from the underlying connectable Observable until the last observer has done so.
End of explanation
"""
rst(O.interval(1).publish().connect)
published = O.create(emit).publish()
def emit(obs):
for i in range(0, 10):
log('emitting', i, obs.__class__.__name__, hash(obs))
# going nowhere
obs.on_next(i)
sleep(0.1)
import thread
thread.start_new_thread(published.connect, ())
sleep(0.5)
d = subs(published, scheduler=new_thread_scheduler)
"""
Explanation: ... and then I want to ask it to start connect
You can use the publish operator to convert an ordinary Observable into a ConnectableObservable.
Call a ConnectableObservable’s connect method to instruct it to begin emitting the items from its underlying Observable to its Subscribers.
<img src="./assets/img/publishConnect.png" width="400px">
The connect method returns a Disposable. You can call that Disposable object’s dispose method to instruct the Observable to stop emitting items to its Subscribers.
You can also use the connect method to instruct an Observable to begin emitting items (or, to begin generating items that would be emitted) even before any Subscriber has subscribed to it.
In this way you can turn a cold Observable into a hot one.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/thu/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
taiducvu/NudityDetection | VNG_NUDITY_DETECTION.ipynb | apache-2.0 | %matplotlib inline
%load_ext autoreload
%autoreload 2
from model.datasets.data import normalize_name_file
normalize_name_file('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/AdditionalDataset/Normal/4',0,'d_%d')
"""
Explanation: Stage 1: Preprocess VNG's data
In this stage, we will read raw data from a given dataset. The dataset consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we need to down-sample the images to a fixed resolution (270 x 270).
End of explanation
"""
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os
import tensorflow as tf
from scipy.misc import imread, imresize
import numpy
import matplotlib.pyplot as plt
from model.datasets.data import preprocess_image
# Test preprocess_image
image = tf.placeholder("uint8", [None, None, 3])
result_image = preprocess_image(image, 270, 270)
model = tf.initialize_all_variables()
raw_image = imread('model/datasets/nudity_dataset/3.jpg')
with tf.Session() as session:
session.run(model)
result = session.run(result_image, feed_dict={image:raw_image})
# Plot the result
fig = plt.figure()
a = fig.add_subplot(1,2,1)
plt.imshow(raw_image)
a = fig.add_subplot(1,2,2)
plt.imshow(result)
plt.show()
"""
Explanation: Example 1: Crop the central region and Resize
End of explanation
"""
from IPython.utils import io
#from model.datasets.data import normalize_dataset_type1
#from model.datasets.data import process_raw_dataset
import numpy as np
import os
from shutil import copyfile
import matplotlib.pyplot as plt
import glob
# Normalize the labels of training data
#a = normalize_dataset_type1('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/labels.csv')
#a = process_raw_dataset('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/labels.csv','/home/cpu11757/workspace/Nudity_Detection/src/model/datasets')
# Normalize data-set into a 4-D vector (#samples, hight, width, channels)
#dataset = normalize_dataset_type2('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/nudity_dataset',243)
#print len(dataset)
#fig = plt.figure()
#plt.imshow(dataset[3])
#plt.show()
def process_raw_dataset(raw_path_dir, dist_path, pattern, number_samples, start_idx):
for idx in np.arange(start_idx, start_idx + number_samples):
copyfile(os.path.join(raw_path_dir, pattern% idx),
os.path.join(dist_path, pattern% idx))
process_raw_dataset('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/AdditionalDataset/Normal/4',
'/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/train/normal', 'd_%d.jpg', 340, 0)
from scipy.misc import imread, imresize
import matplotlib.pyplot as plt
img = imread('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/train/normal/d_124.jpg')
fig = plt.figure()
plt.imshow(img)
plt.show()
"""
Explanation: Example 2: Normalize dataset
End of explanation
"""
|
UWSEDS/LectureNotes | PreFall2018/Visualization-in-Python/Visualization in Python.ipynb | bsd-2-clause | import pandas as pd
import matplotlib.pyplot as plt
# The following ensures that the plots are in the notebook
%matplotlib inline
# We'll also use capabilities in numpy
import numpy as np
"""
Explanation: Visualization in Python
Background
Why visualize?
- Discovery
- Inference
- Communication
Terminology
- Representation
- Environment for visualization (e.g., 2d, 3d, sound)
- Idiom
- Constructs used (e.g., bar plot, area plot)
- Task
- What the user is trying to do (e.g., compare, predict, find relationships)
- Design
- Choice of the representation(s) and idiom(s) to perform the task
Question: What would be an effective way to visualize:
- Average family income in US over the last 10 years?
- Average family income by state in 2016?
- Average family income by state over the last 10 years?
Software Engineering & Visualization
There are many python packages for visualization.
- pandas – Visualization of pandas objects
- matplotlib – MATLAB-style plotting in Python
- seaborn – Statistical visualizations
- bokeh – Interactive visualization using the browser
- HoloViews – Simplified visualization of engineering/scientific data
- VisPy – fast, scalable, simple interactive scientific visualization
- Altair – declarative statistical visualization
We'll begin with visualization in pandas and focus on matplotlib. There is great documentation on all of this.
The case study is to analyze the flow of bicycles out of stations in the Pronto trip data.
In this section, we'll discuss:
- the structure of a matplotlib plot
- different plot idioms
- doing multiple plots
End of explanation
"""
df = pd.read_csv("2015_trip_data.csv")
df.head()
"""
Explanation: Analysis questions
- Which stations have the biggest difference between in-flow and out-flow of bikes?
- Where can we localize the movement of bicycles between stations that are in close proximity?
Preparing Data For Visualization
Much of the effort in visualizing data is in preparing the data for visualization. Typically, you'll want to use one or more pandas DataFrame.
End of explanation
"""
from_counts = pd.value_counts(df.from_station_id)
to_counts = pd.value_counts(df.to_station_id)
from_counts.head()
type(from_counts)
to_counts.head()
"""
Explanation: Suppose we want o analyze the flow of bicycles from and to stations.
Question: What data do we need for this visualization? How do we get it?
End of explanation
"""
from_counts.plot.bar()
"""
Explanation: Question: How would we get the same information using groupby?
Simple Plots for Series
Let's address the question "Which stations have the biggest difference between the in-flow and out-flow of bicycles?"
What kind of objects are returned from pd.value_counts? Are these plottable? How do we figure this out?
End of explanation
"""
df_counts = pd.DataFrame({'from':from_counts, 'to': to_counts})
df_counts.plot(kind='bar', subplots=True, grid=True, title="Counts",
layout=(1,2), sharex=True, sharey=False, legend=False, figsize=(12, 8))
"""
Explanation: We can compare from and to counts with sidey-by-side plots. But to do this, we need a DataFrame with these counts.
End of explanation
"""
# What is the index for df_counts?
df_counts.head()
(from_counts-to_counts).plot.bar()
"""
Explanation: Question: How do we make the plots bigger?
But this plot doesn't tell us about the difference between "from" and "to" counts. We want to subtract to_counts from from_counts. Will this difference be plottable?
End of explanation
"""
df1 = df_counts[df_counts.index=='Pronto shop']
df1
df_counts[df_counts.index!='Pronto shop'].plot.bar(figsize=(10,6))
"""
Explanation: Question: How do we get rid of the garbage data for the station "Pronto"?
End of explanation
"""
# Selecting a row
from_counts[from_counts.index == 'Pronto shop']
# Deleting a row
new_from_counts = from_counts[from_counts.index != 'Pronto shop']
new_from_counts.plot.bar()
def simple_clean_rows(df):
    """
    Removes the 'Pronto shop' row from df.
    :param pd.DataFrame or pd.Series df:
    :return pd.DataFrame or pd.Series:
    """
    # Index labels are case sensitive: the data uses 'Pronto shop', not 'Pronto Shop'.
    df = df[df.index != 'Pronto shop']
    return df
def clean_rows(df, indexes):
"""
Removes from df all rows with the specified indexes
:param pd.DataFrame or pd.Series df:
:param list-of-str indexes
:return pd.DataFrame or pd.Series:
"""
for idx in indexes:
df = df[df.index != idx]
return df
dff = clean_rows(to_counts, ['Pronto shop', 'CBD-13'])
dff.plot.bar()
"""
Explanation: Some issues:
- Bogus value 'Pronto shop'
- Difficult to read the labels on the x-axis
- The x and y axes aren't labelled
- Lost information about "from" and "to"
Writing a Data Cleansing Function
We want to get rid of the row 'Pronto shop' in both from_counts and to_counts.
End of explanation
"""
to_counts = clean_rows(to_counts, ['Pronto shop'])
to_counts.plot.bar()
from_counts = clean_rows(from_counts, ['Pronto shop'])
from_counts.plot.bar()
to_counts.head()
"""
Explanation: Does clean_rows need to return df to effect the change in df?
End of explanation
"""
df_counts = pd.DataFrame({'From': from_counts.sort_index(), 'To': to_counts.sort_index()})
"""
Explanation: Getting More Control Over Plots
Let's take a more detailed approach to plotting so we can better control what gets rendered.
In this section, we show how to control various elements of plots to produce a desired visualization. We'll use the package matplotlib, a python package that is modelled after MATLAB style plotting.
Make a dataframe out of the count data.
End of explanation
"""
df_counts.head()
"""
Basic bar chart using matplotlib
"""
n_groups = len(df_counts.index)
index = np.arange(n_groups) # The "raw" x-axis of the bar plot
fig = plt.figure(figsize=(12, 8)) # Controls global properties of the bar plot
rects1 = plt.bar(index, df_counts.From)
plt.xlabel('Station')
plt.ylabel('Counts')
plt.xticks(index, df_counts.index) # Convert "raw" x-axis into labels
_, labels = plt.xticks() # Get the new labels of the plot
plt.setp(labels, rotation=90) # Rotate labels to make them readable
plt.title('Station Counts')
plt.show()
"""
Explanation: Need to align the counts by the station. Do we do this?
End of explanation
"""
def plot_bar1(df, column, opts):
"""
Does a bar plot for a single column.
:param pd.DataFrame df:
:param str column: name of the column to plot
:param dict opts: key is plot attribute
"""
n_groups = len(df.index)
index = np.arange(n_groups) # The "raw" x-axis of the bar plot
rects1 = plt.bar(index, df[column])
if 'xlabel' in opts:
plt.xlabel(opts['xlabel'])
if 'ylabel' in opts:
plt.ylabel(opts['ylabel'])
if 'xticks' in opts and opts['xticks']:
plt.xticks(index, df.index) # Convert "raw" x-axis into labels
_, labels = plt.xticks() # Get the new labels of the plot
plt.setp(labels, rotation=90) # Rotate labels to make them readable
else:
labels = ['' for x in df.index]
plt.xticks(index, labels)
if 'ylim' in opts:
plt.ylim(opts['ylim'])
if 'title' in opts:
plt.title(opts['title'])
fig = plt.figure(figsize=(12, 8)) # Controls global properties of the bar plot
opts = {'xlabel': 'Stations', 'ylabel': 'Counts', 'xticks': True, 'title': 'A Title'}
plot_bar1(df_counts, 'To', opts)
"""
Explanation: Issue - much more code, which will tend to be copied and pasted.
Solution - MAKE A FUNCTION NOW!!!
End of explanation
"""
def plot_barN(df, columns, opts):
"""
    Does a bar plot for each of the specified columns as stacked subplots.
    :param pd.DataFrame df:
    :param list-of-str columns: names of the columns to plot
:param dict opts: key is plot attribute
"""
num_columns = len(columns)
    local_opts = dict(opts) # Shallow copy so per-subplot overrides don't mutate opts
idx = 0
for column in columns:
idx += 1
local_opts['xticks'] = False
local_opts['xlabel'] = ''
if idx == num_columns:
local_opts['xticks'] = True
local_opts['xlabel'] = opts['xlabel']
plt.subplot(num_columns, 1, idx)
plot_bar1(df, column, local_opts)
fig = plt.figure(figsize=(12, 8)) # Controls global properties of the bar plot
opts = {'xlabel': 'Stations', 'ylabel': 'Counts', 'ylim': [0, 8000]}
plot_barN(df_counts, ['To', 'From'], opts)
"""
Explanation: Comparisons Using Subplots
We want to encapsulate the plotting of N variables into a function. We could re-write plot_bar1, but other plots use it, and plot_bar1 is already good at handling a single plot. So instead we call plot_bar1 from a new function.
End of explanation
"""
df.head()
"""
Explanation: Question: How do we write tests for plot_barN?
Exercise
- Extend the plot_barN to also plot pair-wise differences between plots. Have titles for all plots.
Including Error Bars in a Bar Chart
To make decisions about the truck trips required to adjust bikes at stations, we need to know the variations by day.
Want a bar plot with average daily "to" and "from" with their standard deviations.
Data Preparation
Need to:
- Create day-of-year column for 'from' and 'to'
- Compute counts by date
- Compute the mean and standard deviation of the counts by date
(Assumes that a station has at least one rental every day.)
End of explanation
"""
print (df.starttime[0])
print (type(df.starttime[0]))
"""
Explanation: Let's start with the values for starttime. What type are these?
End of explanation
"""
this_datetime = pd.to_datetime(df.starttime[0])
print(this_datetime)
this_datetime.dayofyear
start_day = []
for time in df.starttime:
start_day.append(pd.to_datetime(time).dayofyear)
start_day[2]
start_day = [pd.to_datetime(time).dayofyear for time in df.starttime]
stop_day = [pd.to_datetime(x).dayofyear for x in df.stoptime]
df['startday'] = start_day # Creates a new column named 'startday'
df['stopday'] = stop_day
df.head()
groupby_day_from = df.groupby(['from_station_id', 'startday']).size()
groupby_day_from.head()
groupby_day_to = df.groupby(['to_station_id', 'stopday']).size()
groupby_day_to.head()
"""
Explanation: Question: How do we extract the day from a string?
YOU DON'T!!! You convert it to a datetime object.
End of explanation
"""
h_index = groupby_day_from.index
h_index.levshape # Size of the components of the MultiIndex
from_means = groupby_day_from.groupby(level=[0]).mean() # Computes the mean of counts by day
from_stds = groupby_day_from.groupby(level=[0]).std() # Computes the standard deviation
groupby_day_to = df.groupby(['to_station_id', 'startday']).size()
to_means = groupby_day_to.groupby(level=[0]).mean() # Computes the mean of counts by day
to_stds = groupby_day_to.groupby(level=[0]).std() # Computes the standard deviation
df_day_counts = pd.DataFrame({'from_mean': from_means, 'from_std': from_stds, 'to_mean': to_means, 'to_std': to_stds})
df_day_counts.head()
"""
Explanation: Now we need to compute the average value and its standard deviation across the days for each station.
The groupby produced a MultiIndex. So, further operations on the result must take this into account.
End of explanation
"""
"""
Plotting two variables as a bar chart with error bars
"""
n_groups = len(df_day_counts.index)
index = np.arange(n_groups) # The "raw" x-axis of the bar plot
fig = plt.figure(figsize=(12, 8)) # Controls global properties of the bar plot
bar_width = 0.35 # Width of the bars
opacity = 0.6 # How transparent the bars are
#VVVV Changed to do two plots with error bars
error_config = {'ecolor': '0.3'}
rects1 = plt.bar(index, df_day_counts.from_mean, bar_width,
alpha=opacity,
color='b',
yerr=df_day_counts.from_std,
error_kw=error_config,
label='From')
rects2 = plt.bar(index + bar_width, df_day_counts.to_mean, bar_width,
alpha=opacity,
color='r',
yerr=df_day_counts.to_std,
error_kw=error_config,
                 label='To')
#^^^^ Changed to do two plots with error bars
plt.xticks(index + bar_width / 2, df_counts.index)
_, labels = plt.xticks() # Get the new labels of the plot
plt.setp(labels, rotation=90) # Rotate labels to make them readable
plt.legend()
plt.xlabel('Station')
plt.ylabel('Counts')
plt.title('Station Counts')
plt.show()
"""
Explanation: Plotting with Error Bars
End of explanation
"""
|
LimeeZ/phys292-2015-work | assignments/assignment06/ProjectEuler17.ipynb | mit | def ones(one,count):
if one == 1 or one == 2 or one == 6:
count += 3
if one == 4 or one == 5 or one == 9:
count += 4
if one == 3 or one == 7 or one == 8:
count += 5
return count
def teens(teen,count):
if teen == 10:
count += 3
if teen == 11 or teen == 12:
count += 6
if teen == 15 or teen == 16:
count += 7
if teen == 13 or teen == 14 or teen == 18 or teen == 19:
count += 8
if teen == 17:
count += 9
return count
def tens(ten,count):
    b = str(ten)
    if b[0] == '4' or b[0] == '5' or b[0] == '6':  # forty, fifty, sixty
        count += 5
        one = int(b[1])
        count = ones(one,count)
    if b[0] == '2' or b[0] == '3' or b[0] == '8' or b[0] == '9':  # twenty, thirty, eighty, ninety
        count += 6
        one = int(b[1])
        count = ones(one,count)
    if b[0] == '7':  # seventy
        count += 7
        one = int(b[1])
        count = ones(one,count)
    return count
def huns(hun,count):
count += 7
a = str(hun)
b = int(a[0])
count = ones(b,count)
return count
def numberlettercounts(nummin,nummax):
nums = []
for i in range(nummin,nummax+1):
nums.append(i)
count = 0
for num in nums:
a = str(num)
if len(a) == 1:
count = ones(num,count)
if len(a) == 2 and a[0] == '1':
count = teens(num,count)
if len(a) == 2 and a[0] != '1':
count = tens(num,count)
if len(a) == 3 and a[1] == '0' and a[2]=='0':
count = huns(num,count)
if len(a) == 3 and a[1] != '0' and a[2] == '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1':
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1':
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] != '0' and a[2] != '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1' :
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1' :
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] == '0' and a[2] != '0':
count = huns(num,count)
count += 3 #for 'and'
c = int(a[2])
count = ones(c,count)
if len(a) == 4:
count += 11
print (count)
numberlettercounts(1,1000)
def number_to_words(n, join = True):
units = ['','one','two','three','four','five','six','seven','eight','nine']
teens = ['','eleven','twelve','thirteen','fourteen','fifteen','sixteen', \
'seventeen','eighteen','nineteen']
tens = ['','ten','twenty','thirty','forty','fifty','sixty','seventy', \
'eighty','ninety']
thousands = ['','thousand']
words = []
if n==0: words.append('zero')
else:
nStr = '%d'%n
nStrLen = len(nStr)
        groups = (nStrLen+2)//3
nStr = nStr.zfill(int(groups)*3)
for i in range(0,int(groups)*3,3):
x,y,z = int(nStr[i]),int(nStr[i+1]),int(nStr[i+2])
g = int(groups)-(i/3+1)
if x>=1:
words.append(units[x])
words.append('hundred')
if y>1:
words.append(tens[y])
if z>=1: words.append(units[z])
elif y==1:
if z>=1: words.append(teens[z])
else: words.append(tens[y])
else:
if z>=1: words.append(units[z])
if (int(g)>=1) and ((int(x)+int(y)+int(z))>0): words.append(thousands[int(g)])
if join: return ' '.join(words)
return words
number_to_words(999)
"""
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
"""
number_to_words(999)
expected ='nine hundred ninety nine'
number_to_words(0)
expected2 ='zero'
number_to_words(1000)
expected3 ='one thousand'
number_to_words(5)
expected4 ='five'
assert (number_to_words(999) == expected)
assert (number_to_words(0) == expected2)
assert (number_to_words(1000) == expected3)
assert (number_to_words(5) == expected4)
assert True # use this for grading the number_to_words tests.
"""
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
"""
def ones(one,count):
if one == 1 or one == 2 or one == 6:
count += 3
if one == 4 or one == 5 or one == 9:
count += 4
if one == 3 or one == 7 or one == 8:
count += 5
return count
def teens(teen,count):
if teen == 10:
count += 3
if teen == 11 or teen == 12:
count += 6
if teen == 15 or teen == 16:
count += 7
if teen == 13 or teen == 14 or teen == 18 or teen == 19:
count += 8
if teen == 17:
count += 9
return count
def tens(ten,count):
    b = str(ten)
    if b[0] == '4' or b[0] == '5' or b[0] == '6':  # forty, fifty, sixty
        count += 5
        one = int(b[1])
        count = ones(one,count)
    if b[0] == '2' or b[0] == '3' or b[0] == '8' or b[0] == '9':  # twenty, thirty, eighty, ninety
        count += 6
        one = int(b[1])
        count = ones(one,count)
    if b[0] == '7':  # seventy
        count += 7
        one = int(b[1])
        count = ones(one,count)
    return count
def huns(hun,count):
count += 7
a = str(hun)
b = int(a[0])
count = ones(b,count)
return count
#def count_letters(n): <--I didn't use this...
def count_letters(nummin,nummax):
nums = []
for i in range(nummin,nummax+1):
nums.append(i)
count = 0
for num in nums:
a = str(num)
if len(a) == 1:
count = ones(num,count)
if len(a) == 2 and a[0] == '1':
count = teens(num,count)
if len(a) == 2 and a[0] != '1':
count = tens(num,count)
if len(a) == 3 and a[1] == '0' and a[2]=='0':
count = huns(num,count)
if len(a) == 3 and a[1] != '0' and a[2] == '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1':
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1':
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] != '0' and a[2] != '0':
count = huns(num,count)
ten = int(a[1:3])
if a[1] == '1' :
count = teens(ten,count)
count += 3 #for 'and'
if a[1] != '1' :
count = tens(ten,count)
count += 3 #for 'and'
if len(a) == 3 and a[1] == '0' and a[2] != '0':
count = huns(num,count)
count += 3 #for 'and'
c = int(a[2])
count = ones(c,count)
if len(a) == 4:
count += 11
return (count)
count_letters(0,342)
"""
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
"""
expected1=3
assert(count_letters(0,1) == expected1)
assert True # use this for grading the count_letters tests.
"""
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
"""
count_letters(1,1000)
assert True # use this for grading the answer to the original question.
"""
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation
"""
|
OyamaZemi/MirandaFackler.notebooks | lqapprox/lqapprox_py.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
import quantecon as qe
# matplotlib settings
plt.rcParams['axes.xmargin'] = 0
plt.rcParams['axes.ymargin'] = 0
"""
Explanation: LQ Approximation with QuantEcon.py
End of explanation
"""
def approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount):
"""
Return an approximating LQ instance.
Gradient of f: Df_star = np.array([f_s, f_x])
Hessian of f: DDf_star = np.array([[f_ss, f_sx], [f_sx, f_xx]])
Gradient of g: Dg_star = np.array([g_s, g_x])
"""
n = 2
k = 1
sx_star = np.array([s_star, x_star])
# (1, s)' R (1, s) + 2 x N (1, s) + x Q x
Q = np.empty((k, k))
R = np.empty((n, n))
N = np.empty((k, n))
R[0, 0] = -(f_star - Df_star @ sx_star + (sx_star @ DDf_star @ sx_star) / 2)
    # DDf is symmetric, so ravel()'s two middle entries coincide (both f_sx);
    # N[0, 1] is simply assigned the same value twice.
    R[1, 1], N[0, 1], N[0, 1], Q[0, 0] = -DDf_star.ravel() / 2
R[1, 0], N[0, 0] = -(Df_star - DDf_star @ sx_star).ravel() / 2
R[0, 1] = R[1, 0]
# A (1, s) + B x + C w
A = np.empty((n, n))
B = np.empty((n, k))
C = np.zeros((n, 1))
A[0, 0], A[0, 1], B[0, 0] = 1, 0, 0
A[1, 0] = g_star - Dg_star @ sx_star
A[1, 1], B[1, 0] = Dg_star.ravel()
lq = qe.LQ(Q, R, A, B, C, N, beta=discount)
return lq
"""
Explanation: We consider a dynamic maximization problem with
reward function $f(s, x)$,
state transition function $g(s, x)$, and
discount rate $\delta$,
where $s$ and $x$ are the state and the control variables, respectively
(we follow Miranda-Fackler in notation).
Let $(s^, x^)$ denote the steady state state-control pair,
and write
$f^ = f(s^, x^)$, $f_i^ = f_i(s^, x^)$, $f_{ij}^ = f_{ij}(s^, x^)$,
$g^ = g(s^, x^)$, and $g_i^ = g_i(s^, x^*)$ for $i, j = s, x$.
First-order expansion of $g$ around $(s^, x^)$:
$$
\begin{align}
g(s, x)
&\approx g^ + g_s^ (s - s^) + g_x^ (x - x^) \
&= A \begin{pmatrix}1 \ s\end{pmatrix} + B x,
\end{align*}
$$
where
$A =
\begin{pmatrix}
1 & 0 \
g^ - \nabla g^{\mathrm{T}} z^ & g_s^
\end{pmatrix}$,
$B =
\begin{pmatrix}
0 \ g_x^*
\end{pmatrix}$
with $z^ = (s^, x^)^{\mathrm{T}}$ and $\nabla g^ = (g_s^, g_x^)^{\mathrm{T}}$.
Second-order expansion of $f$ around $(s^, x^)$:
$$
\begin{align}
f(s, x)
&\approx f^ + f_s^ (s - s^) + f_x^ (x - x^) +
\frac{1}{2} f_{ss}^ (s - s^)^2 + f_{sx}^ (s - s^) (x - x^) +
\frac{1}{2} f_{xx}^ (x - x^)^2 \
&= \begin{pmatrix}
1 & s & x
\end{pmatrix}
\begin{pmatrix}
f^ - \nabla f^{\mathrm{T}} z^ + \frac{1}{2} z^{\mathrm{T}} D^2 f^ z^ &
\frac{1}{2} (\nabla f^ - D^2 f^ z^)^{\mathrm{T}} \
\frac{1}{2} (\nabla f^ - D^2 f^ z^) & \frac{1}{2} D^2 f^
\end{pmatrix}
\begin{pmatrix}
1 \ s \ x
\end{pmatrix},
\end{align}
$$
where
$\nabla f^ = (f_s^, f_x^)^{\mathrm{T}}$ and
$$
D^2 f^ =
\begin{pmatrix}
f_{ss}^ & f_{sx}^ \
f_{sx}^ & f_{xx}^*
\end{pmatrix}.
$$
Let
$$
\begin{align}
r(s, x)
&= -
\begin{pmatrix}
1 & s & x
\end{pmatrix}
\begin{pmatrix}
f^ - \nabla f^{\mathrm{T}} z^ + \frac{1}{2} z^{\mathrm{T}} D^2 f^ z^ &
\frac{1}{2} (\nabla f^ - D^2 f^ z^)^{\mathrm{T}} \
\frac{1}{2} (\nabla f^ - D^2 f^ z^) & \frac{1}{2} D^2 f^
\end{pmatrix}
\begin{pmatrix}
1 \ s \ x
\end{pmatrix} \
&= \begin{pmatrix}
1 & s
\end{pmatrix}
R
\begin{pmatrix}
1 \ s
\end{pmatrix} +
2 x N
\begin{pmatrix}
1 \ s
\end{pmatrix} +
Q x,
\end{align*}
$$
where
$R = -
\begin{pmatrix}
f^ - \nabla f^{\mathrm{T}} z^ + \frac{1}{2} z^{\mathrm{T}} D^2 f^ z^ &
\frac{1}{2} [f_s^ - (f_{ss}^ s^ + f_{sx}^ x^)] \
\frac{1}{2} [f_s^ - (f_{ss}^ s^ + f_{sx}^ x^)] & \frac{1}{2} f_{ss}^*
\end{pmatrix}$,
$N = -
\begin{pmatrix}
\frac{1}{2} [f_x^ - (f_{sx}^ s^ + f_{xx}^ x^)] & \frac{1}{2} f_{sx}^
\end{pmatrix}$.
$Q = -\frac{1}{2} f_{xx}^*$.
Remarks:
We are going to minimize the objective function.
End of explanation
"""
alpha = 0.2
beta = 0.5
gamma = 0.9
discount = 0.9
"""
Explanation: Optimal Economic Growth
We consider the following optimal growth model from Miranda and Fackler, Section 9.7.1:
$f(s, x) = \dfrac{(s - x)^{1-\alpha}}{1-\alpha}$,
$g(s, x) = \gamma x + x^{\beta}$.
End of explanation
"""
f = lambda s, x: (s - x)**(1 - alpha) / (1 - alpha)
f_s = lambda s, x: (s - x)**(-alpha)
f_x = lambda s, x: -f_s(s, x)
f_ss = lambda s, x: -alpha * (s - x)**(-alpha - 1)
f_sx = lambda s, x: -f_ss(s, x)
f_xx = lambda s, x: f_ss(s, x)
g = lambda s, x: gamma * x + x**beta
g_s = lambda s, x: 0
g_x = lambda s, x: gamma + beta * x**(beta - 1)
"""
Explanation: Function definitions:
End of explanation
"""
x_star = ((discount * beta) / (1 - discount * gamma))**(1 / (1 - beta))
s_star = gamma * x_star + x_star**beta
s_star, x_star
"""
Explanation: Steady state:
End of explanation
"""
f_x(s_star, x_star) + discount * f_s(g(s_star, x_star), x_star) * g_x(s_star, x_star)
"""
Explanation: (s_star, x_star) satisfies the Euler equation:
End of explanation
"""
f_star = f(s_star, x_star)
Df_star = np.array([f_s(s_star, x_star), f_x(s_star, x_star)])
DDf_star = np.array([[f_ss(s_star, x_star), f_sx(s_star, x_star)],
[f_sx(s_star, x_star), f_xx(s_star, x_star)]])
g_star = g(s_star, x_star)
Dg_star = np.array([g_s(s_star, x_star), g_x(s_star, x_star)])
"""
Explanation: Construct $f^*$, $\nabla f^*$, $D^2 f^*$, $g^*$, and $\nabla g^*$:
End of explanation
"""
lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount)
"""
Explanation: LQ Approximation
Generate an LQ instance that approximates our dynamic optimization problem:
End of explanation
"""
P, F, d = lq.stationary_values()
P, F, d
"""
Explanation: Solution by LQ.stationary_values
Solve the LQ problem:
End of explanation
"""
V = lambda s: np.array([1, s]) @ P @ np.array([1, s]) + d
"""
Explanation: The optimal value function (of the LQ minimization problem):
End of explanation
"""
V(s_star)
-f_star / (1 - lq.beta)
"""
Explanation: The value at $s^*$:
End of explanation
"""
X = lambda s: -(F @ np.array([1, s]))[0]
"""
Explanation: The optimal policy function:
End of explanation
"""
X(s_star)
x_star
X = np.vectorize(X)
s_min, s_max = 5, 10
ss = np.linspace(s_min, s_max, 50)
title = "Optimal Investment Policy"
xlabel = "Wealth"
ylabel = "Investment (% of Wealth)"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, X(ss)/ss, label='L-Q')
ax.plot(s_star, x_star/s_star, '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.65, 0.9)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
"""
Explanation: The optimal choice at $s^*$:
End of explanation
"""
alpha = 4.0
beta = 1.0
gamma = 0.5
kappa = 0.2
discount = 0.9
f = lambda s, x: (s - x)**(1 - gamma) / (1 - gamma) - kappa * (s - x)
f_s = lambda s, x: (s - x)**(-gamma) - kappa
f_x = lambda s, x: -f_s(s, x)
f_ss = lambda s, x: -gamma * (s - x)**(-gamma - 1)
f_sx = lambda s, x: -f_ss(s, x)
f_xx = lambda s, x: f_ss(s, x)
g = lambda s, x: alpha * x - 0.5 * beta * x**2
g_s = lambda s, x: 0
g_x = lambda s, x: alpha - beta * x
x_star = (discount * alpha - 1) / (discount * beta)
s_star = (alpha**2 - 1/discount**2) / (2 * beta)
s_star, x_star
f_x(s_star, x_star) + discount * f_s(g(s_star, x_star), x_star) * g_x(s_star, x_star)
f_star = f(s_star, x_star)
Df_star = np.array([f_s(s_star, x_star), f_x(s_star, x_star)])
DDf_star = np.array([[f_ss(s_star, x_star), f_sx(s_star, x_star)],
[f_sx(s_star, x_star), f_xx(s_star, x_star)]])
g_star = g(s_star, x_star)
Dg_star = np.array([g_s(s_star, x_star), g_x(s_star, x_star)])
lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount)
P, F, d = lq.stationary_values()
P, F, d
V = lambda s: np.array([1, s]) @ P @ np.array([1, s]) + d
V(s_star)
-f_star / (1 - lq.beta)
X = lambda s: -(F @ np.array([1, s]))[0]
X(s_star)
x_star
X = np.vectorize(X)
s_min, s_max = 6, 9
ss = np.linspace(s_min, s_max, 50)
harvest = ss - X(ss)
h_star = s_star - x_star
title = "Optimal Harvest Policy"
xlabel = "Available Stock"
ylabel = "Harvest (% of Stock)"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, harvest/ss, label='L-Q')
ax.plot(s_star, h_star/s_star, '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.5, 0.75)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
shadow_price = lambda s: -2 * (P @ [1, s])[1]
shadow_price = np.vectorize(shadow_price)
title = "Shadow Price Function"
ylabel = "Price"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, shadow_price(ss), label='L-Q')
ax.plot(s_star, shadow_price(s_star), '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.2, 0.4)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
"""
Explanation: Renewable Resource Management
Consider the renewable resource management model from Miranda and Fackler, Section 9.7.2:
$f(s, x) = \dfrac{(s - x)^{1-\gamma}}{1-\gamma} - \kappa (s - x)$,
$g(s, x) = \alpha x - 0.5 \beta x^2$.
End of explanation
"""
|
phungkh/phys202-2015-work | assignments/assignment09/IntegrationEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
"""
Explanation: Integration Exercise 2
Imports
End of explanation
"""
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Definite integrals
Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LateX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx funciton that uses scipy.integrate.quad to peform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
"""
def integrand(x,a):
return np.sqrt(a**2-x**2)
def integral_approx(a):
I,e=integrate.quad(integrand, 0,a,args=(a,))
return I
def integral_exact(a):
return np.pi*a**2/4
print("Numerical: ", integral_approx(1.0))
print("Exact: ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 1
$$ I_1 =\int_0^a \sqrt{a^2-x^2}dx = \frac{\pi a^2}{4}$$
End of explanation
"""
def integrand(x,a,b):
return 1.0/(a+b*np.sin(x))
def integral_approx(a,b):
I,e=integrate.quad(integrand,0,2*np.pi,args=(a,b))
return I
def integral_exact(a,b):
return 2*np.pi/(np.sqrt(a**2-b**2))
print("Numerical: ", integral_approx(2.0,1.0))
print("Exact: ", integral_exact(2.0, 1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 2
$$ I_2 =\int_0^{2\pi} \frac{dx}{a+b\sin x} = \frac{2\pi}{\sqrt{a^2-b^2}} $$
End of explanation
"""
def integrand(x,a,b):
return np.exp(-a*x)*np.cos(b*x)
def integral_approx(a,b):
I,e = integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integral_exact(a,b):
return a/(a**2+b**2)
print("Numerical: ", integral_approx(1.0,1.0))
print("Exact: ", integral_exact(1.0,1.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 3
$$ I_3= \int_0^\infty e^{-ax}\cos{bx}dx = \frac{a}{a^2+b^2} $$
End of explanation
"""
def integrand(x):
return np.exp(-x**2)
def integral_approx():
    # The integrand has no free parameters, so no extra args are needed
    I, e = integrate.quad(integrand, -np.inf, np.inf)
    return I
def integral_exact():
    return np.sqrt(np.pi)
print("Numerical: ", integral_approx())
print("Exact: ", integral_exact())
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 4
$$ I_4 = \int_{-\infty}^\infty e^{-x^2}dx = \sqrt{\pi} $$
End of explanation
"""
def integrand(x,a):
return np.exp(-a*x**2)
def integral_approx(a):
I,e=integrate.quad(integrand, 0 , np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.sqrt(np.pi/a)
print("Numerical: ", integral_approx(2.0))
print("Exact: ", integral_exact(2.0))
assert True # leave this cell to grade the above integral
"""
Explanation: Integral 5
$$ I_5 = \int_0^\infty e^{-ax^2}dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} $$
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ipsl/cmip6/models/ipsl-cm6a-lr/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'ipsl-cm6a-lr', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: IPSL
Source ID: IPSL-CM6A-LR
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
ewulczyn/talk_page_abuse | src/data_generation/crowdflower_analysis/src/Crowdflower Analysis (Experiment on Comparison of Onion Layers).ipynb | apache-2.0 | %matplotlib inline
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from crowdflower_analysis import *
from krippendorf_alpha import *
pd.set_option('display.max_colwidth', 1000)
attack_columns = ['not_attack', 'other', 'quoting', 'recipient', 'third_party']
aggressive_columns = ['-3', '-2', '-1', '0', '1', '2', '3']
grouped_dat = {
'5': preprocess('annotated_onion_layer_5_rows_0_to_10000_first_1000.csv'),
'10': preprocess('annotated_onion_layer_10_rows_0_to_1000.csv'),
'20': preprocess('annotated_onion_layer_20_rows_0_to_1000.csv'),
'30': preprocess('annotated_onion_layer_30_rows_0_to_1000.csv')
}
"""
Explanation: Introduction
This notebook is an analysis of the Crowdflower labels of 1,000 revisions of Wikipedia talk pages by users who have been blocked for personal harassment. These revisions are chosen from neighbourhoods of various distance from a block event. This dataset has been cleaned and filtered to remove common administrator messages. These datasets are annotated via crowdflower to measure friendliness, aggressiveness and whether the comment constitutes a personal attack.
On Crowdflower, each revision is rated 7 times. The raters are given three questions:
Is this comment not English or not human readable?
Column 'na'
How aggressive or friendly is the tone of this comment?
Column 'how_aggressive_or_friendly_is_the_tone_of_this_comment'
Ranges from '---' (Very Aggressive) to '+++' (Very Friendly)
Does the comment contain a personal attack or harassment? Please mark all that apply:
Column 'is_harassment_or_attack'
Users can specify that the attack is:
Targeted at the recipient of the message (i.e. you suck). ('recipient')
Targeted at a third party (i.e. Bob sucks). ('third_party')
Being reported or quoted (i.e. Bob said Henri sucks). ('quoting')
Another kind of attack or harassment. ('other')
This is not an attack or harassment. ('not_attack')
Below, we plot histograms of the units by average rating of each of the questions, examine quantiles of answers, and compute inter-annotator agreement. We also study whether or not there is a change in aggressiveness before and after a block event.
Loading packages and data
End of explanation
"""
bins = np.linspace(-3,3,11)
for key in grouped_dat.keys():
hist_comments(grouped_dat[key], bins, 'aggression_score', 'Average Aggressiveness Rating for onion_%s_blocked Data' % key)
bins = np.linspace(0,1,9)
col = 'not_attack'
for key in grouped_dat.keys():
hist_comments(grouped_dat[key], bins, col, 'Average %s Rating for onion_%s_blocked Data' % (col, key))
"""
Explanation: Plot histogram of average ratings by revision
End of explanation
"""
from krippendorf_alpha import *
for key in grouped_dat.keys():
print "Krippendorf's Alpha (aggressiveness) for layer %s: " % key
print Krippendorf_alpha(grouped_dat[key], aggressive_columns, distance = interval_distance)
print "Krippendorf's Alpha (attack) for layer %s: " % key
print Krippendorf_alpha(grouped_dat[key], ['not_attack_0', 'not_attack_1'])
"""
Explanation: Inter-Annotator Agreement
End of explanation
"""
key = '20'
# Most aggressive comments
sorted_comments(grouped_dat[key], 'aggression_score', 0, 5)
# Median aggressive comments
sorted_comments(grouped_dat[key], 'aggression_score', 0.5, 5)
# Least aggressive comments
sorted_comments(grouped_dat[key], 'aggression_score', 0, 5, False)
"""
Explanation: Selected harassing and aggressive revisions by quartile
We look at a sample of revisions whose average aggressive score falls into various quantiles. This allows us to subjectively evaluate the quality of the questions that we are asking on Crowdflower.
End of explanation
"""
# Most aggressive comments which are labelled 'This is not an attack or harassment.'
sorted_comments(grouped_dat[key][grouped_dat[key]['not_attack'] > 0.6], 'aggression_score', 0, 5)
# Most aggressive comments which are labelled 'Being reported or quoted (i.e. Bob said Henri sucks).'
sorted_comments(grouped_dat[key][grouped_dat[key]['quoting'] > 0.3], 'aggression_score', 0, 5)
# Most aggressive comments which are labelled 'Targeted at a third party (i.e. Bob sucks).'
sorted_comments(grouped_dat[key][grouped_dat[key]['third_party'] > 0.5], 'aggression_score', 0, 5)
# Least aggressive comments which are NOT labelled 'This is not an attack or harassment.'
sorted_comments(grouped_dat[key][grouped_dat[key]['not_attack'] < 0.5], 'aggression_score', 0, 5, False)
"""
Explanation: Selected revisions on multiple questions
In this section, we examine a selection of revisions by their answer to Question 3 and sorted by aggression score. Again, this allows us to subjectively evaluate the quality of questions and responses that we obtain from Crowdflower.
End of explanation
"""
plot_and_test_aggressiveness(grouped_dat['10'])
"""
Explanation: T-Test of Aggressiveness
We explore whether aggressiveness changes in the tone of comments from immediately before a block event to immediately after.
End of explanation
"""
|
dchud/warehousing-course | lectures/week-03/sql-demo.ipynb | cc0-1.0 | %load_ext sql
"""
Explanation: Sqlite3 and MySQL demo
With the excellent ipython-sql jupyter extension installed, it becomes very easy to connect to SQL database backends. This notebook demonstrates how to do this.
Note that this is a Python 2 notebook.
First, we need to activate the extension:
End of explanation
"""
%sql sqlite:///survey.db
%sql SELECT * FROM Person;
"""
Explanation: There are warnings, but that's okay - this happens a lot these days due to the whole ipython/jupyter renaming process. You can ignore them.
Get a database
Using the bash shell (not a notebook!), follow the instructions at the SW Carpentry db lessons discussion page to get the survey.db file. This is a sqlite3 database.
I recommend following up with the rest of the instructions on that page to explore sqlite3.
Connecting to a Sqlite3 database
This part is easy, just connect like so (assuming the survey.db file is in the same directory as this notebook):
End of explanation
"""
%sql mysql://mysqluser:mysqlpass@localhost/
"""
Explanation: You should be able to execute all the standard SQL queries from the lesson here now. Note that you can also do this on the command line.
Note that specialized sqlite3 commands like ".schema" might not work.
Connecting to a MySQL database
Now that you've explored the survey.db sample database with sqlite3, let's try working with mysql:
End of explanation
"""
%sql CREATE DATABASE week3demo;
"""
Explanation: Note: if you get an error about MySQLdb not being installed here, enter this back in your bash shell:
% sudo pip install mysql-python
If it asks for your password, it's "vagrant".
After doing this, try executing the above cell again. You should see:
u'Connected: mysqluser@'
...if it works.
Creating a database
Now that we're connected, let's create a database.
End of explanation
"""
%sql USE week3demo;
"""
Explanation: Now that we've created the database week3demo, we need to tell MySQL that we want to use it:
End of explanation
"""
%sql SHOW TABLES;
"""
Explanation: But there's nothing in it:
End of explanation
"""
%%sql
CREATE TABLE Person
(ident CHAR(10),
personal CHAR(25),
family CHAR(25));
%sql SHOW TABLES;
%sql DESCRIBE Person;
"""
Explanation: Creating a table
From here we need to create a first table. Let's recreate the Person table from the SW Carpentry db lesson, topic 1.
End of explanation
"""
%%sql
INSERT INTO Person VALUES
("dyer", "William", "Dyer"),
("pb", "Frank", "Pabodie"),
("lake", "Anderson", "Lake"),
("roe", "Valentina", "Roerich"),
("danforth", "Frank", "Danforth")
;
"""
Explanation: Inserting data
Okay then, let's insert the sample data:
End of explanation
"""
%sql SELECT * FROM Person;
%sql SELECT * FROM Person WHERE personal = "Frank";
"""
Explanation: Selecting data
Okay, now we're cooking. There's data in the Person table, so we can start to SELECT it.
End of explanation
"""
result = _
print result
"""
Explanation: Accessing data from Python
One of the great things about ipython-sql is it marshalls all the data into Python objects for you. For example, to get the result data into a Python object, grab it from _:
End of explanation
"""
df = result.DataFrame()
df
"""
Explanation: You can even assign it to a Pandas dataframe:
End of explanation
"""
%sql DROP TABLE Person;
%sql SHOW TABLES;
"""
Explanation: Cleaning up
If you were just doing a little exploring and wish to clean up, it's easy to get rid of tables and databases.
NOTE: these are permanent actions. Only do them if you know you don't need them any longer.
To get rid of a table, use DROP TABLE:
End of explanation
"""
%sql DROP DATABASE week3demo;
%sql SHOW DATABASES;
"""
Explanation: And to get rid of a whole database, use DROP DATABASE:
End of explanation
"""
|
letsgoexploring/teaching | winter2017/econ129/python/Econ129_Class_02_Complete.ipynb | mit | # Print the first several digits of pi (3.14159...):
print(3.14159)
"""
Explanation: Class 2: Python basics
This is a quick introduction to programming with Python (Python 3 in particular). An excellent print resource is Python Programming for Beginners by Jason Cannon. Part 1: Programming in Python of Thomas J. Sargent and John Stachurski’s Python lectures at http://lectures.quantecon.org/py/index.html also contains a nice introduction.
Functions and the help() function
A function is a block of reusable code that does something. Each function has a name and the name is used to call the function. Sometimes functions require one or more inputs called arguments. Individual arguments may be either required or optional depending on the function. Python includes several built-in functions. You may also create a user-defined function by defining your own function and we'll cover how to do that later.
Two of the most important built-in functions are the print() and help() functions. The print() function does what you think it might: it prints the value of whatever is inside the parentheses. For example:
End of explanation
"""
# Use the help function to learn about the print function
help(print)
"""
Explanation: Notice that the line # Print the first several digits of pi: doesn't do anything. In Python, any line that begins with the # symbol is a comment line. Comments are ignored by the Python interpreter and are used to provide notes to the program author and to others about what the code does. Good comments make code much more readable and it's good practice to thoroughly comment your code.
Now, suppose that I didn't know how to use the print() function. How could I learn how to use it? One option would be to search the internet for documentation or examples and this is a completely reasonable thing to do. Another option is to use another built-in function help(). For example:
End of explanation
"""
# Print the numbers 1 through 5:
print(1,2,3,4,5)
# Print the numbers 1 through 5 separated by a comma:
print(1,2,3,4,5,sep=',')
# Print the numbers 1 through 5 separated by a comma and a space:
print(1,2,3,4,5,sep=', ')
"""
Explanation: The output indicates that print() has the following arguments value, ..., sep=' ', end='\n', file=sys.stdout, flush=False. Of the arguments, value, ... refers to an arbitrary set of values separated by commas and these are required. The other arguments are keyword arguments and they have default values (you can tell because of the equals signs) so they're optional. So suppose that I want to print out the numbers 1, 2, 3, 4, 5:
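To make the keyword arguments concrete, here is a short sketch combining sep and end:

```python
# sep sets the separator between printed values; end sets what is printed last
print(1, 2, 3)               # 1 2 3
print(1, 2, 3, sep=', ')     # 1, 2, 3
print('no newline', end=' ')
print('here')                # together these two lines print: no newline here
```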
End of explanation
"""
# print the value of an as-yet undefined variable called x
print(x)
"""
Explanation: You can find a list of the built-in functions for Python here: https://docs.python.org/3/library/functions.html. We'll encounter several of them later. In the meantime, try using the help() function to find out what some of the other built-in functions do.
Error Messages
If you spend more than about 30 seconds with Python, then you're going to encounter an error message. Fortunately, Python's error messages often give you the location of the error in your code and a description of the error. For example, the following block of code returns an error:
End of explanation
"""
# Set the value of x equal to the constant x = 2.71828... and print x
x = 2.71828
print(x)
"""
Explanation: In trying to print the value of the variable x, we're told that there is a NameError because 'x' is not defined. This means that a value for x should be assigned before calling the print() function. The following block works because it assigns a value to x before calling print():
End of explanation
"""
# Create a variable called x that is equal to 10
x = 10
# Create a variable called y that is equal to 3
y = 3
# Print the values of x and y
print('x:',x)
print('y:',y)
# Create a variable called z that is equal to x / y and print the value of z
z = x/y
print('z:',z)
"""
Explanation: You should carefully read error messages because most of the time they point directly to the problem.
Variables
A variable is a named location in the computer's memory that stores a value. When you create a variable, you specify the name of the variable and you assign a value to that variable using something like:
variable_name = value
Here
variable names are case-sensitive.
variable names can contain letters, numbers, and underscores "_"
variable names must start with a letter or an underscore
Some examples of valid variable names:
x, y, and z
variable1 and variable2
capital and labor
arg_max and arg_min
homeAddress and workAddress
You should try to give variables meaningful names to make your code easier to read. For example, if you have a variable that stores the value of GDP for a country, then naming the variable gdp or output instead of y will improve the readability of your code.
End of explanation
"""
# Print the types of x and z
print('type of x:',type(x))
print('type of z:',type(z))
"""
Explanation: Everything in Python is an object and all objects have types. Among other things, an object's type determines what operations may be performed on the object, so it's worth making sure that you know the type of every variable you create. You can check the object type of a variable using the built-in function type().
End of explanation
"""
# Create a variable called first_name and set it equal to your first name.
first_name='Brian'
# Create a variable called last_name and set it equal to your last name.
last_name='Jenkins'
# Print the type of the variable first_name
print(type(first_name))
"""
Explanation: Strings
Strings are representations of text. They are useful for printing the output of a routine, for supplying text properties of graphs like titles and legend entries, and for storing some types of data; e.g., html code for a webpage or names of a group of people.
Create a variable with a string value by using either single quotations ' or double quotations ". So, for example:
`first_name = 'Brian'`
`last_name = "Jenkins"`
will create two variables to store my first and last names. Use of double or single quotations is a matter of preference, but you should be consistent: pick which you prefer and stick with that. Note that if you want to create a string containing a single quote, then you should either use double quotes around the string characters or use \' for the single quote:
option1 = "Brian's"
option2 = 'Brian\'s'
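Both quoting styles store exactly the same string; a quick check using the example above:

```python
# Two ways to include an apostrophe in a string
option1 = "Brian's"   # double quotes around the whole string
option2 = 'Brian\'s'  # escaped single quote inside single quotes
print(option1 == option2)  # True -- both store the same characters
```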
End of explanation
"""
# Use string addition to define a new variable called first_last that
# is equal to your first and last names separated by a space and print
first_last = first_name+' '+last_name
print(first_last)
# Use string addition to define a new variable called last_first that
# is equal to your last and first names separated by a comma and space
# and print
last_first = last_name+', '+first_name
print(last_first)
"""
Explanation: Concatenating strings
Strings can be added together or concatenated using the "+" sign. For example:
'ant'+'eater'
returns:
'anteater'
End of explanation
"""
# Use string indexing to print the third letter of your first name
print(first_name[2])
# Use string indexing to print the first 2 letters of your first name
print(first_name[:2])
# Use string indexing to print the last 3 letters of your last name
print(last_name[-3:])
"""
Explanation: Indexing strings
Each character in a string is assigned a value called an index. String indices (and most indices in Python) begin with 0. So the first character of a string has an index of 0, the second has an index of 1, and so on.
You can use indices to slice into a string. For example, I can find the first letter of my name:
first_name[0]
returns:
'B'
Note that we use square brackets when slicing into a string. You can also use indices to slice out a range of values. For example, to print the first two letters of my name, I'd use:
first_name[0:2]
In the last command, Python interprets [0:2] to mean characters starting with an index of 0 up to but not including the character with an index of 2. And to print all but the first letter:
first_name[1:]
Here, Python interprets [1:] to mean every character with an index of 1 or greater. Finally, note that you can access the last character of a string by counting backward:
first_name[-1]
This is nice if you don't know how many characters are in a string.
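Putting these slicing rules together on the string 'anteater' from earlier:

```python
word = 'anteater'
print(word[0])    # 'a'  -- first character (index 0)
print(word[0:2])  # 'an' -- indices 0 up to, but not including, 2
print(word[1:])   # 'nteater' -- everything after the first character
print(word[-1])   # 'r'  -- last character, counting backward
```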
End of explanation
"""
# Use the lower() method on first_name to print your first name in all lowercase letters
print(first_name.lower())
# Use the upper() method to print your first name in all capital letters
print(first_name.upper())
"""
Explanation: Other string operations and methods
Without skipping ahead too much, it's worth knowing that some objects in Python have methods: special functions defined to run against an object. It's easiest to see what this means by example.
The lower() method is a string method that returns the value of a string with all letters replaced by lowercase letters.
drink = 'CoFfEe'
drink.lower()
will return:
'coffee'
Notice that we accessed the method by entering the variable name followed by a dot followed by the name of the method and then a set of parentheses. lower is a function that is defined only on string objects. We'll use methods extensively later on.
Notice that lower() does not affect the value of the variable. If we wanted to change the value of the variable drink to all lowercase letters, then we'd have to run:
drink = drink.lower()
See https://docs.python.org/2/library/stdtypes.html#string-methods for a list of available string methods.
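A short sketch of the pattern described above, including the reassignment step:

```python
drink = 'CoFfEe'
print(drink.lower())   # 'coffee'
print(drink.upper())   # 'COFFEE'
print(drink)           # 'CoFfEe' -- the original variable is unchanged
drink = drink.lower()  # reassign to actually change the stored value
print(drink)           # 'coffee'
```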
End of explanation
"""
# Print the square root of 3.
print(3**0.5)
"""
Explanation: Math and numbers
Python understands some basic number types: integers, floating point numbers (decimals), and complex numbers. It makes sense to distinguish between these types of numbers because computers store whole numbers like integers differently from floating point decimals.
The Python interpreter understands the following operations on numbers:
+ : add
- : subtract
* : multiply
/ : divide
** : exponentiate
Note that the caret "^" is not used for exponentiation; it has an entirely different use in Python (bitwise XOR).
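A quick tour of each operator, including a demonstration of why the caret is not exponentiation:

```python
print(2 + 3)   # 5
print(2 - 3)   # -1
print(2 * 3)   # 6
print(2 / 3)   # 0.6666666666666666
print(2 ** 3)  # 8  -- exponentiation uses **
print(2 ^ 3)   # 1  -- the caret is bitwise XOR, not exponentiation!
```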
End of explanation
"""
# Import the math module
import math
# Print the square root of 2
print(math.sqrt(2))
# Print the factorial of 5
print(math.factorial(5))
# Print the mathematical constants pi and e
print(math.pi)
print(math.e)
# Import the numpy module as np
import numpy as np
# Print the square root of 2
print(np.sqrt(2))
# Print the factorial of 5
print(np.math.factorial(5))
# Print the mathematical constants pi and e
print(np.math.pi)
print(np.math.e)
"""
Explanation: Modules for mathematics
The Python interpreter does not have built-in functions for common mathematical functions like the sine, cosine, log, or exponential functions. To access these functions you need to import one of several modules that provide them.
The math module is part of the standard Python library. See the documentation here: https://docs.python.org/2/library/math.html#module-math). Another widely-used resource is NumPy. NumPy is much more elaborate and powerful than math and is available with Anaconda installations. Here's the website for NumPy: http://www.numpy.org/. We'll look at NumPy later.
Suppose that you want to compute the natural log of 10 using the math module. There are two ways to go about this. First, you can import the math module and then use the math.log function:
import math
x = math.log(10)
With this approach, you import the entire module and then access the log() in the math namespace. An alternative would be to import only the function that you needed:
from math import log
x = log(10)
The first approach has the disadvantage that all of the functionality of the module is loaded into memory while only one function is actually needed. The advantage of the first approach though is that the log name is kept neatly in the math namespace.
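The two import styles side by side; both names point at the same function:

```python
import math
from math import log

print(math.log(10))               # natural log via the math namespace
print(log(10))                    # same function, imported directly
print(math.log(10) == log(10))    # True
```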
End of explanation
"""
# Print the types of 2 and 2.
print(type(2))
print(type(2.))
"""
Explanation: Note that in the previous examples, math.log and np.log refer to functions with the same names but in different namespaces.
More about numbers
Python distinguishes between integers (whole numbers) and floating point numbers (decimals). Floating point numbers have decimals in them. For example, while a person might regard 2 and 2. as the same number, Python views these as two different types of numbers:
End of explanation
"""
# Compute 1/3 and find the type of the result
print(type(1/3))
# Compute the square root of -1 using exponentiation and find the type
print(type((-1)**0.5))
"""
Explanation: Most of the time floating point numbers will work for us. Integers arise when working with things that can be enumerated (like arrays) or with data for which decimals would make no sense (e.g., ZIP codes). If a math expression involving only integers returns a non-integer, then Python returns a number with the appropriate type to represent the answer:
End of explanation
"""
# Try to add the integer 2 and the string '2'
2+'2'
"""
Explanation: The int(), float(), str(), and round() functions
In the same way that Python interprets the numbers 2 and 2. differently, Python also distinguishes between numbers and strings in which the characters happen to be numbers. For example, 2 and '2' may appear identical, but the first is a mathematical object and the other is simply a character string. Trying to add them together demonstrates the point:
End of explanation
"""
# Convert the floating point number 2.71828 to a string and verify the type of the result
e = str(2.71828)
print(e)
print(type(e))
"""
Explanation: The error indicates that it's not possible to add an integer to a string, partly because the + sign means something very different for numbers than for strings. Recall that + concatenates or joins strings, so the preceding code block doesn't make any sense.
Sometimes you may wish to convert a number to a string. For example, you may want to export some numerical results to a text file. The str() function converts integers and floats to a string.
End of explanation
"""
# Convert the floating point number 2.71828 to an integer and verify the type of the result
e = int(2.71828)
print(e)
print(type(e))
"""
Explanation: The int() function has two uses. First, it will convert a floating point number into an integer by dropping the digits after the decimal; for positive numbers this works like the mathematical floor function. Second, it will convert a string of digits, like '12', into an integer. For example:
End of explanation
"""
# Round the floating point number 2.71828 to the nearest integer and verify the type of the result
e = round(2.71828)
print(e)
print(type(e))
"""
Explanation: Note that the int() function truncates floating point numbers toward zero rather than rounding them, so the round() function is actually a little bit better at moving from floats to integers:
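A few examples contrasting int() and round(), with negative numbers as an edge case worth knowing:

```python
import math

print(int(2.9))          # 2  -- digits after the decimal are dropped
print(round(2.9))        # 3  -- rounds to the nearest integer
print(int(-2.9))         # -2 -- int() truncates toward zero...
print(math.floor(-2.9))  # -3 -- ...while a true floor rounds down
```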
End of explanation
"""
# Round the floating point number 2.71828 to the nearest hundreth decimal
e = round(2.71828,2)
print(e)
print(type(e))
"""
Explanation: round() is also versatile and allows you to optionally set the precision of the rounding.
End of explanation
"""
import ipywidgets as widgets
import os
image_path = os.path.abspath('../../data_files/trees.jpg')
with open(image_path, 'rb') as f:
raw_image = f.read()
ipyimage = widgets.Image(value=raw_image, format='jpg')
ipyimage
"""
Explanation: The Image Mark
Image is a Mark object used to visualize images in standard formats (png, jpg, etc.) in a bqplot Figure.
It takes an ipywidgets Image widget as input.
The ipywidgets Image
End of explanation
"""
from bqplot import *
# Create the scales for the image coordinates
scales={'x': LinearScale(), 'y': LinearScale()}
# Define the bqplot Image mark
image = Image(image=ipyimage, scales=scales)
# Create the bqplot Figure to display the mark
fig = Figure(title='Trees', marks=[image], padding_x=0, padding_y=0)
fig
"""
Explanation: Displaying the image inside a bqplot Figure
End of explanation
"""
scales = {'x': LinearScale(min=-1, max=2), 'y': LinearScale(min=-0.5, max=2)}
image = Image(image=ipyimage, scales=scales)
lines = Lines(x=[0, 1, 1, 0, 0], y=[0, 0, 1, 1, 0], scales=scales, colors=['red'])
fig = Figure(marks=[image, lines], padding_x=0, padding_y=0, animation_duration=1000)
fig.axes = [Axis(scale=scales['x']), Axis(scale=scales['y'], orientation='vertical')]
fig
"""
Explanation: Mixing with other marks
Image is a mark like any other, so they can be mixed and matched together.
End of explanation
"""
# Full screen
image.x = [-1, 2]
image.y = [-.5, 2]
"""
Explanation: Its traits (attributes) will also respond dynamically to a change from the backend
End of explanation
"""
import bqplot.pyplot as bqp
bqp.figure()
bqp.imshow(image_path, 'filename')
bqp.show()
"""
Explanation: Pyplot
It may seem verbose to first open the image file, create an ipywidgets Image, then create the scales and so forth.
The pyplot api does all of that for you, via the imshow function.
End of explanation
"""
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_band_induced_power
print(__doc__)
"""
Explanation: Compute induced power in the source space with dSPM
Returns STC files, i.e. source estimates of induced power
for different bands in the source space. The inverse method
is linear, based on the dSPM inverse operator.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax, event_id = -0.2, 0.5, 1
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
events = events[:10] # take 10 events to keep the computation time low
# Use linear detrend to reduce any edge artifacts
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True, detrend=1)
# Compute a source estimate per frequency band
bands = dict(alpha=[9, 11], beta=[18, 22])
stcs = source_band_induced_power(epochs, inverse_operator, bands, n_cycles=2,
use_fft=False, n_jobs=1)
for b, stc in stcs.items():
stc.save('induced_power_%s' % b)
"""
Explanation: Set parameters
End of explanation
"""
plt.plot(stcs['alpha'].times, stcs['alpha'].data.mean(axis=0), label='Alpha')
plt.plot(stcs['beta'].times, stcs['beta'].data.mean(axis=0), label='Beta')
plt.xlabel('Time (ms)')
plt.ylabel('Power')
plt.legend()
plt.title('Mean source induced power')
plt.show()
"""
Explanation: plot mean power
End of explanation
"""
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
"""
Explanation: Markov switching dynamic regression models
This notebook provides an example of the use of Markov switching models in statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
"""
# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds
dta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
# Plot the data
dta_fedfunds.plot(title='Federal funds rate', figsize=(12,3))
# Fit the model
# (a switching mean is the default of the MarkovRegession model)
mod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)
res_fedfunds = mod_fedfunds.fit()
res_fedfunds.summary()
"""
Explanation: Federal funds rate with switching intercept
The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply:
$$r_t = \mu_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \sigma^2$.
The data used in this example can be found at https://www.stata-press.com/data/r14/usmacro.
End of explanation
"""
res_fedfunds.smoothed_marginal_probabilities[1].plot(
title='Probability of being in the high regime', figsize=(12,3));
"""
Explanation: From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$ whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980's was a time-period in which a high federal funds rate existed.
End of explanation
"""
print(res_fedfunds.expected_durations)
"""
Explanation: From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.
End of explanation
"""
# Fit the model
mod_fedfunds2 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])
res_fedfunds2 = mod_fedfunds2.fit()
res_fedfunds2.summary()
"""
Explanation: A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.
Federal funds rate with switching intercept and lagged dependent variable
The second example augments the previous model to include the lagged value of the federal funds rate.
$$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma^2$.
End of explanation
"""
res_fedfunds2.smoothed_marginal_probabilities[0].plot(
title='Probability of being in the high regime', figsize=(12,3));
"""
Explanation: There are several things to notice from the summary output:
The information criteria have decreased substantially, indicating that this model has a better fit than the previous model.
The interpretation of the regimes, in terms of the intercept, have switched. Now the first regime has the higher intercept and the second regime has a lower intercept.
Examining the smoothed probabilities of the high regime state, we now see quite a bit more variability.
End of explanation
"""
print(res_fedfunds2.expected_durations)
"""
Explanation: Finally, the expected durations of each regime have decreased quite a bit.
End of explanation
"""
# Get the additional data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf
dta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
dta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
exog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]
# Fit the 2-regime model
mod_fedfunds3 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)
res_fedfunds3 = mod_fedfunds3.fit()
# Fit the 3-regime model
np.random.seed(12345)
mod_fedfunds4 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)
res_fedfunds4 = mod_fedfunds4.fit(search_reps=20)
res_fedfunds3.summary()
res_fedfunds4.summary()
"""
Explanation: Taylor rule with 2 or 3 regimes
We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better.
Because the models can often be difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.
End of explanation
"""
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-interest rate regime')
ax = axes[1]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-interest rate regime')
ax = axes[2]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-interest rate regime')
fig.tight_layout()
"""
Explanation: Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.
End of explanation
"""
# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns
dta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))
# Plot the data
dta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3))
# Fit the model
mod_areturns = sm.tsa.MarkovRegression(
dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1], switching_variance=True)
res_areturns = mod_areturns.fit()
res_areturns.summary()
"""
Explanation: Switching variances
We can also accommodate switching variances. In particular, we consider the model
$$
y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2)
$$
We use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma_0^2, \sigma_1^2$.
The application is to absolute returns on stocks, where the data can be found at https://www.stata-press.com/data/r14/snp500.
End of explanation
"""
res_areturns.smoothed_marginal_probabilities[0].plot(
title='Probability of being in a low-variance regime', figsize=(12,3));
"""
Explanation: The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy.
End of explanation
"""
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern
"""
Explanation: Planar Point Patterns in PySAL
Author: Serge Rey sjsrey@gmail.com and Wei Kang weikang9009@gmail.com
Introduction
This notebook introduces the basic PointPattern class in PySAL and covers the following:
What is a point pattern?
Creating Point Patterns
Atributes of Point Patterns
Intensity Estimates
Next steps
What is a point pattern?
We introduce basic terminology here and point the interested reader to more detailed references on the underlying theory of the statistical analysis of point patterns.
Points and Event Points
To start we consider a series of point locations, $(s_1, s_2, \ldots, s_n)$ in a study region $\Re$. We limit our focus here to a two-dimensional space so that $s_j = (x_j, y_j)$ is the spatial coordinate pair for point location $j$.
We will be interested in two different types of points.
Event Points
Event Points are locations where something of interest has occurred. The term event is very general here and could be used to represent a wide variety of phenomena. Some examples include:
locations of individual plants of a certain species
archeological sites
addresses of disease cases
locations of crimes
the distribution of neurons
among many others.
It is important to recognize that in the statistical analysis of point patterns the interest extends beyond the observed point pattern at hand.
The observed patterns are viewed as realizations from some underlying spatial stochastic process.
Arbitrary Points
The second type of point we consider is those locations where the phenomenon of interest has not been observed. These go by various names such as "empty space" or "regular" points, and at first glance might seem less interesting to a spatial analyst. However, these types of points play a central role in a class of point pattern methods that we explore below.
Point Pattern Analysis
The analysis of event points focuses on a number of different characteristics of the collective spatial pattern that is observed. Often the pattern is judged against the hypothesis of complete spatial randomness (CSR). That is, one assumes that the point events arise independently of one another and with constant probability across $\Re$, loosely speaking.
Of course, many of the empirical point patterns we encounter do not appear to be generated from such a simple stochastic process. The departures from CSR can be due to two types of effects.
First order effects
For a point process, the first-order properties pertain to the intensity of the process across space. Whether and how the intensity of the point pattern varies within our study region are questions that assume center stage. Such variation in the intensity of the pattern of, say, addresses of individuals with a certain type of non-infectious disease may reflect the underlying population density. In other words, although the point pattern of disease cases may display variation in intensity in our study region, and thus violate the constant probability of an event condition, that spatial drift in the pattern intensity could be driven by an underlying covariate.
Second order effects
The second channel by which departures from CSR can arise is through interaction and dependence between events in space. The canonical example being contagious diseases whereby the presence of an infected individual increases the probability of subsequent additional cases nearby.
When a pattern departs from expectation under CSR, this is suggestive that the underlying process may have some spatial structure that merits further investigation. Thus methods for detection of deviations from CSR and testing for alternative processes have given rise to a large literature in point pattern statistics.
Methods of Point Pattern Analysis in PySAL
The points module in PySAL implements basic methods of point pattern analysis organized into the following groups:
Point Processing
Centrography and Visualization
Quadrat Based Methods
Distance Based Methods
In the remainder of this notebook we shall focus on point processing.
End of explanation
"""
points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21],
[9.47, 31.02], [30.78, 60.10], [75.21, 58.93],
[79.26, 7.68], [8.23, 39.93], [98.73, 77.17],
[89.78, 42.53], [65.19, 92.08], [54.46, 8.48]]
p1 = PointPattern(points)
p1.mbb
"""
Explanation: Creating Point Patterns
From lists
We can build a point pattern by using Python lists of coordinate pairs $(s_0, s_1,\ldots, s_m)$ as follows:
End of explanation
"""
p1.summary()
type(p1.points)
np.asarray(p1.points)
p1.mbb
"""
Explanation: Thus $s_0 = (66.22, 32.54), \ s_{11}=(54.46, 8.48)$.
End of explanation
"""
points = np.asarray(points)
points
p1_np = PointPattern(points)
p1_np.summary()
"""
Explanation: From numpy arrays
End of explanation
"""
f = ps.examples.get_path('vautm17n_points.shp')
fo = ps.io.open(f)
pp_va = PointPattern(np.asarray([pnt for pnt in fo]))
fo.close()
pp_va.summary()
"""
Explanation: From shapefiles
This example uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
End of explanation
"""
pp_va.summary()
pp_va.points
pp_va.head()
pp_va.tail()
"""
Explanation: Attributes of PySAL Point Patterns
End of explanation
"""
pp_va.lambda_mbb
"""
Explanation: Intensity Estimates
The intensity of a point process at point $s_j$ can be defined as:
$$\lambda(s_j) = \lim\limits_{|\mathbf{A}s_j| \to 0} \left\{ \frac{E(Y(\mathbf{A}s_j))}{|\mathbf{A}s_j|} \right\}$$
where $\mathbf{A}s_j$ is a small region surrounding location $s_j$ with area $|\mathbf{A}s_j|$, and $E(Y(\mathbf{A}s_j))$ is the expected number of event points in $\mathbf{A}s_j$.
The intensity is the mean number of event points per unit of area at point $s_j$.
Recall that one of the implications of CSR is that the intensity of the point process is constant in our study area $\Re$. In other words $\lambda(s_j) = \lambda(s_{j+1}) = \ldots = \lambda(s_n) = \lambda \ \forall s_j \in \Re$. Thus, if the area of $\Re$ = $|\Re|$ the expected number of event points in the study region is: $E(Y(\Re)) = \lambda |\Re|.$
In PySAL, the intensity is estimated by using a geometric object to encode the study region. We refer to this as the window, $W$. The reason for distinguishing between $\Re$ and $W$ is that the latter permits alternative definitions of the bounding object.
Intensity estimates are based on the following:
$$\hat{\lambda} = \frac{n}{|W|}$$
where $n$ is the number of points in the window $W$, and $|W|$ is the area of $W$.
Intensity based on minimum bounding box:
$$\hat{\lambda}_{mbb} = \frac{n}{|W_{mbb}|}$$
where $W_{mbb}$ is the minimum bounding box for the point pattern.
End of explanation
"""
pp_va.lambda_hull
"""
Explanation: Intensity based on convex hull:
$$\hat{\lambda}_{hull} = \frac{n}{|W_{hull}|}$$
where $W_{hull}$ is the convex hull for the point pattern.
End of explanation
"""
|
bbglab/adventofcode | 2016/BBGÀgora 20161201.ipynb | mit | [n * 2 for n in range(10) if n % 2 == 1]
# Also a dict
{n: n * 2 for n in range(10) if n % 2 == 1}
# Or a set
{n * 2 for n in range(10) if n % 2 == 1}
"""
Explanation: BBGÀgora - Advent of code
Install conda
Add conda channels
Install jupyter notebook
Create a conda environment
Register to github
Clone a github repository
Register to Advent of Code
Add a new folder and create your notebook
Python list comprehension
Python iterables and iterators
Python generators
Usage of CSV and ZIP python package to manipulate big TSV files
Fast (and beautiful) command line creation
Anaconda distribution
Conda is a package management system. It is language agnostic: you can install Python, R, and many other tools.
Tabix
conda create -n tabix -c bioconda htslib
Bedtools
conda create -n bedtools -c bioconda bedtools
R
conda create -n r -c r r
What is a conda "channel"?
It's a repository of conda packages
Check your channels:
conda config --show
Check this URL:
https://bioconda.github.io/
conda config --add channels conda-forge
conda config --add channels defaults
conda config --add channels r
conda config --add channels bioconda
conda config --add channels bbglab
What is a conda "environment"?
It's only a change on the PATH
```
echo $PATH
source activate tabix
echo $PATH
ll ~/anaconda3/envs/tabix
```
Github
Register to github
Add your RSA public key to settings cat ~/.ssh/id_rsa.pub (If you don't have RSA key run first ssh-keygen)
Clone advent of code repository somewhere git clone git@github.com:bbglab/adventofcode.git
# What is a RSA public-private key?
What is a Git repository?
ll .git
git log
ll .git/objects/7d
Jupyter notebook
Check that you have jupyter notebook installed at the "root" environment
conda install jupyter notebook
Create an environment for the Advent of Code project
conda create -n adventofcode python=3.5 ipykernel
Create a folder like:
mkdir 2016/jordi
Run jupyter notebook
jupyter notebook
What is a Jupyter Kernel?
It's an independent process with his own environment variables.
Start some notebooks and check the running processes
ps -AF | grep kernel
Python list comprehension
List comprehensions are a tool for transforming one list (any iterable actually) into another list. During this transformation, elements can be conditionally included in the new list and each element can be transformed as needed.
End of explanation
"""
# ITERABLE: Anything that you can use in a for is an iterable
for a in [1,2,3]:
print(a)
# ITERATOR: an iterator steps through an iterable one element at a time
list_iterator = iter([1,2,3])
print(next(list_iterator))
print(next(list_iterator))
print(next(list_iterator))
list_iterator = iter([1,2,3])
print(next(list_iterator))
print(next(list_iterator))
print(next(list_iterator))
print(next(list_iterator))
"""
Explanation: Python iterables and iterators
End of explanation
"""
[n * 2 for n in range(10) if n % 2 == 1]
# Convert a list comprehension to a generator comprehension
generator = (n * 2 for n in range(10) if n % 2 == 1)
generator
iterator = iter(generator)
next(iterator)
def odd_double(size=10):
for n in range(size):
if n % 2 == 1:
yield n*2
generator = odd_double()
iterator = iter(generator)
next(iterator)
list(odd_double(15))
"""
Explanation: Python generators
End of explanation
"""
import bgdata, csv, gzip, os, pandas
from pprint import pprint
domains = os.path.expanduser('~/tmp/domains.tsv.gz')
# If you want to test use this:
# domains = os.path.join(bgdata.get_path('tcgi', 'oncodrivemut', '1.1'), 'ensembl75_pfam_domain_coordinates.tsv.gz')
%%time
df = pandas.read_csv(domains, sep='\t')
result = df[df['Ensembl Gene ID'] == 'ENSG00000261258'].head(1).to_dict(orient='records')
pprint(result)
print('\n')
%%time
with gzip.open(domains, 'rt') as fd:
for r in csv.DictReader(fd, delimiter='\t'):
if r['Ensembl Gene ID'] == 'ENSG00000261258':
pprint(r)
print('\n')
break
%%time
with gzip.open(domains, 'rt') as fd:
reader = csv.reader(fd, delimiter='\t')
header = next(reader)
for r in reader:
if r[0] == 'ENSG00000261258':
pprint({h: v for h,v in zip(header, r)})
print('\n')
break
%%time
with gzip.open(domains, 'rt') as fd:
header = next(fd).split('\t')
reader = csv.reader((l for l in fd if l.startswith('ENSG00000261258')), delimiter='\t')
for r in reader:
if r[0] == 'ENSG00000261258':
pprint({h: v for h,v in zip(header, r)})
print('\n')
break
"""
Explanation: Manage big tsv files
End of explanation
"""
|
vravishankar/Jupyter-Books | List+Comprehensions.ipynb | mit | # Simple List Comprehension
nums = [x for x in range(5)]  # avoid the name `list`, which shadows the built-in
print(nums)
"""
Explanation: List Comprehensions
List comprehensions are a quick and concise way to create lists. A list comprehension consists of an expression, followed by a for clause and then zero or more for or if clauses. The result is a new list.
It is generally in the form of
returned_list = [<expression> <for x in current_list> <if filter(x)>]
Written as an explicit loop, this is generally equivalent to:
for <item> in <list>
if (<condition>):
<expression>
Example 1
End of explanation
"""
# Generate Squares for 10 numbers
list1 = [x**2 for x in range(10)]
print(list1)
"""
Explanation: Example 2
End of explanation
"""
# List comprehension with a filter condition
list2 = [x**2 for x in range(10) if x%2 == 0]
print(list2)
"""
Explanation: Example 3
End of explanation
"""
# Use list comprehension to filter out numbers
words = "Hello 12345 World".split()
numbers = [w for w in words if w.isdigit()]
print(numbers)
"""
Explanation: Example 4
End of explanation
"""
words = "An apple a day keeps the doctor away".split()
vowels = [w.upper() for w in words if w.lower().startswith(('a','e','i','o','u'))]
for vowel in vowels:
print(vowel)
"""
Explanation: Example 5
End of explanation
"""
list5 = [x + y for x in [1,2,3,4,5] for y in [10,11,12,13,14]]
print(list5)
"""
Explanation: Example 6
End of explanation
"""
# create 3 lists
list_1 = [1,2,3]
list_2 = [3,4,5]
list_3 = [7,8,9]
# create a matrix
matrix = [list_1,list_2,list_3]
# get the first column
first_col = [row[0] for row in matrix]
print(first_col)
"""
Explanation: Example 7
End of explanation
"""
|
open-hluttaw/notebooks | Open Hluttaw API Examples.ipynb | gpl-3.0 | #List all committees
query = 'classification:Committee'
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query)
pages = r.json()['num_pages']
committees = []
for page in range(1,pages+1):
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query+'&page='+str(page))
orgs = r.json()['results']
for org in orgs:
committees.append(org)
for committee in committees:
print(committee['name'])
import json
json_export = []
for committee in committees:
json_export.append({'name':committee['name'],
'id':committee['id']})
print(json.dumps(json_export, sort_keys=True, indent=4))
# Looking up a specific committee lists all members and person details.
# A committee is just an Organization, the same class used for parties and for the upper and lower houses.
#"name": "Amyotha Hluttaw Local and Overseas Employment Committee"
r = requests.get('http://api.openhluttaw.org/en/organizations/9f3448056d2b48e1805475a45a4ae1ed')
committee = r.json()['result']
#List committee members
# missing on behalf_of expanded for organizations https://github.com/Sinar/popit_ng/issues/200
for member in committee['memberships']:
print(member['person']['id'])
print(member['person']['name'])
print(member['person']['image'])
"""
Explanation: Committees
End of explanation
"""
# List all parties
query = 'classification:Party'
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query)
pages = r.json()['num_pages']
parties = []
for page in range(1,pages+1):
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query+'&page='+str(page))
orgs = r.json()['results']
for org in orgs:
parties.append(org)
# BUG in https://github.com/Sinar/popit_ng/issues/197
# use the JSON party lookup below to look up values directly on the client side
for party in parties:
print(party['name'])
"""
Explanation: Party
End of explanation
"""
#Listing people by party in org Amyotha 897739b2831e41109713ac9d8a96c845
#Pyithu org id would be 7f162ebef80e4a4aba12361ea1151fce
#We list by membership and specific organization_id and on_behalf_of_id of parties above
#Amyotha Members represented by Arakan National Party
query = 'organization_id:897739b2831e41109713ac9d8a96c845 AND on_behalf_of_id:016a8ad7b40343ba96e0c03f47019680'
r = requests.get('http://api.openhluttaw.org/en/search/memberships?q='+query)
pages = r.json()['num_pages']
memberships = []
for page in range(1,pages+1):
r = requests.get('http://api.openhluttaw.org/en/search/memberships?q='+query+'&page='+str(page))
members = r.json()['results']
for member in members:
memberships.append(member)
for member in memberships:
print(member['post']['label'])
print(member['person']['id'])
print(member['person']['name'])
print(member['person']['image'])
"""
Explanation: [
{
"id": "fd24165b8e814a758cd1098dc7a9038a",
"name": "National League for Democracy"
},
{
"id": "9462adf5cffa41c386e621fee28c59eb",
"name": "Union Solidarity and Development Party"
},
{
"id": "7997379fe27c4e448af522c85e306bfb",
"name": "\"Wa\" Democratic Party"
},
{
"id": "90e4903937bf4b8ba9185157dde06345",
"name": "Kokang Democracy and Unity Party"
},
{
"id": "2d2c795149c74b6f91cdea8caf28e968",
"name": "Zomi Congress for Democracy"
},
{
"id": "b366273152a84d579c4e19b14d36c0b5",
"name": "Ta'Arng Palaung National Party"
},
{
"id": "f2189158953e4d9e9296efeeffe7cf35",
"name": "National Unity Party"
},
{
"id": "d53d27fef3ac4b2bb4b7bf346215f626",
"name": "Pao National Organization"
},
{
"id": "2f0c09d5eb05432d8fcf247b5cb1885f",
"name": "Mon National Party"
},
{
"id": "dc69205c7eb54a7aaf68b3d2e3d9c23e",
"name": "Rakhine National Party"
},
{
"id": "e67bf2cdb4ff4ce89167cba3a514a6df",
"name": "Shan Nationalities League for Democracy"
},
{
"id": "a7a1ac9d2f20470d87e556af41dfaa19",
"name": "Lisu National Development Party"
},
{
"id": "8cc2d69bed8743bbaa229b164afecf9a",
"name": "Independent"
},
{
"id": "016a8ad7b40343ba96e0c03f47019680",
"name": "Arakan National Party"
},
{
"id": "6e76561e385946e0a3761d4f25293912",
"name": "The Taaung (Palaung) National Party"
},
{
"id": "63ec5681df974c67b7a217873fa9cdf5",
"name": "Kachin Democratic Party"
}
]
End of explanation
"""
|
ocean-color-ac-challenge/evaluate-pearson | evaluation.ipynb | apache-2.0 | w_412 = 0.56
w_443 = 0.73
w_490 = 0.71
w_510 = 0.36
w_560 = 0.01
"""
Explanation: E-CEO Challenge #3 Evaluation
Weights
Define the weight of each wavelength
End of explanation
"""
run_id = '0000021-150601000007545-oozie-oozi-W'
run_meta = 'http://sb-10-16-10-53.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000021-150601000007545-oozie-oozi-W/results.metalink'
participant = 'participant-a'
"""
Explanation: Run
Provide the run information:
* run id
* run metalink containing the 3 by 3 kernel extractions
* participant
End of explanation
"""
import glob
import pandas as pd
from scipy.stats.stats import pearsonr
import numpy
import math
"""
Explanation: Define all imports in a single cell
End of explanation
"""
!curl http://sb-10-16-10-53.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000021-150601000007545-oozie-oozi-W/results.metalink | aria2c -d participant-a -M -
path = participant # use your path
allFiles = glob.glob(path + "/*.txt")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
df = pd.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pd.concat(list_)
"""
Explanation: Manage run results
Download the results and aggregate them in a single Pandas dataframe
End of explanation
"""
len(frame.index)
"""
Explanation: Number of points extracted from MERIS level 2 products
End of explanation
"""
insitu_path = './insitu/AAOT.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "AAOT"'), insitu, how='inner', on = ['Date', 'ORBIT'])
# .ix is removed in modern pandas; .iloc does the same positional indexing
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_aaot_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_aaot_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_aaot_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
r_aaot_510 = 0
print("0 observations for band @510")
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_aaot_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
insitu_path = './insitu/BOUSS.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "BOUS"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_bous_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_bous_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_bous_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_bous_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_bous_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
insitu_path = './insitu/MOBY.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "MOBY"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_moby_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_moby_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_moby_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_moby_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_moby_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
[r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560]
[r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560]
[r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560]
r_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \
+ numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \
+ numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \
+ numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \
+ numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \
/ (w_412 + w_443 + w_490 + w_510 + w_560)
r_final
"""
Explanation: Calculate Pearson
For all three sites, AAOT, BOUSSOLE and MOBY, calculate the Pearson factor for each band.
Note AAOT does not have measurements for band @510
AAOT site
End of explanation
"""
|
gwu-libraries/notebooks | 20180511-pyspark-elasticsearch/PySpark-ElasticSearch.ipynb | mit | import os
import pyspark
# Add the elasticsearch-hadoop jar
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /home/jovyan/elasticsearch-hadoop-6.2.2.jar pyspark-shell'
conf = pyspark.SparkConf()
# Point to the master.
conf.setMaster("spark://tweetsets.library.gwu.edu:7101")
conf.setAppName("pyspark-elasticsearch-demo")
conf.set("spark.driver.bindAddress", "0.0.0.0")
# Don't hog all of the cores.
conf.set("spark.cores.max", "3")
# Specify a port for the block manager (which runs as part of the worker). The range 7003-7028 is set
# to be open in the Spark worker container.
conf.set("spark.blockManager.port", "7004")
# create the context
sc = pyspark.SparkContext(conf=conf)
"""
Explanation: This notebook demonstrates using PySpark to analyze tweets stored in ElasticSearch.
Setting up the ElasticSearch and Spark cluster is described in https://github.com/justinlittman/TweetSets.
To run pyspark-notebook:
1. Get a copy of the elasticsearch-hadoop jar (elasticsearch-hadoop-6.2.2.jar).
2. Run (adjusting linked directories and ports as necessary): docker run -it --rm -p 8888:8888 --net=host --pid=host -e TINI_SUBREAPER=true -v ~/notebooks:/home/jovyan/work -v ~/elasticsearch-hadoop-6.2.2.jar:/home/jovyan/elasticsearch-hadoop-6.2.2.jar jupyter/pyspark-notebook
A few notes:
* pyspark-notebook requires Python 3.6 and Spark 2.3. For the Spark cluster, gettyimages/spark was customized to be based on python:3.6-jessie (since by default, it uses Python 3.4.)
* The networking for Spark is hugely confusing and relies heavily on random ports. This doesn't play well with Docker, but I think I got it right.
Create the Spark Context.
End of explanation
"""
# Configure for ElasticSearch cluster and index.
es_conf = {"es.nodes": "tweetsets.library.gwu.edu",
"es.port": "9200",
"es.resource": "tweets-ba2157/doc"}
tweets_rdd = sc.newAPIHadoopRDD("org.elasticsearch.hadoop.mr.EsInputFormat",\
"org.apache.hadoop.io.NullWritable", "org.elasticsearch.hadoop.mr.LinkedMapWritable", conf=es_conf)
"""
Explanation: Using RDD
Create an RDD from the ElasticSearch index.
End of explanation
"""
tweets_rdd.first()
"""
Explanation: Retrieve the first element from the RDD.
End of explanation
"""
tweets_rdd.flatMap(lambda t: t[1]['hashtags']).map(lambda x: (x, 1)).reduceByKey(lambda x,y: x + y).sortBy(lambda x: x[1], ascending=False).take(10)
"""
Explanation: Get the top hashtags
End of explanation
"""
import json
parsed_tweets_rdd = tweets_rdd.map(lambda x: json.loads(x[1]['tweet'])).persist()
parsed_tweets_rdd.map(lambda t: (t['user']['lang'], 1)).reduceByKey(lambda x,y: x + y).sortBy(lambda x: x[1], ascending=False).take(10)
"""
Explanation: Get the top user languages
By parsing and extracting from each tweet since it is not already a field.
End of explanation
"""
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
es_conf['es.read.field.as.array.include'] = 'hashtags,text,urls'
tweets_df = sqlContext.read.format("org.elasticsearch.spark.sql").options(**es_conf).load()
tweets_df.createOrReplaceTempView("tweets")
"""
Explanation: Using a SQL table
Create SQL table
End of explanation
"""
tweets_df.printSchema()
"""
Explanation: Print schema
End of explanation
"""
tz_df = sqlContext.sql("SELECT user_time_zone, count(user_time_zone) FROM tweets group by user_time_zone order by count(user_time_zone) desc")
tz_df.show(10, truncate=False)
"""
Explanation: Get the time zone
End of explanation
"""
hashtags_df = sqlContext.sql("SELECT hashtag, count(hashtag) from (SELECT explode(hashtags) hashtag FROM tweets) group by hashtag order by count(hashtag) desc")
hashtags_df.show(10, truncate=False)
"""
Explanation: Get the top hashtags
End of explanation
"""
urls_df = sqlContext.sql("SELECT url, count(url) from (SELECT explode(urls) url FROM tweets) where not url like 'http://twitter.com%' group by url order by count(url) desc")
urls_df.show(10, truncate=False)
"""
Explanation: Get the top URLs
End of explanation
"""
rt_df = sqlContext.sql("SELECT CONCAT('https://twitter.com/', retweeted_quoted_screen_name, '/status/', retweet_quoted_status_id), count(retweet_quoted_status_id) FROM tweets group by retweet_quoted_status_id, retweeted_quoted_screen_name order by count(retweet_quoted_status_id) desc")
rt_df.show(10, truncate=False)
"""
Explanation: Get the top retweets
End of explanation
"""
from pyspark.ml.feature import RegexTokenizer, NGram, StopWordsRemover
from pyspark.sql.functions import sort_array, udf, explode
from pyspark.sql.types import ArrayType, StringType
# Text (using distinct)
text_df = tweets_df.select(explode("text").alias("text")).distinct()
# Tokenize
tokenizer = RegexTokenizer(pattern="([:\.!?,]|'s|’s)*\\s+[‘]*", inputCol="text", outputCol="words")
tokenized_df = tokenizer.transform(text_df)
# Stopwords
stop_words = StopWordsRemover.loadDefaultStopWords('english')
stop_words.extend(['rt', ' ', '-', '&', 'it’s', '', 'may', 'see', 'want', 'i’m', 'us', 'make', "we've", "you're", "you've", "don't", "i’ve", 'it', 'they’re', 'don’t', 'lets', 'add'])
remover = StopWordsRemover(inputCol="words", outputCol="filtered_words", stopWords=stop_words)
filtered_df = remover.transform(tokenized_df)
# Remove hashtags and URLs and dupes
def clean(arr):
new_arr = set()
for item in arr:
add_to_arr = True
for startswith in ('#', 'http'):
if item.startswith(startswith):
add_to_arr = False
if add_to_arr:
new_arr.add(item)
return list(new_arr)
clean_udf = udf(lambda arr: clean(arr), ArrayType(StringType()))
clean_df = filtered_df.withColumn("clean_words", clean_udf(filtered_df.filtered_words))
# Sort the words
sorted_df = clean_df.select(sort_array('clean_words').alias('sorted_words'))
ngram = NGram(n=3, inputCol="sorted_words", outputCol="ngrams")
ngram_df = ngram.transform(sorted_df).select(explode('ngrams').alias('ngrams'))
ngram_df.groupBy('ngrams').count().orderBy('count', ascending=False).show(20, truncate=False)
"""
Explanation: Get the top trigrams
End of explanation
"""
|
xdnian/pyml | code/ch11/ch11.ipynb | mit | %load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,scipy,sklearn
"""
Explanation: Copyright (c) 2015, 2016 Sebastian Raschka
<br>
2016 Li-Yi Wei
https://github.com/1iyiwei/pyml
MIT License
Python Machine Learning - Code Examples
Chapter 11 - Working with Unlabeled Data – Clustering Analysis
Supervised learning
* classification
<img src = "./images/01_03.png">
* regression
<img src = "./images/01_04.png" width=50%>
Unsupervised learning
* dimensionality reduction
<img src="./images/01_07.png">
* clustering
<img src="./images/01_06.png" width=50%>
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
from IPython.display import Image
%matplotlib inline
"""
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
Overview
Grouping objects by similarity using k-means
K-means++
Hard versus soft clustering
Using the elbow method to find the optimal number of clusters
Quantifying the quality of clustering via silhouette plots
Organizing clusters as a hierarchical tree
Performing hierarchical clustering on a distance matrix
Attaching dendrograms to a heat map
Applying agglomerative clustering via scikit-learn
Locating regions of high density via DBSCAN
Summary
End of explanation
"""
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=150,
n_features=2,
centers=3,
cluster_std=0.5,
shuffle=True,
random_state=0)
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1], c='white', marker='o', s=50)
plt.grid()
plt.tight_layout()
#plt.savefig('./figures/spheres.png', dpi=300)
plt.show()
import matplotlib.pyplot as plt
import numpy as np
# global plot options for consistency
def plot_options():
markers = ('s', 'x', 'o', '^', 'v')
#colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
colors = ['lightgreen', 'orange', 'lightblue', 'gray', 'cyan']
center_marker = '*'
center_color = 'red'
return markers, colors, center_marker, center_color
def plot_clusters(X, y, centers):
markers, colors, center_marker, center_color = plot_options()
num_clusters = np.unique(y).shape[0]
for k in range(num_clusters):
color = colors[k]
marker = markers[k]
plt.scatter(X[y == k, 0], X[y == k, 1],
s=50, c=color, marker=marker,
label = 'cluster ' + str(k+1))
plt.scatter(centers[:, 0], centers[:, 1],
s=250, marker=center_marker, c=center_color,
label='centroids')
# same code from the classifiers
# applicable to any model with predict() method
# here, y is the cluster labels,
# instead of ground truth as in supervised classification
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, model, resolution=0.02):
# setup marker generator and color map
markers, colors, center_marker, center_color = plot_options()
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = model.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
"""
Explanation: Grouping objects by similarity using k-means
Cluster similar objects (data samples) $\mathbf{X}$ together
* documents, music, video
* users with similar profiles or interests
The number of clusters (k) is a hyper-parameter
* manual specification
* automatic determination
The distance measure $d(x^{(i)}, x^{(j)})$ can be another hyper-parameter
* Euclidean ($L_2$) is a common choice
<img src="./images/01_06.png" width=50%>
K-means clustering example
Create and visualize the data set
End of explanation
"""
from sklearn.cluster import KMeans
num_clusters = 3
num_iterations = 3
for iteration in range(num_iterations):
km = KMeans(n_clusters=num_clusters,
init='random',
n_init=1,
max_iter=iteration+1,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
plot_decision_regions(X, y_km, km)
plot_clusters(X, y_km, km.cluster_centers_)
plt.legend()
plt.title('# of iteration: ' + str(iteration))
plt.grid()
plt.tight_layout()
plt.show()
# estimate inertia, i.e. sum of distances from samples to nearest cluster centers
num_trials = 5
num_iterations = 10
inertia = np.zeros(num_iterations)
for iteration in range(num_iterations):
cost = 0
for trial in range(num_trials):
km = KMeans(n_clusters=num_clusters,
init='random',
n_init=1,
max_iter=iteration+1,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
cost = cost + km.inertia_
inertia[iteration] = cost/num_trials
#print(inertia)
plt.plot(range(1, len(inertia)+1), inertia, marker='o')
plt.xlabel('iterations')
plt.ylabel('sum of distances of samples to closest centroids')
plt.tight_layout()
plt.show()
"""
Explanation: First iteration
Not good clustering
More iterations
Better clustering results
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('S0sAnabdCLg')
"""
Explanation: K-means clustering algorithm
Input: a set of samples $\mathbf{X}$
* without labels $\mathbf{y}$ as in supervised classification
Distance measure: assume $L_2$ for simplicity and commonality
$$
\begin{align}
d\left(x^{(i)}, x^{(j)}\right)
&= \|x^{(i)} - x^{(j)} \|_2
\\
&= \sqrt{\sum_k \left(x^{(i)}_k - x^{(j)}_k \right)^2}
\end{align}
$$
Steps:
* Initialization: pick (e.g. randomly) $k$ centroids $\mathbf{U} = {\mu^{(j)}}, j \in {1, \cdots, k }$ from $\mathbf{X}$ as initial cluster centers
Voronoi: assign each sample $x^{(i)} \in \mathbf{X}$ to the nearest centroid $\mu\left(x^{(i)}\right)$
$$
\begin{align}
\mu\left(x^{(i)}\right) = argmin_{\mu \in \mathbf{U}} d(x^{(i)}, \mu)
\end{align}
$$
Centroid: move each $\mu^{(j)} \in \mathbf{U}$ to the center of the samples that were assigned to it (i.e. the cluster of $\mu^{(j)}$)
$$
\begin{align}
\mu^{(j)} &= \frac{1}{\sum w_{ij}} \sum w_{ij} x^{(i)}
\\
w_{ij} &=
\begin{cases}
1 & \text{if } \mu\left( x^{(i)}\right) = \mu^{(j)}
\\
0 & \text{else}
\end{cases}
\end{align}
$$
Repeat the Voronoi and centroid steps until the cluster assignments do not change much or a maximum number of iterations is reached
Visualizing the clustering process
https://www.youtube.com/watch?v=S0sAnabdCLg
End of explanation
"""
from sklearn.cluster import KMeans
init_options = ['random', 'k-means++']
for init_option in init_options:
km = KMeans(n_clusters=3,
init=init_option,
n_init=1, # just one trial to help visualization
max_iter=1, # just 1 iteration to help visualize the initial condition
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
plot_decision_regions(X, y_km, km)
plot_clusters(X, y_km, km.cluster_centers_)
plt.title(init_option)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
"""
Explanation: aka <a href="https://en.wikipedia.org/wiki/Lloyd%27s_algorithm">Lloyd's algorithm</a>
Maroon dot: previous centers
Black cross: next centers
Iteration 1:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/LloydsMethod1.svg/200px-LloydsMethod1.svg.png">
Iteration 2:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/77/LloydsMethod2.svg/200px-LloydsMethod2.svg.png">
Iteration 3:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/fb/LloydsMethod3.svg/200px-LloydsMethod3.svg.png">
Iteration 15:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/LloydsMethod15.svg/200px-LloydsMethod15.svg.png">
Background and intuition
$\mathbf{X} = {x^{(i)}}$: data samples
$\mathbf{U} = {\mu^{(j)} }$: cluster centers
The goal is to minimize the following objective function:
$$
\begin{align}
E\left(\mathbf{X} , \mathbf{U} \right)
&=
\sum_i \sum_j w_{ij} d^2\left(x^{(i)}, \mu^{(j)}\right)
\end{align}
$$
Subject to the constraint that each sample is assigned to one cluster:
$$
\begin{align}
\sum_j w_{ij} &= 1
\\
w_{ij} &\in \{0, 1\}
\end{align}
$$
Intuitively, we want to find a set of centroids (cluster centers) $\mathbf{U}$ to minimize the total distance of each sample $x^{(i)} \in \mathbf{X}$ with the nearest centroid $\mu^{(j)} \in \mathbf{U}$.
So that the clusters are as tight as possible.
For $L_2$ distance we have
$$
\begin{align}
E\left(\mathbf{X} , \mathbf{U} \right)
&=
\sum_i \sum_j w_{ij} \|x^{(i)} - \mu^{(j)}\|^2
\\
&=
\sum_j \sum_i w_{ij} \|x^{(i)} - \mu^{(j)} \|^2
\end{align}
$$
<a href="https://en.wikipedia.org/wiki/Voronoi_diagram">
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/54/Euclidean_Voronoi_diagram.svg/220px-Euclidean_Voronoi_diagram.svg.png" align=right>
</a>
For the <b>Voronoi</b> step, we try to minimize each inner term of the $\sum_i \sum_j$ order:
$$
\begin{align}
\sum_j w_{ij} \| x^{(i)} - \mu^{(j)} \|^2
\end{align}
$$
Note that the constraints on $w_{ij}$ forces us to choose only one $\mu^{(j)}$.
For the <b>centroid</b> step, we try to minimize the inner term of the $\sum_j \sum_i$ order:
$$
\begin{align}
\sum_i w_{ij} \| x^{(i)} - \mu^{(j)} \|^2
\end{align}
$$
Only samples assigned to cluster $j$ matter, i.e. $w_{ij} = 1$, so via calculus we know the optimal solution is to put $\mu^{(j)}$ at the center of all $x^{(i)}$ assigned to it.
$$
\begin{align}
\mu^{(j)} &= \frac{1}{\sum w_{ij}} \sum w_{ij} x^{(i)}
\end{align}
$$
K-means++ initialization
Random selection
* might not be the best way to initialize the cluster centers
* badly (unluckily) placed centroids can cause convergence issues
* multiple random initialization can help
More systematic approach
* choose initial cluster centers to be as far away from one another as possible
* sequential process, during which the next cluster center is chosen to be the data sample farthest away from all existing centers
This is just for initialization
* the rest, Voronoi and centroid steps, remain the same
Example differences for initialization
End of explanation
"""
# the inertia_ class variable stores the energy value
print('Distortion: %.2f' % km.inertia_)
distortions = []
for i in range(1, 11):
km = KMeans(n_clusters=i,
init='k-means++',
n_init=10,
max_iter=300,
random_state=0)
km.fit(X)
distortions.append(km.inertia_)
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.tight_layout()
#plt.savefig('./figures/elbow.png', dpi=300)
plt.show()
"""
Explanation: K-means++ initialization algorithm
$\mathbf{U}$: set of centroids, initially empty:
$\mathbf{U} \leftarrow \emptyset$
$\mathbf{U} \leftarrow x^{(i)} \in \mathbf{X}$ via random selection
While $\|\mathbf{U}\| < k$, the target number of clusters
* compute $d^2\left(x^{(i)}, \mathbf{U} \right)$, the squared distance from each $x^{(i)} \in \mathbf{X}$ to its nearest member in $\mathbf{U}$
* $\mu^{(j)} \leftarrow$ random selection with probability
$\frac{d^2\left(x, \mathbf{U} \right)}{\sum_i d^2\left(x^{(i)}, \mathbf{U} \right)}$
for $x \in \mathbf{X}$
* $\mathbf{U} \leftarrow \mathbf{U} \bigcup \mu^{(j)}$
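A sketch of this seeding procedure using NumPy's `Generator` API (function and variable names are illustrative):

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: first center uniform, then D^2-weighted sampling."""
    centers = [X[rng.integers(len(X))]]
    while len(centers) < k:
        # squared distance of each sample to its nearest existing center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                     # D^2 weighting
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

rng = np.random.default_rng(0)
# two tight blobs around (0, 0) and (5, 5)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
centers = kmeans_pp_init(X, 2, rng)
print(centers)  # with high probability, one center from each blob
```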
Challenges with k-means
The number of clusters k is a hyper-parameter
* not easy to pick
* especially in high dimensions when we cannot visualize easily like in 2D
The clusters:
* no overlap (hard instead of soft decision)
* not hierarchical
We can improve these via other variations of clustering algorithms.
Hard versus soft clustering
Hard clustering
* each sample is assigned to one cluster
Soft (fuzzy) clustering
* each sample has a probabilistic assignment to all clusters
Main difference: $w_{ij}$, the weight of assigning $x^{(i)}$ to $\mu^{(j)}$
* hard: $w_{ij} \in \{0, 1\}$, binary
* soft: $w_{ij} \in [0, 1]$, continuous
Objective
$\mathbf{X}$: samples
$\mathbf{U}$: cluster centers
$w_{ij}$: the weight of assigning $x^{(i)}$ to $\mu^{(j)}$
$$
\begin{align}
w_{ij} =
\left[ \sum_{p=1}^{k} \left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(p)} \|_2} \right)^{\frac{2}{m-1}} \right]^{-1}
\end{align}
$$
For example, if $k = 3$:
$$
\begin{align}
w_{ij} =
\left[
\left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(1)} \|_2} \right)^{\frac{2}{m-1}}
+
\left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(2)} \|_2} \right)^{\frac{2}{m-1}}
+
\left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(3)} \|_2} \right)^{\frac{2}{m-1}}
\right]^{-1}
\end{align}
$$
$m$: degree of fuzziness
* $m \in [1, \infty)$
* $m = 1$: hard clustering
* $m > 1$: soft clustering, larger $m$ indicates fuzzier membership
$$
\begin{align}
E\left(\mathbf{X} , \mathbf{U} \right)
&=
\sum_i \sum_j w^m_{ij} \|x^{(i)} - \mu^{(j)}\|_2^2
\end{align}
$$
Constraints
$$
\begin{align}
\sum_j w_{ij} &= 1
\end{align}
$$
Hard clustering: each sample is assigned to exactly one cluster
$$
\begin{align}
w_{ij} \in \{0, 1\}
\end{align}
$$
Soft clustering - no additional constraint aside from $[0, 1]$ and the sum to 1 above (probability)
Example
For 3 clusters
Hard clustering:
$
w =
\begin{pmatrix}
0 \\
1 \\
0
\end{pmatrix}
$
Read: sample belongs to cluster 2
Soft clustering:
$
w =
\begin{pmatrix}
0.10 \\
0.85 \\
0.05
\end{pmatrix}
$
Read: the sample most likely belongs to cluster 2 (probability $0.85$), but also has some chance of belonging to the other clusters.
Fuzzy clustering algorithm
Initialization: pick (e.g. randomly) $k$ centroids $\mathbf{U} = \{\mu^{(j)}\}, j \in \{1, \cdots, k \}$ from $\mathbf{X}$ as initial cluster centers
Voronoi:
$$
\begin{align}
w_{ij} =
\left[ \sum_{p=1}^{k} \left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(p)} \|_2} \right)^{\frac{2}{m-1}} \right]^{-1}
\end{align}
$$
Centroid:
$$
\begin{align}
\mu^{(j)} =
\frac{\sum_i w^m_{ij} x^{(i)}}{ \sum_i w^m_{ij}}
\end{align}
$$
Repeat the Voronoi and centroid steps until the cluster assignments do not change much or a maximum number of iterations is reached
Note that when $m=1$, the above reduces to hard clustering.
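The two fuzzy steps can be sketched in NumPy (a toy illustration, not scikit-learn's API; the small epsilon guards against division by zero when a sample coincides with a centroid):

```python
import numpy as np

def fuzzy_weights(X, centroids, m=2.0):
    """Soft memberships w_ij from the Voronoi-step formula above (m > 1)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at a centroid
    # ratio[i, j, p] = d_ij / d_ip; sum over p gives the bracketed term
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fuzzy_centroids(X, W, m=2.0):
    """Weighted means: mu_j = sum_i w_ij^m x_i / sum_i w_ij^m."""
    Wm = W ** m
    return (Wm.T @ X) / Wm.sum(axis=0)[:, None]

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
U = np.array([[0.0, 0.0], [5.0, 0.0]])
W = fuzzy_weights(X, U)
print(W.sum(axis=1))  # each row sums to 1 by construction
```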
What is the effect of $m$ on the Voronoi step?
* $m = 1$
* $m = \infty$
* $m \in (1, \infty)$
For example, if $k = 3$:
$$
\begin{align}
w_{ij} =
\left[
\left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(1)} \|_2} \right)^{\frac{2}{m-1}}
+
\left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(2)} \|_2} \right)^{\frac{2}{m-1}}
+
\left( \frac{\| x^{(i)} - \mu^{(j)}\|_2}{\| x^{(i)} - \mu^{(3)} \|_2} \right)^{\frac{2}{m-1}}
\right]^{-1}
\end{align}
$$
Let's say
$$
\begin{align}
\| x^{(i)} - \mu^{(1)} \|_2 &= 1
\\
\| x^{(i)} - \mu^{(2)} \|_2 &= 2
\\
\| x^{(i)} - \mu^{(3)} \|_2 &= 3
\end{align}
$$
Then
$$
\begin{align}
w_{i1} &=
\left[
\left( \frac{1}{1} \right)^{\frac{2}{m-1}}
+
\left( \frac{1}{2} \right)^{\frac{2}{m-1}}
+
\left( \frac{1}{3} \right)^{\frac{2}{m-1}}
\right]^{-1}
\\
w_{i2} &=
\left[
\left( \frac{2}{1} \right)^{\frac{2}{m-1}}
+
\left( \frac{2}{2} \right)^{\frac{2}{m-1}}
+
\left( \frac{2}{3} \right)^{\frac{2}{m-1}}
\right]^{-1}
\\
w_{i3} &=
\left[
\left( \frac{3}{1} \right)^{\frac{2}{m-1}}
+
\left( \frac{3}{2} \right)^{\frac{2}{m-1}}
+
\left( \frac{3}{3} \right)^{\frac{2}{m-1}}
\right]^{-1}
\end{align}
$$
When $m = 1$, hard clustering:
$$
\begin{align}
w_{i1} &= 1
\\
w_{i2} &= 0
\\
w_{i3} &= 0
\end{align}
$$
When $m = \infty$, equal fuzzy clustering:
$$
\begin{align}
w_{i1} &= \frac{1}{3}
\\
w_{i2} &= \frac{1}{3}
\\
w_{i3} &= \frac{1}{3}
\end{align}
$$
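We can check these limiting behaviors numerically for the distances 1, 2, 3 above (the helper name is illustrative):

```python
import numpy as np

def memberships(d, m):
    """w_j = [ sum_p (d_j / d_p)^(2/(m-1)) ]^(-1) for a single sample."""
    d = np.asarray(d, dtype=float)
    # element [j, p] is d_j / d_p; summing over p gives the bracketed term
    return 1.0 / np.sum((d[:, None] / d[None, :]) ** (2.0 / (m - 1.0)), axis=1)

d = [1.0, 2.0, 3.0]
print(memberships(d, m=1.001))   # approaches hard clustering: first weight near 1
print(memberships(d, m=2.0))     # in between: weights proportional to 1/d^2
print(memberships(d, m=1000.0))  # approaches uniform: all weights near 1/3
```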
Using the elbow method to find the optimal number of clusters
The number of clusters is a hyper-parameter
How to select it?
How to evaluate our model in general?
* This is unsupervised learning, so no ground truth data to compare against
Solution
Run multiple experiments with different numbers of clusters
Measure the energy function and plot it against the number of clusters
$$
\begin{align}
E\left(\mathbf{X} , \mathbf{U} \right)
&=
\sum_i \sum_j w^m_{ij} d^2\left(x^{(i)}, \mu^{(j)}\right)
\end{align}
$$
Example
End of explanation
"""
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
# clustering
km = KMeans(n_clusters=3,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
# the main part is just one functional call
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
# visualization
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(float(i) / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2.)
# y_ax_lower += len(c_silhouette_vals)
y_ax_lower = y_ax_upper # Li-yi: clearer meaning
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
# plt.savefig('./figures/silhouette.png', dpi=300)
plt.show()
"""
Explanation: Intuitively, we want to pick the "elbow" part, which achieves best bang for the buck.
* diminishing returns after that elbow/knee point
Quantifying the quality of clustering via silhouette plots
Silhouette analysis
* another way to evaluate the quality of clustering
Basic idea
* coherence of each cluster
* separation from other clusters
* high coherence, high separation means good clustering
Silhouette for clustering
https://en.wikipedia.org/wiki/Silhouette_(clustering)
For each data sample $x^{(i)} \in \mathbf{X}$, we can compute its average distance to a cluster $\mathbf{C} \in \{\mathbf{C}_1, \cdots, \mathbf{C}_k\}$:
$$
\begin{align}
\overline{d}\left(x^{(i)}, \mathbf{C} \right)
&=
\frac{1}{\| \mathbf{C} \|} \sum_{x^{(j)} \in \mathbf{C}} d\left(x^{(i)}, x^{(j)}\right)
\end{align}
$$
where $d$ is the distance measure (e.g. $L_2$) used for clustering.
We usually skip $x^{(i)}$ comparing against itself, so if $x^{(i)} \in \mathbf{C}$, we have
$$
\begin{align}
\overline{d}\left(x^{(i)}, \mathbf{C} \right)
&=
\frac{1}{\| \mathbf{C} \| - 1} \sum_{x^{(j)} \in \mathbf{C}, j \neq i} d\left(x^{(i)}, x^{(j)}\right)
\end{align}
$$
Coherence
$a(i)$: the average distance (dis-similarity) to all other samples within the same cluster:
$$
\begin{align}
a(i) &= \overline{d}\left(x^{(i)}, \mathbf{C} \right)
\\
x^{(i)} &\in \mathbf{C}
\end{align}
$$
Separation
$b(i)$: the average distance to the nearest cluster that $x^{(i)}$ does not belong to:
$$
\begin{align}
b(i) &= \min \overline{d}\left(x^{(i)}, \mathbf{C} \right)
\\
\mathbf{C} &\in \{\mathbf{C}_1, \cdots, \mathbf{C}_k \}
\\
x^{(i)} &\not\in \mathbf{C}
\end{align}
$$
Silhouette
$s(i)$: the silhouette value of $x^{(i)}$:
$$
\begin{align}
s(i) = \frac{b(i) - a(i)}{\max\left(b(i), a(i)\right)}
\end{align}
$$
Which can be spelled out as:
$$
\begin{align}
s(i) =
\begin{cases}
1 - \frac{a(i)}{b(i)}, & a(i) < b(i)
\\
0, & a(i) = b(i)
\\
\frac{b(i)}{a(i)} - 1, & a(i) > b(i)
\end{cases}
\end{align}
$$
So
$
-1 \leq s(i) \leq 1
$
* $s(i)$ is close to 1 if we have $a(i) \ll b(i)$ $\rightarrow$ good clustering
* $s(i)$ is close to $-1$ $\rightarrow$ bad clustering
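The definitions of $a(i)$, $b(i)$ and $s(i)$ can be checked against scikit-learn on a toy dataset; a sketch (the helper name is illustrative):

```python
import numpy as np
from sklearn.metrics import silhouette_samples

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])

def silhouette_by_hand(X, labels, i):
    same = (labels == labels[i])
    same[i] = False  # a(i) excludes x_i comparing against itself
    a = np.mean(np.linalg.norm(X[same] - X[i], axis=1))
    # b(i): average distance to the nearest cluster x_i does not belong to
    b = min(np.mean(np.linalg.norm(X[labels == c] - X[i], axis=1))
            for c in np.unique(labels) if c != labels[i])
    return (b - a) / max(a, b)

manual = [silhouette_by_hand(X, labels, i) for i in range(len(X))]
print(np.allclose(manual, silhouette_samples(X, labels)))  # True
```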
Silhouette profile
Plot silhouette values $s(i)$ for all samples $x^{(i)} \in \mathbf{X}$ to visualize the clustering quality.
Example
End of explanation
"""
km = KMeans(n_clusters=2,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
plot_decision_regions(X, y_km, km)
plot_clusters(X, y_km, km.cluster_centers_)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(float(i) / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2.)
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
# plt.savefig('./figures/silhouette_bad.png', dpi=300)
plt.show()
"""
Explanation: We want:
* the average silhouette value to be high
* each cluster to have good silhouette distribution
Comparison to "bad" clustering:
End of explanation
"""
import pandas as pd
import numpy as np
np.random.seed(123)
variables = ['X', 'Y', 'Z']
labels = ['ID_0', 'ID_1', 'ID_2', 'ID_3', 'ID_4']
X = np.random.random_sample([5, 3])*10
df = pd.DataFrame(X, columns=variables, index=labels)
df
"""
Explanation: Notice
* the low average silhouette value
* one cluster has bad silhouette distribution compared to another
Organizing clusters as a hierarchical tree
Instead of computing $k$ clusters at once, we can compute them gradually in a hierarchy.
We can decide $k$ later after having the complete hierarchy.
<img src="./images/hierarchical_clustering.svg" width=50% align=right>
Construction
Two ways to do this:
Bottom-to-top (agglomerative)
Start with each sample as one cluster, and merge the similar ones until we have one cluster (that contains all samples)
red $\bigcup$ green $\rightarrow$ yellow
yellow $\bigcup$ blue $\rightarrow$ cyan
Top-to-bottom (divisive)
Start with a single cluster (that contains all samples), divide the most diverse one until each sample is in a separate cluster.
cyan $\rightarrow$ yellow $\bigcup$ blue
yellow $\rightarrow$ red $\bigcup$ green
Criteria
How to decide the similarity between two clusters?
Single linkage
Measure the shortest distance between pairs of samples from the two clusters.
Complete linkage
Measure the longest distance between pairs of samples from the two clusters.
<img src='./images/11_05.png' width=80%>
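For two small clusters, both criteria are just the minimum and maximum of the between-cluster pairwise distance matrix; a sketch with SciPy:

```python
import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[0.0, 0.0], [1.0, 0.0]])  # cluster A
B = np.array([[4.0, 0.0], [6.0, 0.0]])  # cluster B
D = cdist(A, B)  # all pairwise distances between members of A and B

print(D.min())   # single linkage: 3.0, the closest pair (1,0)-(4,0)
print(D.max())   # complete linkage: 6.0, the farthest pair (0,0)-(6,0)
```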
Other possibilities
Ward's linkage: MSE of clusters
$$
\begin{align}
E\left(\mathbf{X} , \mathbf{U} \right)
&=
\sum_i \sum_j w_{ij} \, d^2\left(x^{(i)}, \mu^{(j)}\right)
\end{align}
$$
Agglomerative clustering
Example
End of explanation
"""
from scipy.spatial.distance import pdist, squareform
row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')),
columns=labels,
index=labels)
row_dist
"""
Explanation: Performing hierarchical clustering on a distance matrix
End of explanation
"""
from scipy.cluster.hierarchy import linkage
help(linkage)
# 1. incorrect approach: Squareform distance matrix
from scipy.cluster.hierarchy import linkage
row_clusters = linkage(row_dist, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2',
'distance', 'no. of items in clust.'],
index=['cluster %d' % (i + 1)
for i in range(row_clusters.shape[0])])
# 2. correct approach: Condensed distance matrix
row_clusters = linkage(pdist(df, metric='euclidean'), method='complete')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2',
'distance', 'no. of items in clust.'],
index=['cluster %d' % (i + 1)
for i in range(row_clusters.shape[0])])
# 3. correct approach: Input sample matrix
row_clusters = linkage(df.values, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2',
'distance', 'no. of items in clust.'],
index=['cluster %d' % (i + 1)
for i in range(row_clusters.shape[0])])
"""
Explanation: Linkage usage
Either one of these will do:
* pass a condensed distance matrix (upper triangular) from the pdist function
* pass the "original" data array and define the metric='euclidean' argument in linkage
However, we should not pass the squareform distance matrix, which would yield different distance values although the overall clustering could be the same.
End of explanation
"""
from scipy.cluster.hierarchy import dendrogram
# make dendrogram black (part 1/2)
# from scipy.cluster.hierarchy import set_link_color_palette
# set_link_color_palette(['black'])
row_dendr = dendrogram(row_clusters,
labels=labels,
# make dendrogram black (part 2/2)
# color_threshold=np.inf
)
plt.tight_layout()
plt.ylabel('Euclidean distance')
#plt.savefig('./figures/dendrogram.png', dpi=300,
# bbox_inches='tight')
plt.show()
"""
Explanation: The ids in each row correspond to a leaf (data sample) or an internal node (cluster).
In this example, id 0 to 4 correspond to the original 5 samples.
Clusters are formed bottom-up, with ids starting from 5 and increasing upward.
The dendrogam can also help visualization.
End of explanation
"""
# plot row dendrogram
fig = plt.figure(figsize=(8, 8), facecolor='white')
axd = fig.add_axes([0.09, 0.1, 0.2, 0.6])
# note: for matplotlib < v1.5.1, please use orientation='right'
row_dendr = dendrogram(row_clusters, orientation='left')
# reorder data with respect to clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]  # .ix is removed in modern pandas
axd.set_xticks([])
axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# plot heatmap
axm = fig.add_axes([0.23, 0.1, 0.6, 0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
# plt.savefig('./figures/heatmap.png', dpi=300)
plt.show()
"""
Explanation: Attaching dendrograms to a heat map
End of explanation
"""
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=2, # number of final clusters
affinity='euclidean',
linkage='complete')
labels = ac.fit_predict(X)
print('Cluster labels: %s' % labels)
print(ac.children_)
"""
Explanation: Applying agglomerative clustering via scikit-learn
As usual, coding in scikit-learn is pretty simple.
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('5E097ZLE9Sg')
"""
Explanation: The clustering result is consistent with the dendrogram above.
Locating regions of high density via DBSCAN
Density-based Spatial Clustering of Applications with Noise
A form of clustering different from k-means and hierarchical clustering
* more resilient to noise
* allows general cluster shapes
<img src='./images/11_11.png' width=70%>
https://youtu.be/5E097ZLE9Sg?t=1m35s
End of explanation
"""
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
plt.scatter(X[:, 0], X[:, 1])
plt.tight_layout()
# plt.savefig('./figures/moons.png', dpi=300)
plt.show()
"""
Explanation: Parameters
$\epsilon$: neighborhood within radius $\epsilon$ of each sample point
MinPts: minimum number of points within neighborhood to be considered as "dense" enough
These two define sample density.
Definitions
Core point
at least MinPts of samples within radius $\epsilon$
Border point
fewer than MinPts within radius $\epsilon$
but within $\epsilon$ of a core point
Noise point
everything else
Algorithm
Form a separate cluster for each core point or a connected group of core points.
* Two core points are connected if they are within $\epsilon$ from each other.
Assign each border point to the cluster of its corresponding core point.
Ignore noise points.
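With scikit-learn's DBSCAN, the three point types can be recovered from core_sample_indices_ and the -1 noise label; a sketch on synthetic data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# one dense blob plus a single far-away outlier
X = np.vstack([np.random.default_rng(0).normal(0, 0.1, (30, 2)),
               [[10.0, 10.0]]])
db = DBSCAN(eps=0.5, min_samples=5).fit(X)

core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True   # core points
noise_mask = db.labels_ == -1               # DBSCAN marks noise with label -1
border_mask = ~core_mask & ~noise_mask      # everything else is a border point
print(noise_mask.sum())                     # the outlier is noise: 1
```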
Example
Comparing k-means, hierarchical, and density-based clustering.
End of explanation
"""
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
km = KMeans(n_clusters=2, random_state=0)
y_km = km.fit_predict(X)
ax1.scatter(X[y_km == 0, 0], X[y_km == 0, 1],
c='lightblue', marker='o', s=40, label='cluster 1')
ax1.scatter(X[y_km == 1, 0], X[y_km == 1, 1],
c='red', marker='s', s=40, label='cluster 2')
ax1.set_title('K-means clustering')
ac = AgglomerativeClustering(n_clusters=2,
affinity='euclidean',
linkage='complete')
y_ac = ac.fit_predict(X)
ax2.scatter(X[y_ac == 0, 0], X[y_ac == 0, 1], c='lightblue',
marker='o', s=40, label='cluster 1')
ax2.scatter(X[y_ac == 1, 0], X[y_ac == 1, 1], c='red',
marker='s', s=40, label='cluster 2')
ax2.set_title('Agglomerative clustering')
plt.legend()
plt.tight_layout()
#plt.savefig('./figures/kmeans_and_ac.png', dpi=300)
plt.show()
"""
Explanation: K-means and hierarchical clustering:
End of explanation
"""
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2, min_samples=5, metric='euclidean')
y_db = db.fit_predict(X)
plt.scatter(X[y_db == 0, 0], X[y_db == 0, 1],
c='lightblue', marker='o', s=40,
label='cluster 1')
plt.scatter(X[y_db == 1, 0], X[y_db == 1, 1],
c='red', marker='s', s=40,
label='cluster 2')
plt.legend()
plt.tight_layout()
#plt.savefig('./figures/moons_dbscan.png', dpi=300)
plt.show()
"""
Explanation: Intuitively, the two moons should form two separate clusters.
But the straight-line (Euclidean) distances used by k-means and hierarchical clustering may group the wrong parts together.
Density-based clustering considers the topology of the clusters:
End of explanation
"""
from sklearn.cluster import DBSCAN
eps_values = [0.1, 0.2, 0.5]
for eps_value in eps_values:
db = DBSCAN(eps=eps_value, min_samples=5, metric='euclidean')
y_db = db.fit_predict(X)
plt.scatter(X[y_db == 0, 0], X[y_db == 0, 1],
c='lightblue', marker='o', s=40,
label='cluster 1')
plt.scatter(X[y_db == 1, 0], X[y_db == 1, 1],
c='red', marker='s', s=40,
label='cluster 2')
plt.legend()
plt.tight_layout()
plt.title('eps = ' + str(eps_value))
plt.show()
"""
Explanation: Disadvantages of density-based clustering
Two hyper-parameters ($\epsilon$ and MinPts) to tune
* versus one $k$ for k-means and hierarchical clustering
* aside from the distance measure as another hyper-parameter
Does not have a predict() method
* non-parametric, applies to the current dataset only
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
used min-max normalization
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
max_value = 255
min_value = 0
return (x - min_value) / (max_value - min_value)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn import preprocessing
lb=preprocessing.LabelBinarizer()
lb.fit(range(10))
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return lb.transform(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
shape = [x for x in image_shape]
shape.insert(0, None)
return tf.placeholder(tf.float32, shape=shape, name="x")
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
x_tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor_shape[-1], conv_num_outputs], stddev=0.05))
bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.05))
conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias=bias)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
x_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05))
bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05))
return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    x_shape = x_tensor.get_shape().as_list()
    weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05))
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05))
    return tf.add(tf.matmul(x_tensor, weights), bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds the dropout keep probability.
    : return: Tensor that represents logits
    """
    conv_output_depth = {
        'layer1': 32,
        'layer2': 64,
        'layer3': 128
    }
    conv_ksize = (3, 3)
    conv_strides = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    # Apply 3 Convolution and Max Pool layers.
    # Function Definition from Above:
    #   conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_layer1 = conv2d_maxpool(x, conv_output_depth['layer1'], conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_layer2 = conv2d_maxpool(conv_layer1, conv_output_depth['layer2'], conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_layer3 = conv2d_maxpool(conv_layer2, conv_output_depth['layer3'], conv_ksize, conv_strides, pool_ksize, pool_strides)
    # Apply a Flatten Layer.
    flattened_layer = flatten(conv_layer3)
    # Apply 3 Fully Connected Layers, each followed by dropout.
    fc_layer1 = fully_conn(flattened_layer, num_outputs=512)
    fc_layer1 = tf.nn.dropout(fc_layer1, keep_prob=keep_prob)
    fc_layer2 = fully_conn(fc_layer1, num_outputs=256)
    fc_layer2 = tf.nn.dropout(fc_layer2, keep_prob=keep_prob)
    fc_layer3 = fully_conn(fc_layer2, num_outputs=128)
    fc_layer3 = tf.nn.dropout(fc_layer3, keep_prob=keep_prob)
    # Apply an Output Layer with one logit per CIFAR-10 class.
    logits = output(fc_layer3, 10)
    return logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Training Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_accuracy))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common power-of-two size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()
    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)
        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1
        print('Testing Accuracy: {}\n'.format(test_batch_acc_total / test_batch_count))
        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
# This is where the modules are imported
import nltk
import sys
import codecs
from os import listdir
from os.path import splitext
from os.path import basename
# These functions extract the filename
def remove_ext(filename):
    "Removes the file extension, such as .txt"
    name, extension = splitext(filename)
    return name

def remove_dir(filepath):
    "Removes the path from the file name"
    name = basename(filepath)
    return name

def get_filename(filepath):
    "Removes the path and file extension from the file name"
    filename = remove_ext(filepath)
    name = remove_dir(filename)
    return name

# This function works on the contents of the file
def read_file(filename):
    "Read the contents of FILENAME and return as a string."
    infile = codecs.open(filename, 'r', 'utf-8')
    contents = infile.read()
    infile.close()
    return contents
"""
Explanation: Concordance Output
A concordance is a method of text analysis that is somewhat similar to the generation of word frequency statistics, only the search is expanded to the words that appear on either side of the word under investigation. We call the main search word the 'node' and the words surrounding it the 'span'. A concordance is simply a printed list displaying the sentences or 'context' that the node word appears in. This list is traditionally organized in a 'Key Word in Context' (KWIC) format, which has the node word in the centre of the page. The span can be adjusted, but generally includes about five words on the left and five words on the right of the node.
The purpose of generating a concordance output is to allow for manual, but controlled, examination of the word in question. As we will see in this exercise, it becomes very easy to recognize patterns of language use when the text is organized in this way. Further investigation can be conducted by sorting the list of text alphabetically, either on the word just to the left or right of the node word.
Generating a concordance output in Python is fairly simple thanks to the NLTK module. In this exercise we will generate a concordance output for one of our files.
Once again we will import our modules and definitions first. Here we see some new modules: NLTK, codecs, and sys.
NLTK stands for <a href="http://www.nltk.org/book/ch00.html" target=blank><i>Natural Language Toolkit</i></a>, which facilitates natural language processing in <i>Python</i>. NLTK has many functions that support electronic text analysis, including tokenizing, word frequency counters, and for the purposes of this demonstration, concordancers.
codecs is a module that helps <i>Python</i> read and write text in <span style="cursor:help;" title="the industry standard for encoding special characters, like: æ, þ, ß"><b>Unicode</b></span>, which is a text encoding standard that includes non-alphanumeric characters. We will not be removing the capitalization or punctuation in this exercise, so we're using codecs to avoid any errors in reading and printing the file.
sys is a built-in <i>Python</i> module that allows for the manipulation of the <i>Python</i> <span style="cursor:help;" title="the infrastructure required to run programs"><b>runtime environment</b></span>. Here we will use it to write the output of a program to a text file.
End of explanation
"""
#this is the path to the file we want to read
file = '../Counting Word Frequencies/data/2013.txt'
#this calls on a definition from above: it stores the filename as a variable to use later
name = get_filename(file)
#reads the file
text = read_file(file)
#splits the text into a list of individual words
words = text.split()
#assigns NLTK functionality to the text
text = nltk.Text(words)
"""
Explanation: For this demonstration we will focus only on one file, the 2013 section of the corpus. As evidenced in the last exercise, <i>Adding Context to Word Frequency Counts</i>, there was a significant increase in the usage of the word privacy between 2012 and 2013, which amounted to an increase of about 40%. Here we will take a closer look at 2013 in an attempt to identify any patterns of word use.
This is a case where cleaning the text may also destroy some of the context. While it is nice to have the numbers line up (in terms of word frequencies vs. number of concordance lines), removing the punctuation and capitalization makes the text harder to read and understand.
End of explanation
"""
text.concordance('privacy', lines=25)
"""
Explanation: Here we will call the function, listing 25 lines from the text.
Any other single word could be substituted here for privacy. It can only be one word though, as the last piece of code split the text into single words, so a phrase will break the code. More or fewer lines can be shown by changing the number beside lines=.
End of explanation
"""
#creates a new file that can be written by the print queue
fileconcord = codecs.open(name + '_collocates.txt', 'w', 'utf-8')
#makes a copy of the empty print queue, so that we can return to it at the end of the function
tmpout = sys.stdout
#stores the text in the print queue
sys.stdout = fileconcord
#generates and prints the concordance, the number pertains to the total number of bytes per line
text.concordance("privacy", 79, sys.maxsize)  # the number is the total characters per line
#closes the file
fileconcord.close()
#returns the print queue to an empty state
sys.stdout = tmpout
"""
Explanation: The NLTK module is limited in the amount of processing it can conduct on concordances. It is more useful to output the entire concordance to a text file, which can then be sorted and manipulated in many ways. The following code prints the entire concordance to file. The '79' on line 8 refers to the total number of characters contained in each span, including all letters, punctuation and spaces.
End of explanation
"""
! #complete
! #complete
"""
Explanation: Code Repositories
The notebook contains problems oriented around building a basic Python code repository and making it public via Github. Of course there are other places to put code repositories, with complexity ranging from services comparable to github to simple hosting a git server on your local machine. But this focuses on git and github as a ready-to-use example with plenty of additional resources to be found online.
Note that these problems assume you are using the Anaconda Python distribution. This is particularly useful for these problems because it makes it very easy to install testing packages in virtual environments quickly and with little wasted disk space. If you are not using anaconda, you can either use an alternative virtual environment scheme (e.g. pyenv or virtualenv), or just install packages directly into your default python (and hope for the best...).
For git interaction, this notebook also uses the git command line tools directly. There are a variety of GUI tools that make working with git more visually intuitive (e.g. SourceTree, gitkraken, or the github desktop client), but this notebook uses the command line tools as the lowest common denominator. You are welcome to try to reproduce the steps with your client, however - feel free to ask your neighbors or instructors if you run into trouble there.
As a final note, this notebook's examples assume you are using a system with a unix-like shell (e.g. macOS, Linux, or Windows with git-bash or the Linux subsystem shell).
Original by E Tollerud 2017 for LSSTC DSFP Session3 and AstroHackWeek, modified by B Sipocz
Problem 0: Using Jupyter as a shell
As an initial step before diving into code repositories, it's important to understand how you can use Jupyter as a shell. Most of the steps in this notebook require interaction with the system that's easier done with a shell or editor rather than using Python code in a notebook. While this could be done by opening up a terminal beside this notebook, to keep most of your work in the notebook itself, you can use the capabilities Jupyter + IPython offer for shell interaction.
0a: Figure out your base shell path and what's in it
The critical trick here is the ! magic in IPython. Anything after a leading ! in IPython gets run by the shell instead of as python code. Run the shell command pwd and ls to see where IPython thinks you are on your system, and the contents of the directory.
hint: Be sure to remove the "#complete"s below when you've done so. IPython will interpret that as part of the shell command if you don't
End of explanation
"""
%%sh
#complete
"""
Explanation: 0b: Try a multi-line shell command
IPython magics often support "cell" magics by having %%<command> at the top of a cell. Use that to cd into the directory below this one ("..") and then ls inside that directory.
Hint: if you need syntax tips, run the magic() function and look for the ! or !! commands
End of explanation
"""
! #complete
"""
Explanation: 0c: Create a new directory from Jupyter
While you can do this almost as easily with os.mkdir in Python, for this case try to do it using shell magics instead. Make a new directory in the directory you are currently in. Use your system file browser to ensure you were successful.
End of explanation
"""
%cd -0 #complete
"""
Explanation: 0d: Change directory to your new directory
One thing about shell commands is that they always start wherever you started your IPython instance. So doing cd as a shell command only changes things temporarily (i.e. within that shell command). IPython provides a %cd magic that makes this change last, though. Use this to %cd into the directory you just created, and then use the pwd shell command to ensure this cd "stuck". (You can also try doing cd as a shell command to prove to yourself that it's different from the %cd magic.)
End of explanation
"""
!mkdir #complete only if you didn't do 0c, or want a different name for your code directory
%%file <yourdirectory>/code.py
def do_something():
    # complete
    print(something)  # this will make it much easier in future problems to see that something is actually happening
"""
Explanation: Final note: %cd -0 is a convenient shorthand to switch back to the initial directory.
Problem 1: Creating a bare-bones repo and getting it on Github
Here we'll create a simple (public) code repository with a minimal set of content, and publish it in github.
1a: Create a basic repository locally
Start by creating the simplest possible code repository, composed of a single code file. Create a directory (or use the one from 0c), and place a code.py file in it, with a bit of Python code of your choosing. (Bonus points for witty or sarcastic code...) You could even use non-Python code if you desired, although Problems 3 & 4 feature Python-specific bits so I wouldn't recommend it.
To make the file from the notebook, the %%file <filename> magic is a convenient way to write the contents of a notebook cell to a file.
End of explanation
"""
%run <yourdirectory>/code.py # complete
do_something()
"""
Explanation: If you want to test-run your code:
End of explanation
"""
%cd # complete
!git init
!git add code.py
!git commit -m #complete
"""
Explanation: 1b: Convert the directory into a git repo
Make that code into a git repository by doing git init in the directory you created, then git add and git commit.
End of explanation
"""
!git remote add <yourgithubusername> <the url github shows you on the repo web page> #complete
!git push <yourgithubusername> master -u
"""
Explanation: 1c: Create a repository for your code in Github
Go to github's web site in your web browser. If you do not have a github account, you'll need to create one (follow the prompts on the github site).
Once you've got an account, you'll need to make sure your git client can authenticate with github. If you're using a GUI, you'll have to figure it out (usually it's pretty easy). On the command line you have two options:
* The simplest way is to connect to github using HTTPS. This requires no initial setup, but git will prompt you for your github username and password every so often.
* If you find that annoying, you can set up your system to use SSH to talk to github. Look for the "SSH and GPG keys" section of your settings on github's site, or if you're not sure how to work with SSH keys, check out github's help on the subject.
Once you've got github set up to talk to your computer, you'll need to create a new repository for the code you created. Hit the "+" in the upper-right, create a "new repository" and fill out the appropriate details (don't create a README just yet).
To be consistent, we recommend using the same name for your repository as the local directory name you used. But that is not a requirement, just a recommendation.
Once you've created the repository, connect your local repository to github and push your changes up to github.
End of explanation
"""
%%file README.md
# complete
"""
Explanation: The -u is a convenience that means from then on you can use just git push and git pull to send your code to and from github.
1e: Modify the code and send it back up to github
Proper documentation is important. But for now make sure to add a README to your code repository. Always add a README with basic documentation. Always. Even if only you are going to use this code, trust me, your future self will be very happy you did it.
You can just call it README, but to get it rendered nicely on the github repository, you can call it README.md and write it using markdown syntax, README.rst in ReST, or various other similar markup languages github understands. If you don't know/care, just use README.md, as that's pretty standard at this point.
End of explanation
"""
!git #complete
"""
Explanation: Now add it to the repository via git commit, and push up to github...
End of explanation
"""
!git #complete
"""
Explanation: 1f: Choose a License
I bet you didn't expect to be reading legalese today... but it turns out this is important. If you do not explicitly license your code, in most countries (including the US and EU) it is technically illegal for anyone to use your code for any purpose other than just looking at it.
(Un?)Fortunately, there are a lot of possible open source licenses out there. Assuming you want an open license, the best resource is the "Choose a License" website. Have a look over the options there and decide which you think is appropriate for your code.
Once you've chosen a License, grab a copy of the license text, and place it in your repository as a file called LICENSE (or LICENSE.md or the like). Some licenses might also suggest you place the license text or just a copyright notice in the source code as well, but that's up to you.
Once you've done that, do as we've done before: push all your additions up to github. If you've done it right, github will automatically figure out your license and show it in the upper-right corner of your repo's github page.
End of explanation
"""
# Don't forget to do this cd or something like it... otherwise you'll clone *inside* your repo
%cd -0
!git clone <url from github>#complete
%cd <reponame>#complete
"""
Explanation: Problem 2: Collaborating with others' repos
One very important advantage of working in repositories is that sharing the code becomes much easier: others (and your future self) can have a look at it, use it, and contribute to it. So now we'll have you try to modify your neighbor's project using github's Pull Request feature.
2a: Get (git?) your neighbor's code repo
Find someone sitting near you who has gotten through Problem 1. Ask them their github user name and the name of their repository.
Once you've got the name of their repo, navigate to it on github. The URL pattern is always "https://www.github.com/username/reponame". Use the github interface to "fork" that repo, yielding a "yourusername/reponame" repository. Go to that one, take note of the URL needed to clone it (you'll need to grab it from the repo web page, either in "HTTPS" or "SSH" form, depending on your choice in 1a). Then clone that onto your local machine.
End of explanation
"""
!git branch <name-of-branch>#complete
"""
Explanation: 2b: Create a branch for your change
You're going to make some changes to their code, but who knows... maybe they'll spend so long reviewing it that you want to do another. So it's always best to make changes in a specific "branch" for that change. So to do this we need to make a github branch.
A super useful site to learn more about branching and to practice scenarios is linked below; feel free to check it out now, and ask if you have questions:
https://learngitbranching.js.org/
End of explanation
"""
!git add <files modified>#complete
!git commit -m ""#complete
"""
Explanation: 2c: Modify the code
Make some change to their code repo. Usually this would be a new feature or a bug fix or documentation clarification or the like... But it's up to you.
Once you've done that, be sure to commit the change locally.
End of explanation
"""
!git push origin <name-of-branch>#complete
"""
Explanation: and push it up (to a branch on your github fork).
End of explanation
"""
!git #complete
"""
Explanation: 2d: Issue a pull request
Now use the github interface to create a new "pull request". Once you've pushed your new branch up, you'll see a prompt to do this automatically appear on your fork's web page. But if you don't, use the "branches" drop-down to navigate to the new branch, and then hit the "pull request" button. That should show you an interface that you can use to leave a title and description (in github markdown), and then submit the PR. Go ahead and do this.
2e: Have them review the PR
Tell your neighbor that you've issued the PR. They should be able to go to their repo, and see that a new pull request has been created. There they'll review the PR, possibly leaving comments for you to change. If so, go to 2f, but if not, they should hit the "Merge" button, and you can jump to 2g.
2f: (If necessary) make changes and update the code
If they left you some comments that require changing prior to merging, you'll need to make those changes in your local copy, commit those changes, and then push them up to your branch on your fork.
End of explanation
"""
!git remote add <neighbors-username> <url-from-neighbors-github-repo> #complete
!git fetch <neighbors-username> #complete
!git branch --set-upstream-to=<neighbors-username>/master master
!git checkout master
!git pull
"""
Explanation: Hopefully they are now satisfied and are willing to hit the merge button.
2g: Get the updated version
Now you should get the up-to-date version from the original owner of the repo, because that way you'll have both your changes and any other changes they might have made in the meantime. To do this you'll need to connect your local copy to your neighbor's github repo (not your fork).
End of explanation
"""
!mkdir <yourpkgname>#complete
!git mv code.py <yourpkgname>#complete
#The "touch" unix command simply creates an empty file if there isn't one already.
#You could also use an editor to create an empty file if you prefer.
!touch <yourpkgname>/__init__.py#complete
"""
Explanation: Now if you look at the local repo, it should include your changes.
Suggestion: You may want to change the "origin" remote to your username. E.g. git remote rename origin <yourusername>. To go further, you might even delete your fork's master branch, so that only your neighbor's master exists. That might save you headaches in the long run if you were to ever access this repo again in the future.
2h: Have them reciprocate
Science (Data or otherwise) and open source code are social enterprises built on shared effort, mutual respect, and trust. So ask them to issue a PR against your code, too. The more we can stand on each others' shoulders, the farther we will all see.
Hint: Ask them nicely. Maybe offer a cookie or something?
Problem 3: Setting up a bare-bones Python Package
Up to this point we've been working on the simplest possible shared code: a single file with all the content. But for most substantial use cases this isn't going to cut it. After all, Python was designed around the idea of namespaces that let you hide away or show code to make writing, maintaining, and versioning code much easier. But to make use of these, we need to deploy the installation tools that Python provides. This is typically called "packaging". In this problem we will take the code you just made and build it into a proper python package that can be installed and then used anywhere.
For more background and detail (and the most up-to-date recommendations) see the Python Packaging Guide.
3a: Set up a Python package structure for your code
First we adjust the structure of your code from Problem 1 to allow it to live in a package structure rather than as a stand-alone .py file. All you need to do is create a directory, move the code.py file into that directory, and add a file (can be empty) called __init__.py into the directory.
You'll have to pick a name for the package, which is usually the same as the repo name (although that's not strictly required, a notable exception being e.g. scikit-learn vs. sklearn).
Hint: don't forget to switch back to your code repo directory, if you are doing this immediately after Problem 2.
End of explanation
"""
from <yourpkgname> import code#complete
#if your code.py has a function called `do_something` as in the example above, you can now run it like:
code.do_something()
"""
Explanation: 3b: Test your package
You should now be able to import your package and the code inside it as though it were some installed package like numpy, astropy, pandas, etc.
End of explanation
"""
%%file <yourpkgname>/__init__.py
#complete
"""
Explanation: 3c: Apply packaging tricks
One of the nice things about packages is that they let you hide the implementation of some part of your code in one place while exposing a "cleaner" namespace to the users of your package. To see a (trivial) example, of this, lets pull a function from your code.py into the base namespace of the package. In the below make the __init__.py have one line: from .code import do_something. That places the do_something() function into the package's root namespace.
End of explanation
"""
import <yourpkgname>#complete
<yourpkgname>.do_something()#complete
"""
Explanation: Now the following should work.
End of explanation
"""
from importlib import reload
reload(<yourpkgname>)#complete
<yourpkgname>.do_something()#complete
"""
Explanation: BUT you will probably get an error here. That's because Python is smart about imports: once it's imported a package it won't re-import it later. Usually that saves time, but here it's a hassle. Fortunately, we can use the reload function to get around this:
End of explanation
"""
%%file setup.py
#!/usr/bin/env python
from distutils.core import setup
setup(name='<yourpkgname>',
version='0.1dev',
description='<a description>',
author='<your name>',
author_email='<youremail>',
packages=['<yourpkgname>'],
) #complete
"""
Explanation: 3d: Create a setup.py file
Ok, that's great in a pinch, but what if you want your package to be available from other directories? If you open a new terminal somewhere else and try to import <yourpkgname> you'll see that it will fail, because Python doesn't know where to find your package. Fortunately, Python (both the language and the larger ecosystem) provides built-in tools to install packages. These are built around creating a setup.py script that controls installation of a Python package into a shared location on your machine. Essentially all Python packages are installed this way, even if it happens silently behind-the-scenes.
Below is a template bare-bones setup.py file. Fill it in with the relevant details for your package.
End of explanation
"""
!python setup.py build
"""
Explanation: 3e: Build the package
Now you should be able to "build" the package. In complex packages this will require more involved steps like linking against C or FORTRAN code, but for pure-python packages like yours, it simply involves filtering out some extraneous files and copying the essential pieces into a build directory.
End of explanation
"""
%%sh
cd build/lib.X-Y-Z #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
"""
Explanation: To test that it built successfully, the easiest thing to do is cd into the build/lib.X-Y-Z directory ("X-Y-Z" here is OS and machine-specific). Then you should be able to import <yourpkgname>. It's usually best to do this as a completely independent process in python. That way you can be sure you aren't accidentally using an old import as we saw above.
End of explanation
"""
%%sh
conda create -n test_<yourpkgname> anaconda #complete
source activate test_<yourpkgname> #complete
python setup.py install
"""
Explanation: 3f: Install the package
Alright, now that it looks like it's all working as expected, we can install the package. Note that if we do this willy-nilly, we'll end up with lots of packages, perhaps with the wrong versions, and it's easy to get confused about what's installed (there's no reliable uninstall command...) So before installing we first create a virtual environment using Anaconda, and install into that. If you don't have anaconda or a similar virtual environment scheme, you can just do python setup.py install. But just remember that this will be difficult to back out (hence the reason for Python environments in the first place!)
End of explanation
"""
%%sh
cd $HOME
source activate test_<yourpkgname> #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
"""
Explanation: Now we can try running the package from anywhere (not just the source code directory), as long as we're in the same environment that we installed the package in.
End of explanation
"""
!git #complete
"""
Explanation: 3g: Update the package on github
OK, it's now installable. You'll now want to make sure to update the github version to reflect these improvements. You'll need to add and commit all the files. You'll also want to update the README to instruct users that they should use python setup.py install to install the package.
End of explanation
"""
%%file -a ~/.pypirc
[distutils]
index-servers = pypi
[pypi]
repository = https://test.pypi.org/legacy/
username = <your user name goes here>
password = <your password goes here>
"""
Explanation: Problem 4: Publishing your package on (fake) PyPI
Now your package can be installed by anyone who comes across it on github. But it tends to scare some people that they need to download the source code and know git to use your code. The Python Package Index (PyPI), combined with the pip tool (now standard in Python), provides a much simpler way to distribute code. Here we will publish your code to a testing version of PyPI.
4a: Create a PyPI account
First you'll need an account on PyPI to register new packages. Go to the testing PyPI, and register. You'll also need to supply your login details in the .pypirc file in your home directory as shown below. (If it were the real PyPI you'd want to be more secure and not have your password in plain text. But for the testing server that's not really an issue.)
Note that if you've ever done something like this before and hence already have a .pypirc file, you might get unexpected results if you run this without moving/renaming the old version temporarily.
End of explanation
"""
!python setup.py sdist
"""
Explanation: 4b: Build a "source" version of your package
Use distutils to create the source distribution of your package.
Hint: You'll want to make sure your package version is something you want to release before executing the upload command. Released versions can't be duplicates of existing versions, and shouldn't end in "dev" or "b" or the like.
End of explanation
"""
!twine upload dist/<yourpkgname>-<version>.tar.gz
"""
Explanation: Verify that there is a <yourpkg>-<version>.tar.gz file in the dist directory. It should have all of the source code necessary for your package.
4c: Upload your package to PyPI
Once you have an account on PyPI (or testPyPI in our case) you can upload your distributions to PyPI using twine. If this is your first time uploading a distribution for a new project, twine will handle registering the project automatically filling out the details you provided in your setup.py.
End of explanation
"""
%%sh
conda create -n test_pypi_<yourpkgname> anaconda #complete
source activate test_pypi_<yourpkgname> #complete
pip install -i https://testpypi.python.org/pypi <yourpkgname>
%%sh
cd $HOME
source activate test_pypi_<yourpkgname> #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
"""
Explanation: 4d: Install your package with pip
The pip tool is a convenient way to install packages on PyPI. Again, we use Anaconda to create a testing environment to make sure everything worked correctly.
(Normally the -i wouldn't be necessary - we're using it here only because we're using the "testing" PyPI)
End of explanation
"""
|
bjodah/chempy | examples/protein_binding_unfolding_4state_model.ipynb | bsd-2-clause | import logging; logger = logging.getLogger('matplotlib'); logger.setLevel(logging.INFO) # or notebook filled with logging
from collections import OrderedDict, defaultdict
import math
import re
import time
from IPython.display import Image, Latex, display
import matplotlib.pyplot as plt
import sympy
from pyodesys.symbolic import ScaledSys
from pyodesys.native.cvode import NativeCvodeSys
from chempy import Substance, Equilibrium, Reaction, ReactionSystem
from chempy.kinetics.ode import get_odesys
from chempy.kinetics.rates import MassAction
from chempy.printing.tables import UnimolecularTable, BimolecularTable
from chempy.thermodynamics.expressions import EqExpr
from chempy.util.graph import rsys2graph
from chempy.util.pyutil import defaultkeydict
%matplotlib inline
"""
Explanation: Protein binding & unfolding – a four-state model
In this notebook we will look into the kinetics of a model system describing competing protein folding, aggregation and ligand binding. Using ChemPy we can define thermodynamic and kinetic parameters, and obtain
a representation of a system of ODEs which may be integrated efficiently. Since we use SymPy we can also
generate publication quality latex-expressions of our mathematical model directly from our source code. No need to write the equations multiple times in Python/Latex (or even C++ if the integration is to be performed a large number of times such as during parameter estimation).
First we will perform our imports:
End of explanation
"""
substances = OrderedDict([
('N', Substance('N', composition={'protein': 1}, latex_name='[N]')),
('U', Substance('U', composition={'protein': 1}, latex_name='[U]')),
('A', Substance('A', composition={'protein': 1}, latex_name='[A]')),
('L', Substance('L', composition={'ligand': 1}, latex_name='[L]')),
('NL', Substance('NL', composition={'protein': 1, 'ligand': 1}, latex_name='[NL]')),
])
"""
Explanation: Next we will define our substances. Note how we specify the composition; this will allow ChemPy to raise an error if any of the reactions we enter later would violate mass conservation. It will also allow us to reduce the number of unknowns in our ODE-system by using the linear invariants from the mass-conservation.
End of explanation
"""
def _gibbs(args, T, R, backend, **kwargs):
H, S, Cp, Tref = args
H2 = H + Cp*(T - Tref)
S2 = S + Cp*backend.log(T/Tref)
return backend.exp(-(H2 - T*S2)/(R*T))
def _eyring(args, T, R, k_B, h, backend, **kwargs):
H, S = args
return k_B/h*T*backend.exp(-(H - T*S)/(R*T))
Gibbs = EqExpr.from_callback(_gibbs, parameter_keys=('temperature', 'R'), argument_names=('H', 'S', 'Cp', 'Tref'))
Eyring = MassAction.from_callback(_eyring, parameter_keys=('temperature', 'R', 'k_B', 'h'), argument_names=('H', 'S'))
"""
Explanation: We will model thermodynamic properties using enthalpy (H), entropy (S) and heat capacity (Cp). Kinetic parameters (rate constants) are assumed to follow the Eyring equation:
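Written out, the two expressions coded in `_gibbs` and `_eyring` above are (our transcription of the callbacks, with $T^\circ$ denoting `Tref`):

```latex
K(T) = \exp\!\left(-\frac{\Delta H(T) - T\,\Delta S(T)}{R\,T}\right),
\qquad
\Delta H(T) = \Delta H^\circ + \Delta C_p\,(T - T^\circ),
\qquad
\Delta S(T) = \Delta S^\circ + \Delta C_p\,\ln\frac{T}{T^\circ}

k(T) = \frac{k_B\,T}{h}\,\exp\!\left(-\frac{\Delta H^{\neq} - T\,\Delta S^{\neq}}{R\,T}\right)
```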
End of explanation
"""
thermo_dis = Gibbs(unique_keys=('He_dis', 'Se_dis', 'Cp_dis', 'Tref_dis'))
thermo_u = Gibbs(unique_keys=('He_u', 'Se_u', 'Cp_u', 'Tref_u')) # ([He_u_R, Se_u_R, Cp_u_R, Tref])
kinetics_agg = Eyring(unique_keys=('Ha_agg', 'Sa_agg')) # EyringMassAction([Ha_agg, Sa_agg])
kinetics_as = Eyring(unique_keys=('Ha_as', 'Sa_as'))
kinetics_f = Eyring(unique_keys=('Ha_f', 'Sa_f'))
"""
Explanation: Next we define our free parameters:
End of explanation
"""
eq_dis = Equilibrium({'NL'}, {'N', 'L'}, thermo_dis, name='ligand-protein dissociation')
eq_u = Equilibrium({'N'}, {'U'}, thermo_u, {'L'}, {'L'}, name='protein unfolding')
r_agg = Reaction({'U'}, {'A'}, kinetics_agg, {'L'}, {'L'}, name='protein aggregation')
"""
Explanation: We will have two reversible reactions, and one irreversible reaction:
End of explanation
"""
rsys = ReactionSystem(
eq_dis.as_reactions(kb=kinetics_as, new_name='ligand-protein association') +
eq_u.as_reactions(kb=kinetics_f, new_name='protein folding') +
(r_agg,), substances, name='4-state CETSA system')
"""
Explanation: We formulate a system of 5 reactions honoring our reversible equilibria and our irreversible reaction:
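Spelled out, the five reactions are the forward and backward halves of the two equilibria plus the irreversible aggregation (L also appears as an inactive species in the unfolding and aggregation steps):

```latex
\mathrm{NL} \rightarrow \mathrm{N} + \mathrm{L}, \qquad
\mathrm{N} + \mathrm{L} \rightarrow \mathrm{NL}, \qquad
\mathrm{N} \rightarrow \mathrm{U}, \qquad
\mathrm{U} \rightarrow \mathrm{N}, \qquad
\mathrm{U} \rightarrow \mathrm{A}
```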
End of explanation
"""
vecs, comp = rsys.composition_balance_vectors()
names = rsys.substance_names()
dict(zip(comp, [dict(zip(names, v)) for v in vecs]))
"""
Explanation: We can query the ReactionSystem instance for what substances contain what components:
End of explanation
"""
rsys2graph(rsys, '4state.png', save='.', include_inactive=False)
Image('4state.png')
"""
Explanation: We can look at our ReactionSystem as a graph if we wish:
End of explanation
"""
rsys
"""
Explanation: ...or as a Table if that suits us better (note that "A" has green highlighting, denoting it is a terminal product)
End of explanation
"""
uni, not_uni = UnimolecularTable.from_ReactionSystem(rsys)
bi, not_bi = BimolecularTable.from_ReactionSystem(rsys)
assert not (not_bi & not_uni), "Only uni- & bi-molecular reactions expected"
uni
bi
"""
Explanation: Try hovering over the names to have them highlighted (this is particularly useful when working with large reaction sets).
We can also generate tables representing the unimolecular reactions involving each substance, or the matrix showing the bimolecular reactions:
End of explanation
"""
def pretty_replace(s, subs=None):
if subs is None:
subs = {
'Ha_(\w+)': r'\\Delta_{\1}H^{\\neq}',
'Sa_(\w+)': r'\\Delta_{\1}S^{\\neq}',
'He_(\w+)': r'\\Delta_{\1}H^\\circ',
'Se_(\w+)': r'\\Delta_{\1}S^\\circ',
'Cp_(\w+)': r'\\Delta_{\1}\,C_p',
'Tref_(\w+)': r'T^{\\circ}_{\1}',
}
for pattern, repl in subs.items():
s = re.sub(pattern, repl, s)
return s
def mk_Symbol(key):
if key in substances:
arg = substances[key].latex_name
else:
arg = pretty_replace(key.replace('temperature', 'T'))
return sympy.Symbol(arg)
autosymbols = defaultkeydict(mk_Symbol)
rnames = {}
for rxn in rsys.rxns:
rnames[rxn.name] = rxn.name.replace(' ', '~').replace('-','-')
rate_expr_str = sympy.latex(rxn.rate_expr()(autosymbols, backend=sympy, reaction=rxn))
lstr = r'$r(\mathrm{%s}) = %s$' % (rnames[rxn.name], rate_expr_str)
display(Latex(lstr))
ratexs = [autosymbols['r(\mathrm{%s})' % rnames[rxn.name]] for rxn in rsys.rxns]
rates = rsys.rates(autosymbols, backend=sympy, ratexs=ratexs)
for k, v in rates.items():
display(Latex(r'$\frac{[%s]}{dt} = %s$' % (k, sympy.latex(v))))
default_c0 = defaultdict(float, {'N': 1e-9, 'L': 1e-8})
params = dict(
R=8.314472, # or N_A & k_B
k_B=1.3806504e-23,
h=6.62606896e-34, # k_B/h == 2.083664399411865e10 K**-1 * s**-1
He_dis=-45e3,
Se_dis=-400,
Cp_dis=1.78e3,
Tref_dis=298.15,
He_u=60e3,
Cp_u=20.5e3,
Tref_u=298.15,
Ha_agg=106e3,
Sa_agg=70,
Ha_as=4e3,
Sa_as=-10,
Ha_f=90e3,
Sa_f=50,
temperature=50 + 273.15
)
"""
Explanation: Exporting expressions as LaTeX is quite straightforward:
End of explanation
"""
def Se0_from_Tm(Tm, token):
dH0, T0, dCp = params['He_'+token], params['Tref_'+token], params['Cp_'+token]
return dH0/Tm + (Tm-T0)*dCp/Tm - dCp*math.log(Tm/T0)
params['Se_u'] = Se0_from_Tm(48.2+273.15, 'u')
params['Se_u']
"""
Explanation: We have the melting temperature $T_m$ as a free parameter; however, the model is expressed in terms of $\Delta_u S^\circ$, so we will need to derive the latter from the former:
$$
\begin{cases}
\Delta G = 0 \\
\Delta G = \Delta H - T_m\Delta_u S
\end{cases}
$$
$$
\begin{cases}
\Delta H = \Delta H^\circ + \Delta C_p \left( T_m - T^\circ \right) \\
\Delta S = \Delta S^\circ + \Delta C_p \ln\left( \frac{T_m}{T^\circ} \right)
\end{cases}
$$
this gives us the following equation:
$$
\Delta H^\circ + \Delta C_p \left( T_m - T^\circ \right) = T_m \left( \Delta S^\circ + \Delta C_p \ln\left( \frac{T_m}{T^\circ} \right) \right)
$$
Solving for $\Delta S^\circ$:
$$
\Delta S^\circ = T_m^{-1}\left( \Delta H^\circ + \Delta C_p \left( T_m - T^\circ \right) \right) - \Delta C_p \ln\left( \frac{T_m}{T^\circ} \right)
$$
End of explanation
"""
params_c0 = default_c0.copy()
params_c0.update(params)
for rxn in rsys.rxns:
print('%s: %.5g' % (rxn.name, rxn.rate_expr()(params_c0, reaction=rxn)))
"""
Explanation: If we want to see the numerical values for the rate of the individual reactions it is quite easy:
End of explanation
"""
odesys, extra = get_odesys(rsys, include_params=False, SymbolicSys=ScaledSys, dep_scaling=1e9)
len(odesys.exprs) # how many (symbolic) expressions are there in this representation?
"""
Explanation: By using pyodesys we can generate a system of ordinary differential equations:
End of explanation
"""
h0max = extra['max_euler_step_cb'](0, default_c0, params)
h0max
"""
Explanation: Numerical integration of ODE systems requires a guess for the initial step-size. We can derive an upper bound for an "Euler-forward step" from initial concentrations and restrictions on mass-conservation:
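The idea can be sketched crudely as follows (this is our own illustration, not pyodesys's actual `max_euler_step_cb`): a forward-Euler step $h$ must not move any species further than a conservation-derived bound $c_{\max}$, which caps $h$ at $c_{\max}/|\dot c|$ per species.

```python
import numpy as np

def euler_h0_upper_bound(rates, c_max):
    """Largest forward-Euler step such that no species moves by more than
    c_max in a single step (illustrative sketch only)."""
    rates = np.abs(np.asarray(rates, dtype=float))
    with np.errstate(divide='ignore'):
        steps = np.where(rates > 0, c_max / rates, np.inf)
    return float(steps.min())
```

For instance, `euler_h0_upper_bound([1e-9, -2e-9, 0.0], 1.1e-8)` gives 5.5 (up to rounding): the fastest-changing species limits the step.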
End of explanation
"""
def integrate_and_plot(system, c0=None, first_step=None, t0=0, stiffness=False, nsteps=9000, **kwargs):
if c0 is None:
c0 = default_c0
if first_step is None:
first_step = h0max*1e-11
tend = 3600*24
t_py = time.time()
kwargs['atol'] = kwargs.get('atol', 1e-11)
kwargs['rtol'] = kwargs.get('rtol', 1e-11)
res = system.integrate([t0, tend], c0, params, integrator='cvode', nsteps=nsteps,
first_step=first_step, **kwargs)
t_py = time.time() - t_py
if stiffness:
plt.subplot(1, 2, 1)
_ = system.plot_result(xscale='log', yscale='log')
_ = plt.legend(loc='best')
plt.gca().set_ylim([1e-16, 1e-7])
plt.gca().set_xlim([1e-11, tend])
if stiffness:
if stiffness is True:
stiffness = 0
ratios = odesys.stiffness()
plt.subplot(1, 2, 2)
plt.yscale('linear')
plt.plot(odesys._internal[0][stiffness:], ratios[stiffness:])
for k in ('time_wall', 'time_cpu'):
print('%s = %.3g' % (k, res[2][k]), end=', ')
print('time_python = %.3g' % t_py)
return res
_, _, info = integrate_and_plot(odesys)
assert info['internal_yout'].shape[1] == 5
{k: v for k, v in info.items() if not k.startswith('internal')}
"""
Explanation: Now let's put our ODE-system to work:
End of explanation
"""
native = NativeCvodeSys.from_other(odesys, first_step_expr=0*odesys.indep)
_, _, info_native = integrate_and_plot(native)
{k: v for k, v in info_native.items() if not k.startswith('internal')}
"""
Explanation: pyodesys even allows us to generate C++ code which is compiled to a fast native extension module:
End of explanation
"""
info['time_wall']/info_native['time_wall']
from chempy.kinetics._native import get_native
native2 = get_native(rsys, odesys, 'cvode')
_, _, info_native2 = integrate_and_plot(native2, first_step=0.0)
{k: v for k, v in info_native2.items() if not k.startswith('internal')}
"""
Explanation: Note how much smaller "time_cpu" was here
End of explanation
"""
cses, (jac_in_cse,) = odesys.be.cse(odesys.get_jac())
jac_in_cse
odesys.jacobian_singular()
"""
Explanation: We have one complication: due to linear dependencies in our formulation of the system of ODEs, our Jacobian is singular:
End of explanation
"""
A, comp_names = rsys.composition_balance_vectors()
A, comp_names, list(rsys.substances.keys())
"""
Explanation: Since implicit methods (which are required for stiff cases often encountered in kinetic modelling) uses the Jacboian (or rather I - γJ) in the modified Newton's method we may get failures during integration (depending on step size and scaling). What we can do is to identify linear dependencies based on composition of the materials and exploit the invariants to reduce the dimensionality of the system of ODEs:
End of explanation
"""
y0 = {odesys[k]: sympy.Symbol(k+'0') for k in rsys.substances.keys()}
analytic_L_N = extra['linear_dependencies'](['L', 'N'])
analytic_L_N(None, y0, None, sympy)
assert len(analytic_L_N(None, y0, None, sympy)) > 0 # ensure the callback is idempotent
analytic_L_N(None, y0, None, sympy), list(enumerate(odesys.names))
"""
Explanation: That made sense: two different components can give us (up to) two linear invariants.
Let's look at what those invariants look like symbolically:
End of explanation
"""
from pyodesys.symbolic import PartiallySolvedSystem
no_invar = dict(linear_invariants=None, linear_invariant_names=None)
psysLN = PartiallySolvedSystem(odesys, analytic_L_N, **no_invar)
print(psysLN.be.cse(psysLN.get_jac())[1][0])
psysLN['L'], psysLN.jacobian_singular(), len(psysLN.exprs)
"""
Explanation: One can appreciate that one does not need to enter such expressions manually (at least for larger systems); that would be both tedious and error prone.
Let's see how we can use pyodesys to leverage this information on redundancy:
End of explanation
"""
psysLA = PartiallySolvedSystem(odesys, extra['linear_dependencies'](['L', 'A']), **no_invar)
print(psysLA.be.cse(psysLA.get_jac())[1][0])
psysLA['L'], psysLA.jacobian_singular()
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
_, _, info_LN = integrate_and_plot(psysLN, first_step=0.0)
assert info_LN['internal_yout'].shape[1] == 3
plt.subplot(1, 2, 2)
_, _, info_LA = integrate_and_plot(psysLA, first_step=0.0)
assert info_LA['internal_yout'].shape[1] == 3
({k: v for k, v in info_LN.items() if not k.startswith('internal')},
{k: v for k, v in info_LA.items() if not k.startswith('internal')})
"""
Explanation: Above we chose to get rid of 'L' and 'N', but we could also have removed 'A' instead of 'N':
End of explanation
"""
from pyodesys.symbolic import SymbolicSys
psys_root = SymbolicSys.from_other(psysLN, roots=[psysLN['N'] - psysLN['A']])
psys_root.roots
psysLN['N']
psysLN.analytic_exprs
psysLN.names
psysLN.dep
tout1, Cout1, info_root = integrate_and_plot(psys_root, first_step=0.0, return_on_root=True)
print('Time at which concentrations of N & A are equal: %.4g' % (tout1[-1]))
"""
Explanation: We can also have the solver return to us when some precondition is fulfilled, e.g. when the concentrations of 'N' and 'A' are equal:
End of explanation
"""
xout2, yout2, info_LA = integrate_and_plot(psysLA, first_step=0.0, t0=tout1[-1], c0=dict(zip(odesys.names, Cout1[-1, :])))
"""
Explanation: From this point in time onwards we could for example choose to continue our integration using another formulation of the ODE-system:
End of explanation
"""
print('\troot\tLA\troot+LA\tLN')
for k in 'n_steps nfev njev'.split():
print('\t'.join(map(str, (k, info_root[k], info_LA[k], info_root[k] + info_LA[k], info_LN[k]))))
"""
Explanation: Let's compare the total number steps needed for our different approaches:
End of explanation
"""
from pyodesys.symbolic import symmetricsys
logexp = lambda x: sympy.log(x + 1e-20), lambda x: sympy.exp(x) - 1e-20
def psimp(exprs):
return [sympy.powsimp(expr.expand(), force=True) for expr in exprs]
LogLogSys = symmetricsys(logexp, logexp, exprs_process_cb=psimp)
unscaled_odesys, unscaled_extra = get_odesys(rsys, include_params=False)
tsys = LogLogSys.from_other(unscaled_odesys)
unscaledLN = PartiallySolvedSystem(unscaled_odesys, unscaled_extra['linear_dependencies'](['L', 'N']), **no_invar)
unscaledLA = PartiallySolvedSystem(unscaled_odesys, unscaled_extra['linear_dependencies'](['L', 'A']), **no_invar)
assert sorted(unscaledLN.free_names) == sorted(['U', 'A', 'NL'])
assert sorted(unscaledLA.free_names) == sorted(['U', 'N', 'NL'])
tsysLN = LogLogSys.from_other(unscaledLN)
tsysLA = LogLogSys.from_other(unscaledLA)
_, _, info_t = integrate_and_plot(tsys, first_step=0.0)
{k: info_t[k] for k in ('nfev', 'njev', 'n_steps')}
"""
Explanation: In this case it did not earn us much; one reason is that we actually don't need to find the root with as high accuracy as we do. But having the option is still useful.
Using pyodesys and SymPy we can perform a variable transformation and solve the transformed system if we so wish:
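Concretely, the `logexp` pair defined in the cell above applies, to both the dependent and the independent variables (with $\epsilon = 10^{-20}$ guarding against exact zeros):

```latex
u_i = \ln(y_i + \epsilon), \qquad \tau = \ln(t + \epsilon),
\qquad
\frac{\mathrm{d}u_i}{\mathrm{d}\tau}
  = \frac{t + \epsilon}{y_i + \epsilon}\,\frac{\mathrm{d}y_i}{\mathrm{d}t}
```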
End of explanation
"""
native_tLN = NativeCvodeSys.from_other(tsysLN)
_, _, info_tLN = integrate_and_plot(native_tLN, first_step=1e-9, nsteps=18000, atol=1e-9, rtol=1e-9)
{k: info_tLN[k] for k in ('nfev', 'njev', 'n_steps')}
_, _, info_tLN = integrate_and_plot(tsysLN, first_step=1e-9, nsteps=18000, atol=1e-8, rtol=1e-8)
{k: info_tLN[k] for k in ('nfev', 'njev', 'n_steps')}
_, _, info_tLA = integrate_and_plot(tsysLA, first_step=0.0)
{k: info_tLA[k] for k in ('nfev', 'njev', 'n_steps')}
"""
Explanation: We can even apply the transformation to our reduced systems (doing so by hand is excessively painful and error prone):
End of explanation
"""
print(open(next(filter(lambda s: s.endswith('.cpp'), native2._native._written_files))).read())
"""
Explanation: Finally, let's take a look at the C++ code which was generated for us:
End of explanation
"""
|
FESOM/pyfesom | notebooks/plot_simple_diagnostics.ipynb | mit | import sys
sys.path.append("../")
import pyfesom as pf
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
#%matplotlib notebook
%matplotlib inline
from matplotlib import cm
from netCDF4 import Dataset, MFDataset
"""
Explanation: Getting the data
Please look at the information in the get_data.ipynb notebook. You have to end up with a swift.dkrz.de folder located somewhere on your system. All data used in these examples are located in this folder.
End of explanation
"""
meshpath ='../../swift.dkrz.de/COREII'
mesh = pf.load_mesh(meshpath, usepickle=True)
"""
Explanation: First, as usual, load the mesh:
End of explanation
"""
fl = Dataset('../../swift.dkrz.de/COREII_data/fesom.1951.oce.mean.nc')
fl.variables['temp'].shape
"""
Explanation: Load data for one year:
End of explanation
"""
%%time
temp_mean = fl.variables['temp'][:,:].mean(axis=0)
"""
Explanation: Make a mean over all timesteps:
End of explanation
"""
m = Basemap(projection='robin',lon_0=0, resolution='c')
x, y = m(mesh.x2, mesh.y2)
%%time
level_data, elem_no_nan = pf.get_data(temp_mean,mesh,100)
plt.figure(figsize=(10,7))
m.drawmapboundary(fill_color='0.9')
m.drawcoastlines()
levels = np.arange(-3., 30., 1)
plt.tricontourf(x, y, elem_no_nan[::], level_data, levels = levels, \
cmap=cm.Spectral_r, extend='both')
cbar = plt.colorbar(orientation='horizontal', pad=0.03);
cbar.set_label("Temperature, $^{\circ}$C")
plt.title('Temperature at 100m depth')
plt.tight_layout()
"""
Explanation: And plot the data
End of explanation
"""
%%time
temp_std = fl.variables['temp'][:,:].std(axis=0)
%%time
level_data, elem_no_nan = pf.get_data(temp_std,mesh,100)
plt.figure(figsize=(10,7))
m.drawmapboundary(fill_color='0.9')
m.drawcoastlines()
levels = np.arange(0, 3., 0.2)
eps=(levels.max()-levels.min())/50.
level_data[level_data<=levels.min()]=levels.min()+eps
level_data[level_data>=levels.max()]=levels.max()-eps
plt.tricontourf(x, y, elem_no_nan[::], level_data, levels = levels, \
cmap=cm.magma_r, extend='both')
cbar = plt.colorbar(orientation='horizontal', pad=0.03);
cbar.set_label("Temperature, $^{\circ}$C")
plt.title('Temperature at 100m depth')
plt.tight_layout()
"""
Explanation: Do STD instead of mean
End of explanation
"""
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
def sp(mon=0, depth = 100):
level_data, elem_no_nan = pf.get_data(fl.variables['temp'][mon-1,:]-temp_mean,\
mesh, depth)
plt.figure(figsize=(10,7))
m.drawmapboundary(fill_color='0.9')
m.drawcoastlines()
levels = np.arange(-2, 2., 0.2)
eps=(levels.max()-levels.min())/50.
level_data[level_data<=levels.min()]=levels.min()+eps
level_data[level_data>=levels.max()]=levels.max()-eps
plt.tricontourf(x, y, elem_no_nan[::], level_data, levels = levels, \
cmap=cm.coolwarm, extend='both')
cbar = plt.colorbar(orientation='horizontal', pad=0.03);
cbar.set_label("Temperature, $^{\circ}$C")
plt.title('Temperature at {}m depth, month {}'.format(str(depth),str(mon)))
plt.tight_layout()
interact(sp, mon =(1,24), depth = (0,5000,100));
"""
Explanation: Or make an interactive plotting interface:
End of explanation
"""
|
seniosh/StatisticalMethods | notes/LMreview4.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8.0, 8.0)
# the model parameters
a = np.pi
b = 1.6818
# my arbitrary constants
mu_x = np.exp(1.0) # see definitions above
tau_x = 1.0
s = 1.0
N = 50 # number of data points
# get some x's and y's
x = mu_x + tau_x*np.random.randn(N)
y = a + b*x + s*np.random.randn(N)
plt.plot(x, y, 'o');
"""
Explanation: Review of the linear model
To begin playing with the guts of MCMC sampling, we'll use a distribution that we already know analytically, namely the posterior from fitting a linear model to data with Gaussian uncertainties. First, let's review the generative model for this case. This can be expressed as
$y_i \sim \mathrm{Normal}\left(a + b x_i, \sigma_i^2\right)$
for each data point, $(x_i,y_i)$.
Note: this is an alternative and more compact notation for writing that $P(y_i|x_i,a,b)$ is Gaussian with mean $a+b x_i$ and width $\sigma_i$.
In words, the $y_i$ associated with each $x_i$ has an expectation value that lies along a line ($a+bx$), but actual values $y_i$ are scattered from this line according to a Gaussian of width $\sigma_i$. To keep things simple, we'll assume that the $x_i$ and $\sigma_i$ are known precisely. The equation above doesn't say anything about how the $x_i$ and $\sigma_i$ are distributed; for this case let's just assume
$x_i \sim \mathrm{Normal}(\mu_x, \tau_x^2)$
$\sigma_i = s \mathrm{~(constant)}$
for some $\mu_x$, $\tau_x$ and $s$ that we choose.
Let's now generate a fake data set.
End of explanation
"""
X = np.matrix(np.vstack([np.ones(len(x)), x]).T)
Y = np.matrix(y).T
"""
Explanation: Provided we claim to know the scatter perfectly(!), the likelihood function is both simple and familiar:
$\ln L = -\frac{1}{2}\sum_i \left(\frac{a+b x_i - y_i}{\sigma_i}\right)^2$ + constant (with our choice $\sigma_i = s = 1$, the weights drop out)
On Tuesday, you saw that this problem has an exact solution if we take uniform, improper priors, namely the classical Ordinary Least Squares solution. In (matrix) equations,
$\left(\begin{array}{c}a\\b\end{array}\right) \sim \mathrm{Normal}\left[\left(X^\mathrm{T}X\right)^{-1}X^\mathrm{T}y, \left(X^\mathrm{T}X\right)^{-1}\right]$, where $X = \left(1 ~~ x\right)$.
Let's spell this out in code, although note that direct inversion as done below is NOT the most numerically stable/preferable way to evaluate these expressions in the real world, but it will do for our purposes. First, we define $X$ and $y$ as matrices of the appropriate shape.
End of explanation
"""
np.linalg.inv(X.T*X)*X.T*Y
"""
Explanation: Now evaluate the mean of the distribution above:
End of explanation
"""
np.linalg.lstsq(X, y)[0]
"""
Explanation: Compare to the built-in numpy least squares result:
End of explanation
"""
np.linalg.inv(X.T*X)
"""
Explanation: We also care about the posterior covariance of $a$ and $b$:
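As a cross-check one can draw samples of $(a, b)$ from this analytic posterior. The sketch below regenerates a synthetic data set with the same constants used earlier in the notebook; the $s^2$ scaling of the covariance is our addition (trivial here since $s = 1$):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.e + rng.standard_normal(50)                # mu_x = e, tau_x = 1
y = np.pi + 1.6818 * x + rng.standard_normal(50)  # a = pi, b = 1.6818, s = 1
X = np.column_stack([np.ones_like(x), x])

XtX_inv = np.linalg.inv(X.T @ X)
post_mean = XtX_inv @ X.T @ y    # posterior mean of (a, b)
post_cov = 1.0**2 * XtX_inv      # posterior covariance, scaled by s^2 = 1
samples = rng.multivariate_normal(post_mean, post_cov, size=1000)
```

A histogram of `samples` should then peak near the true $(a, b)$.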
End of explanation
"""
|
analysiscenter/dataset | examples/experiments/freezeout/FreezeOut.ipynb | apache-2.0 | import os
import sys
import blosc
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqn
from collections import OrderedDict
%matplotlib inline
sys.path.append('../../..')
from batch import ResBatch, ax_draw
from batchflow import Dataset, DatasetIndex
sys.path.append('../../utils')
import utils
"""
Explanation: Compatible with batchflow @346efe0
Implementation of FreezeOut method with ResNet on TensorFlow and BatchFlow
FreezeOut
ResNet
In this notebook we research how the FreezeOut method works with a small version of ResNet. The main idea of FreezeOut is very simple: the early blocks of a deep neural net have the fewest parameters, but take up the most computation.
For this reason, the layers are frozen one-by-one and excluded from the backward pass.
Every block (in the original paper, every layer) has its own learning rate and is turned off after a certain number of iterations. Blocks are excluded in order, from the beginning to the end of the net.
For each block, the learning rate is given as:
$$\alpha_{i}(t) = 0.5 \, \alpha_{i}(0)\left(1 + \cos\left(\frac{\pi t}{t_i}\right)\right)$$
where:
i - the number of the layer.
$\alpha_{i}(t)$ - the learning rate of the i-th layer at iteration t.
$\alpha_i(0)$ - the initial learning rate of the layer: $\alpha_i(0) = \frac{\alpha}{t_i}$ if you use the scaled method, else $\alpha$, where $\alpha$ is the base learning rate.
$t$ - the number of the current iteration.
$t_i$ - the number of the last iteration for this layer; after this iteration the layer's learning rate will be zero.
For an experiment, after the last layer will be disabled, we set learning rate for all layers = $1e-2$ and continue learning.
The paper suggests two ways of choosing the last iteration at which to disable a layer:
* Linear - choose the total number of iterations ($t$) and the percentage after which the first layer is disabled; the $t_i$ of the remaining layers are spaced linearly between $t_0$ and $t$.
* Cube - the same, but the percentages are spaced with a cubic dependency.
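The schedule above can be sketched numerically; this is a minimal illustration, where the layer count, base rate, and iteration numbers are arbitrary assumptions rather than the notebook's configuration:

```python
import math

def freezeout_lr(alpha0, t, t_i, scaled=False):
    """Cosine-annealed learning rate of one layer at iteration t.

    alpha0 is the base learning rate; t_i is the iteration at which
    this layer freezes and drops out of the backward pass.
    """
    if t >= t_i:
        return 0.0  # frozen: the layer no longer trains
    init = alpha0 / t_i if scaled else alpha0
    return 0.5 * init * (1 + math.cos(math.pi * t / t_i))

# Linear spacing of freeze iterations for 5 layers, mirroring the
# (i/10 + 0.5) fractions used in the plotting code below
total_iters = 300
t_last = [int(round(total_iters * (0.5 + i / 10))) for i in range(1, 6)]
rates_at_100 = [freezeout_lr(0.04, 100, ti) for ti in t_last]
```

Earlier layers (smaller $t_i$) anneal faster, so at any fixed iteration they already sit at a lower rate.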
End of explanation
"""
optimal_params = {
'iteration': [300] * 4,
'learning_rate': [0.04, 0.06, 10, 14],
'degree': [3, 1, 3, 1],
'scaled': [False] * 2 + [True] * 2
}
optimal_params = OrderedDict(sorted(optimal_params.items(), key=lambda x: x[0]))
"""
Explanation: We will train the model with the following parameters
End of explanation
"""
plt.style.use('seaborn-poster')
plt.style.use('ggplot')
iteration = 300
_, axarr = plt.subplots(2, 2)
axarr=axarr.reshape(-1)
for params in range(4):
graph = []
for i in range(1, 6):
gefault_learning = optimal_params['learning_rate'][params]
last = int(iteration*(i/10 + 0.5) ** optimal_params['degree'][params])
if optimal_params['scaled'][params] == True:
graph.append([0.5 * gefault_learning/last * (1 + np.cos(np.pi * i / last)) for i in range(2, last+1)])
else:
graph.append([0.5 * gefault_learning * (1 + np.cos(np.pi * i / last)) for i in range(2, last+1)])
for i in range(len(graph)):
axarr[params].set_title('Changing the value of learning rate with params: \n \
lr={} degree={} it={} scaled={}'.format(gefault_learning, optimal_params['degree'][params], \
300, optimal_params['scaled'][params] ))
axarr[params].plot(graph[i], label='{} layer'.format(i))
axarr[params].set_xlabel('Iteration', fontsize=15)
axarr[params].set_ylabel('Learning rate', fontsize=15)
axarr[params].legend(fontsize=12)
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
"""
Explanation: About parameters:
Iteration - the number of the last iteration for the model with FreezeOut; after this iteration, the learning rates in all layers will be zero.
learning rate - the initial learning rate for all layers. In the scaled method, it is divided by the number of the last iteration for the layer.
degree - linear, square, or cubic dependency for disabling layers.
scaled - the scaled (True) or unscaled (False) method of disabling layers.
The pictures below show how the learning rate will change.
End of explanation
"""
src = './../MNIST_data'
with open(os.path.join(src, 'mnist_pics.blk'), 'rb') as file:
images = blosc.unpack_array(file.read()).reshape(-1, 28, 28)
with open(os.path.join(src, 'mnist_labels.blk'), 'rb') as file:
labels = blosc.unpack_array(file.read())
global_freeze_loss = []
pipelines = []
"""
Explanation: We'll compare a ResNet model with FreezeOut against a classic ResNet model.
First we load the MNIST data
End of explanation
"""
res_loss=[]
ix = DatasetIndex(range(50000))
sess = tf.Session()
sess.run(tf.global_variables_initializer())
test_dset = Dataset(ix, ResBatch)
test_pipeline = (test_dset.p
.train_res(res_loss, images[:50000], labels[:50000]))
for i in tqn(range(500)):
test_pipeline.next_batch(300,n_epochs=None, shuffle=2)
"""
Explanation: Then we create the dataset and pipeline
End of explanation
"""
params_list = pd.DataFrame(optimal_params).values
for params in tqn(params_list):
freeze_loss = []
config = {
'freeznet':{'iteration': params[1],
'degree': params[0],
'learning_rate': params[2],
'scaled': params[3]}
}
dataset = Dataset(ix, batch_class=ResBatch)
train_pipeline = (dataset
.pipeline(config=config)
.train_freez(freeze_loss, images[:50000], labels[:50000]))
for i in tqn(range(1, 501)):
train_pipeline.next_batch(300, n_epochs=None, shuffle=2)
global_freeze_loss.append(freeze_loss)
"""
Explanation: The config allows us to easily change the configuration of the model
End of explanation
"""
_, ax = plt.subplots(2,2)
ax = ax.reshape(-1)
for i in range(4):
utils.ax_draw(global_freeze_loss[i][:300], res_loss[:300], params_list[i], ax[i])
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
"""
Explanation: The plots below show the losses of the models with different learning-rate parameters
End of explanation
"""
_, ax = plt.subplots(2,2)
ax = ax.reshape(-1)
for i in range(4):
utils.ax_draw(global_freeze_loss[i], res_loss, params_list[i], ax[i])
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
"""
Explanation: You can see that the models using the scaled method are more unstable.
The spikes in the plots can be explained by large momentum. It is easy to see that the loss of the model with FreezeOut decreases faster than the loss of the classic model.
If we continue to train the models, you can see the effect of the new learning rate ($1e-2$, the same for all layers) on the loss value.
End of explanation
"""
|
davidthomas5412/PanglossNotebooks | MassLuminosityProject/SummerResearch/ValidatingBigmaliAtScale_20170626.ipynb | mit | %matplotlib inline
import corner
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from matplotlib import rc
from bigmali.hyperparameter import get
from scipy.stats import lognorm
rc('text', usetex=True)
orig = np.loadtxt('bigmaliorig.out', delimiter=' ')
prior = np.loadtxt('bigmaliprior.out', delimiter=' ')
truths = get()
truths.remove(235000000000000.0)
"""
Explanation: Validating Bigmali At Scale
We have generated the bigmali inference with synthetic datasets with masses from the Millennium Simulation ('orig' ndarray) and from the prior ('prior' ndarray). In this notebook we analyze these inferences.
Contents:
- Original Masses from Millennium Simulation
- Masses Drawn From Prior
- Highest Weight Hyperpoint
- Loglikelihood of Hyperseed
- Analyzing Weights In Each Hyperparameter Variable
- Discussion
End of explanation
"""
corner.corner(orig[:,:4],
labels=[r'$\alpha_1$',r'$\alpha_2$',r'$\alpha_4$',r'$S$'],
weights=(orig[:,4] - orig[:,4].min()) / abs(orig[:,4].min() - orig[:,4].max()),
quantiles=[0.16, 0.5, 0.84],
show_titles=True,
truths=truths,
smooth1d=True
);
"""
Explanation: Original Masses from Millennium Simulation
Note that the weights in these corner plots are relative log-likelihood weights, as opposed to likelihood weights.
End of explanation
"""
corner.corner(prior[:,:4],
labels=[r'$\alpha_1$',r'$\alpha_2$',r'$\alpha_4$',r'$S$'],
weights=(prior[:,4] - prior[:,4].min()) / abs(prior[:,4].min() - prior[:,4].max()),
quantiles=[0.16, 0.5, 0.84],
show_titles=True,
truths=truths,
smooth1d=True
);
"""
Explanation: Masses Drawn From Prior
Again, note that the weights in these corner plots are relative log-likelihood weights, as opposed to likelihood weights.
End of explanation
"""
ma1, ma2, ma4, maS, l = prior[prior[:,4].argmax()]
best = [ma1, ma2, ma4, maS]
labels = [r'$\alpha_1$', r'$\alpha_2$', r'$\alpha_4$', r'$S$']
for i in xrange(4):
for j in xrange(4):
plt.subplot(4,4,i*4+j+1)
plt.scatter(prior[:500,i], prior[:500,j], alpha=.6)
plt.scatter(best[i], best[j], color='gold')
plt.scatter(truths[i], truths[j], color='red')
plt.xlabel(labels[i])
plt.ylabel(labels[j])
plt.gcf().set_size_inches((12,12))
plt.tight_layout()
"""
Explanation: Highest Weight Hyperpoint
Again, it is not close to the true hyper-point.
End of explanation
"""
plt.title('Distribution of Log-Likelihood Weights')
plt.xlabel('Log-Likelihood')
plt.ylabel('Density')
plt.hist(prior[:,4],bins=500, normed=True, label='samples', alpha=0.5)
plt.gca().axvline(-987878.65344367269, label='hyperseed', color='r') # the likelihood of true hyper-point
plt.legend(loc=2);
"""
Explanation: Failure Mode 1: The posterior is not accurate, precise, or meaningful.
Loglikelihood of Hyperseed
The point with the highest likelihood does not appear to be close to the hyper-seed (the fixed hyper-parameters that generated the mock dataset). Next we compute the likelihood of the hyper-seed. We see that its log-likelihood does not fall to the far right of the distribution, as we would hope.
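One way to quantify "falls to the right" is the empirical percentile of the hyper-seed's log-likelihood among the samples. A minimal sketch with stand-in numbers (in the notebook, the samples would be prior[:, 4] and the seed value is the one quoted in the plot below):

```python
# Stand-in log-likelihoods; in the notebook these are prior[:, 4]
samples = [-990000.0, -988000.0, -991500.0, -987000.0, -989200.0]
seed_loglike = -987878.65344367269

# Fraction of samples the hyper-seed beats (higher log-likelihood is
# better); a well-behaved posterior would put this close to 1
percentile = sum(s <= seed_loglike for s in samples) / len(samples)
```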
End of explanation
"""
plt.title('Distribution of Log-Likelihood Weights')
plt.xlabel('Log-Likelihood')
plt.ylabel('Density')
plt.hist(prior[:,4],bins=500, normed=True, label='samples', alpha=0.5)
plt.gca().axvline(-987878.65344367269, label='hyperseed', color='r') # the likelihood of true hyper-point
plt.xlim([-1000000, -975000])
plt.legend(loc=2);
"""
Explanation: Zooming in on the far right of this distribution ...
End of explanation
"""
low = prior[np.where(prior[:,4] <= np.percentile(prior[:,4], 33))]
medium = prior[np.where((np.percentile(prior[:,4], 33) < prior[:,4]) & (prior[:,4] <= np.percentile(prior[:,4], 66)))]
high = prior[np.where(np.percentile(prior[:,4], 66) < prior[:,4])]
plt.title(r'$\alpha_1$ Histograms')
plt.hist(low[:,0], color='blue', alpha=0.5, normed=True, label='low')
plt.hist(medium[:,0], color='grey', alpha=0.5, normed=True, label='medium')
plt.hist(high[:,0], color='red', alpha=0.5, normed=True, label='high')
plt.gca().axvline(10.747809151611289, color='k', linewidth=2, label='hyper-seed')
plt.legend(loc=2)
plt.xlabel(r'$\alpha_1$')
plt.ylabel('Density');
plt.title(r'$\alpha_2$ Histograms')
plt.hist(low[:,1], color='blue', alpha=0.5, normed=True, label='low')
plt.hist(medium[:,1], color='grey', alpha=0.5, normed=True, label='medium')
plt.hist(high[:,1], color='red', alpha=0.5, normed=True, label='high')
plt.gca().axvline(0.36260141487530501, color='k', linewidth=2, label='hyper-seed')
plt.legend(loc=2)
plt.xlabel(r'$\alpha_2$')
plt.ylabel('Density');
plt.title(r'$\alpha_4$ Histograms')
plt.hist(low[:,2], color='blue', alpha=0.5, normed=True, label='low')
plt.hist(medium[:,2], color='grey', alpha=0.5, normed=True, label='medium')
plt.hist(high[:,2], color='red', alpha=0.5, normed=True, label='high')
plt.gca().axvline(1.1587242790463443, color='k', linewidth=2, label='hyper-seed')
plt.legend(loc=2)
plt.xlabel(r'$\alpha_4$')
plt.ylabel('Density');
plt.title('S Histograms')
plt.hist(low[:,3], color='blue', alpha=0.5, normed=True, label='low')
plt.hist(medium[:,3], color='grey', alpha=0.5, normed=True, label='medium')
plt.hist(high[:,3], color='red', alpha=0.5, normed=True, label='high')
plt.gca().axvline(0.1570168038792813, color='k', linewidth=2, label='hyper-seed')
plt.legend(loc=2)
plt.xlabel('S')
plt.ylabel('Density');
"""
Explanation: Failure Mode 2: The weight discrepancies are enormous, and hence we end up with one sample dominating the inference.
Analyzing Weights In Each Hyperparameter Variable
Next we break the inference samples into three equally sized buckets (low, medium, high) based on their weights. Then we plot histograms of these buckets for each hyper-parameter. Ideally we would see the high bucket concentrated around the hyper-seed and the low bucket furthest away. This is not what we observe: we see limited difference between the buckets.
It is also worth recalling that the mass-luminosity relationship is:
\begin{align}
P(\overline{L}|\overline{M},\alpha,S,\overline{z}) &= \frac{1}{\overline{L}S\sqrt{2\pi}}\exp\left(-\frac{(\ln \overline{L} - \ln \overline{\mu_L})^2}{2S^{2}}\right)\\
\overline{\mu_L} &= \exp(\alpha_1) \cdot \left(\frac{\overline{M}}{\alpha_3}\right)^{\alpha_2} \cdot (1+\overline{z})^{\alpha_4}
\end{align}
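As a sanity check on the relation, the mean term can be sketched directly. The hyper-parameter values below are the hyper-seeds quoted elsewhere in this notebook, and $\alpha_3 = 2.35\times10^{14}$ is the pivot mass removed from truths at the top; treat the specific inputs as an illustration:

```python
import math

def mean_luminosity(mass, z, a1, a2, a3, a4):
    """Median luminosity of the lognormal mass-luminosity relation above."""
    return math.exp(a1) * (mass / a3) ** a2 * (1 + z) ** a4

mu = mean_luminosity(1e14, 0.5,
                     a1=10.747809151611289, a2=0.36260141487530501,
                     a3=2.35e14, a4=1.1587242790463443)
```

Since $\alpha_2, \alpha_4 > 0$, luminosity increases with both mass and redshift.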
End of explanation
"""
print orig[:,4].mean()
print prior[:,4].mean()
"""
Explanation: At least drawing masses from the prior yields a higher mean log-likelihood than using the masses from the Millennium Simulation.
End of explanation
"""
|
GoogleCloudPlatform/vertex-pipelines-end-to-end-samples | pipelines/tfma_metrics_visualisations.ipynb | apache-2.0 | !pip install tensorflow_model_analysis==0.37.0 pandas==1.3.5 google_cloud_storage==1.43.0
"""
Explanation: TFMA Model Evaluation Visualisations
This Notebook guides the user through obtaining embedded HTML visualisations of the TFMA model evaluation metrics used and created during training. It must be run after the training pipeline has completed.
The following steps are required:
1. Install necessary packages
2. Define path to predictions file in GCS and desired metrics to evaluate
3. Run TFMA evaluation
4. Obtain HTML files to visualise
<span style="color:red">Disclaimer:</span> This Notebook is meant to be run as a Vertex AI Workbench within the GCP environment. If you wish to run this Notebook locally you would need to:
1. Download the predictions file you wish to evaluate from GCS into your local machine
2. Replace the csv_file variable to point to the local path instead
3. Download the <custom_metric_name>.py custom metric you wish to use from GCS into your local machine. Save these files in the same folder as this Notebook.
4. Comment out the Custom Metrics section of the Notebook.
5. Run the rest of the Notebook as normal
Install Packages
End of explanation
"""
# Visualisation-specific imports
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.view import render_slicing_metrics
from ipywidgets.embed import embed_minimal_html
import os
from google.cloud import storage
# TFMA Evaluation
import pandas as pd
from google.protobuf import text_format
import tensorflow_model_analysis as tfma
"""
Explanation: Import Packages
End of explanation
"""
"""This is the link to the predictions generated during the training pipeline, which are stored in GCS. These are the output of the "Predict Test Data"
component, and are saved in a Dataset Artefact called "predictions", which then acts as the input to the "Evaluate test metrics for <challenger>/<champion> model"
component
"""
csv_file = 'gs://alvaro-sandbox/pipeline_root/805011877165/tensorflow-train-pipeline-20220223132851/predict-tensorflow-model_-2494514806493544448/predictions'
label_column_name = "total_fare" # Label column name (this is the ground truth)
pred_column_name = "predictions" # Model prediction column name
metrics_names = ["MeanSquaredError"] # Metrics used to evaluate the model. Could be more than one (["MeanSquaredError", "<metric_name>"])
custom_metrics = {"SquaredPearson": "squared_pearson"} # Custom metrics used to evaluate the model. If none are used, leave it as custom_metrics = {}. If more
# than one is used, then custom_metrics = {"SquaredPearson": "squared_pearson", <"MetricName">:<"module_name">}
# Slicing types used during evaluation. If no slicing used, leave it as slicing_specs = []
slicing_specs=[
'feature_keys: ["payment_type"]',
'feature_keys: ["payment_type", "company"]',
'feature_values: [{key: "payment_type", value: "Cash"}]',
'feature_keys: ["company", "dayofweek"] feature_values: [{key: "payment_type", value: "Cash"}]',
]
# Location to pipeline assets. Used only if custom metrics are available
PIPELINE_FILES_GCS_PATH='gs://alvaro-sandbox/pipelines'
VERTEX_PROJECT_ID='datatonic-vertex-pipeline-dev'
"""
Explanation: User Inputs
End of explanation
"""
# The custom metric module must be downloaded from GCS where it is being stored.
# If no custom metrics are used, this cell won't run anything.
if custom_metrics:
custom_metrics_path = f"{PIPELINE_FILES_GCS_PATH}/training/assets/tfma_custom_metrics"
storage_client = storage.Client(project=VERTEX_PROJECT_ID)
for custom_metric in custom_metrics.values():
with open(f"{custom_metric}.py", "wb") as fp:
storage_client.download_blob_to_file(f"{custom_metrics_path}/{custom_metric}.py", fp)
for custom_metric in custom_metrics.values():
assert f"{custom_metric}.py" in os.listdir(), f"Custom Metric module {custom_metric}.py could not be found at {custom_metrics_path}"
print(f"Downloaded custom metric module {custom_metric}.py to Notebook storage")
else:
print("No custom metrics were specified by the user")
"""
Explanation: Custom Metrics
End of explanation
"""
df = pd.read_csv(csv_file) # Read predictions and convert to dataframe
# Iterate through all metrics
metrics_specs = ""
for metric in metrics_names:
metrics_specs += f'metrics {{ class_name: "{metric}" }}\n'
# Adding custom metrics if specified
if custom_metrics:
for class_name, module_name in custom_metrics.items():
metric_spec = f' {{ class_name: "{class_name}" module: "{module_name}" }}'
metrics_specs += f"metrics {metric_spec}\n"
# Iterate through all slices
slicing_spec_proto = "slicing_specs {}\n"
if slicing_specs:
for single_slice in slicing_specs:
slicing_spec_proto += f"slicing_specs {{ {single_slice} }}\n"
# Create evaluation configuration
protobuf = """
## Model information
model_specs {{
label_key: "{0}"
prediction_key: "{1}"
}}
## Post export metric information
metrics_specs {{
{2}
}}
## Slicing information inc. overall
{3}
"""
eval_config = text_format.Parse(
protobuf.format(
label_column_name, pred_column_name, metrics_specs, slicing_spec_proto
),
tfma.EvalConfig(),
)
print(eval_config)
"""
Explanation: Define TFMA model evaluation specs
End of explanation
"""
eval_result = tfma.analyze_raw_data(df, eval_config=eval_config, output_path="eval_outputs/")
evaluation = eval_result.get_metrics_for_all_slices()
"""
Explanation: Run Evaluation
This will save the results of the TFMA evaluation in a directory called eval_outputs, which is created by TFMA itself.
End of explanation
"""
def get_key_value_pair(key_value_string):
"""String manipulation to obtain the key-value pair from the slicing specification. Currently TFMA only
supports having a single key-value pair as part of a slicing specification. If this changes, this
function must also change.
Args:
key_value_string (str): String containing the key-value pair. This string has the following naming convention:
'feature_keys: ["<feature_key>"] feature_values: [{key: "<key>", value: "<value>"}]'. The string
manipulation aims to obtain the <key> and <value> names.
Returns:
key (str): Key name given in slicing spec.
value (str): Value name given in slicing spec.
"""
# Get key name
key = key_value_string\
.split("key:")[1]\
.split(",")[0]\
.replace('"',"")\
.replace("'","")\
.strip()
# Get value name
value = key_value_string\
.split("value:")[1]\
.split("}")[0]\
.replace('"',"")\
.replace("'","")\
.strip()
return key, value
def get_feature_keys(keys_string):
"""String manipulation to obtain all feature keys from a single slicing specification returned as a single list
Args:
keys_string (str): String containing the feature keys. This string has the following naming convention:
'feature_keys: ["<feature_one>", "<feature_two>"]'. The string manipulation aims to obtain
all of the <feature_XX> keys in a single list
Returns:
feature_keys (list): List containing all feature keys in the given slice
"""
feature_keys = [] # Initialise empty list
# Get all keys as list of string
"""
Need to convert string 'feature_keys: ["<feature_one>", "<feature_two>"]'
into list of strings ["<feature_one>", "<feature_two>"]
"""
keys_list = keys_string\
.split("feature_keys:")[1]\
.lstrip()\
.split("[")[1]\
.split("]")[0]\
.split(",")
# Clean every string item in list
for onekey in keys_list:
keyname = onekey.replace('"',"").replace("'","").strip()
feature_keys.append(keyname)
return feature_keys
os.makedirs("html_outputs/", exist_ok=True) # Save files in this local folder
# Create an output file for every slice type
for onespec in slicing_specs:
# If only feature keys are specified
if "feature_keys:" in onespec and "feature_values: " not in onespec:
spec_keys = get_feature_keys(onespec) # Get all keys as list of strings
specs = tfma.SlicingSpec(feature_keys=spec_keys) # Create slicing spec
plots_tfma = render_slicing_metrics(eval_result, slicing_spec=specs) # Plot metrics
embed_minimal_html(f'html_outputs/plots_{"_&_".join(spec_keys)}.html', views=[plots_tfma], title='Slicing Metrics')
# If only feature values are specified
elif "feature_values: " in onespec and "feature_keys:" not in onespec:
keyname, valname = get_key_value_pair(onespec) # Get key-value pair names
specs = tfma.SlicingSpec(feature_values={keyname:valname}) # Create slicing spec
plots_tfma = render_slicing_metrics(eval_result, slicing_spec=specs) # Plot metrics
embed_minimal_html(f'html_outputs/plots_{keyname}_-->_{valname}.html', views=[plots_tfma], title='Slicing Metrics')
# If a combination of feature keys and values are specified
elif "feature_keys:" in onespec and "feature_values: " in onespec:
keyname, valname = get_key_value_pair(onespec) # Get key-value pair names
spec_keys = get_feature_keys(onespec) # Get all keys as list of strings
specs = tfma.SlicingSpec(feature_keys=spec_keys,
feature_values={keyname:valname}) # Create slicing spec
plots_tfma = render_slicing_metrics(eval_result, slicing_spec=specs) # Plot metrics
embed_minimal_html(f'html_outputs/plots_{"_&_".join(spec_keys)}_<>_{keyname}_-->_{valname}.html', views=[plots_tfma], title='Slicing Metrics')
# Create a final plot without any slice, just for the overall metric
plots_tfma = render_slicing_metrics(eval_result)
embed_minimal_html(f'html_outputs/plots_overall.html', views=[plots_tfma], title='Slicing Metrics')
"""
Explanation: Save Evaluation in HTML Visualisations
This will save the HTML plots in a folder called html_outputs. It will create one file for every slice specified in slicing_specs, as well as a plot with the overall metrics, without any slice specified. For example, if
slicing_specs=[
'feature_keys: ["payment_type"]',
'feature_keys: ["payment_type", "company"]',
'feature_values: [{key: "payment_type", value: "Cash"}]',
'feature_keys: ["company", "dayofweek"] feature_values: [{key: "payment_type", value: "Cash"}]',
]
four HTML files will be created, as follows:
1. feature_keys: ["payment_type"] will show the metrics for all of the different payment_type values available
2. feature_keys: ["payment_type", "company"] will show the metrics for every unique combination of payment_type and company values available
3. feature_values: [{key: "payment_type", value: "Cash"}] will show the metrics only for the cases where the payment_type is Cash
4. feature_keys: ["company", "dayofweek"] feature_values: [{key: "payment_type", value: "Cash"}] will show the metrics for every unique combination of company and dayofweek wherever the payment_type is Cash.
Additionally, a fifth plot would be created, which contains the metrics with no slice applied.
Once the plots are created, to view and interact with them, double click on the file you wish to open. This will open a new tab with the name of the plot. Then click on Trust HTML and wait for a few seconds to see the plot.
End of explanation
"""
|
steinam/teacher | jup_notebooks/data-science-ipython-notebooks-master/scikit-learn/scikit-learn-pca.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn;
from sklearn import neighbors, datasets
import pylab as pl
seaborn.set()
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
X_reduced = pca.transform(X)
print("Reduced dataset shape:", X_reduced.shape)
import pylab as pl
pl.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y,
cmap='RdYlBu')
print("Meaning of the 2 components:")
for component in pca.components_:
print(" + ".join("%.3f x %s" % (value, name)
for value, name in zip(component,
iris.feature_names)))
"""
Explanation: scikit-learn-pca
Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas
Dimensionality Reduction: PCA
End of explanation
"""
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
"""
Explanation: Dimensionality Reduction: Principal Component Analysis in-depth
Here we'll explore Principal Component Analysis (PCA), an extremely useful linear dimensionality reduction technique: a powerful unsupervised method that looks for the directions in the data with the most variance.
It is useful for exploring data and visualizing relationships.
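Those maximum-variance directions are the eigenvectors of the data covariance matrix, ordered by eigenvalue; a minimal sketch on synthetic data (the sizes and scales here are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(0)
# 200 points whose first axis has std 3 and second axis std 0.5
X = rng.normal(size=(200, 2)) @ np.diag([3.0, 0.5])

# Principal axes = eigenvectors of the covariance matrix;
# eigenvalues = variance explained along each axis
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]  # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
```

The leading eigenvector here lines up with the stretched first axis, which is exactly what the PCA fit below recovers as its first component.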
It's easiest to visualize by looking at a two-dimensional dataset:
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_ratio_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
"""
Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:
End of explanation
"""
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
"""
Explanation: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance:
End of explanation
"""
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
"""
Explanation: As an aside, Isomap is a manifold learning method that works well where PCA doesn't, e.g. for data lying on a loop; for large datasets, randomized PCA can be used.
By specifying that we want to throw away 5% of the variance, the data is now compressed to 50% of its original size! Let's see what the data look like after this compression:
End of explanation
"""
|
theandygross/CancerData | Notebooks/get_all_MAFs.ipynb | mit | import NotebookImport
from Imports import *
from bs4 import BeautifulSoup
from urllib2 import HTTPError
"""
Explanation: <h1 class="alert alert-info">Download Data <small> <i class="icon-download"></i> Get All Available MAF Files from TCGA Data Portal</small></h1>
End of explanation
"""
PATH_TO_CACERT = '/cellar/users/agross/cacert.pem'
"""
Explanation: <div class='alert alert-warning' style='width:600px; font-size:16px'>
<h1>GLOBAL VARIABLE WARNING</h1>
Here I download updated data from the TCGA Data Portal.
This is a secure site which uses HTTPS. I had to give it a path
to my ca-cert for the download to work.
Download a copy of a generic cacert.pem [here](http://curl.haxx.se/ca/cacert.pem).
</div>
End of explanation
"""
out_path = OUT_PATH + '/MAFs_new_2/'
if not os.path.isdir(out_path):
os.makedirs(out_path)
maf_dashboard = 'https://confluence.broadinstitute.org/display/GDAC/MAF+Dashboard'
!curl --cacert $PATH_TO_CACERT $maf_dashboard -o tmp.html
"""
Explanation: Download most recent files from MAF dashboard
End of explanation
"""
f = open('tmp.html', 'rb').read()
soup = BeautifulSoup(f)
r = [l.get('href') for l in soup.find_all('a')
if l.get('href') != None
and '.maf' in l.get('href')]
"""
Explanation: Use BeautifulSoup to parse out all of the links in the table
End of explanation
"""
t = pd.read_table(f, nrows=10, sep='not_real_term', header=None, squeeze=True,
engine='python')
cols = ['Hugo_Symbol', 'NCBI_Build', 'Chromosome', 'Start_position',
'End_position', 'Strand', 'Reference_Allele',
'Tumor_Seq_Allele1', 'Tumor_Seq_Allele2',
'Tumor_Sample_Barcode', 'Protein_Change',
'Variant_Classification','Variant_Type']
maf = {}
for f in r:
try:
t = pd.read_table(f, nrows=10, sep='not_real_term', header=None,
squeeze=True,
engine='python')
skip = t.apply(lambda s: s.startswith('#'))
skip = list(skip[skip==True].index)
h = pd.read_table(f, header=0, index_col=None, skiprows=skip,
engine='python', nrows=0)
cc = list(h.columns.intersection(cols))
maf[f] = pd.read_table(f, header=0, index_col=None,
skiprows=skip,
engine='c',
usecols=cc)
except HTTPError:
print f
m2 = pd.concat(maf)
m3 = m2.dropna(axis=1, how='all')
"""
Explanation: Download all of the MAFs by following the links
This takes a while, as I'm downloading all of the data.
I read each table in twice: first to count the number of comment lines, and a second time to actually load the data.
Yes there is likely a more efficient way to do this, but I'm waiting on https://github.com/pydata/pandas/issues/2685
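The two-pass pattern can be sketched in isolation with the standard library; the MAF content here is a made-up fragment, and the real code above uses pandas read_table rather than csv:

```python
import csv

raw = "#version 2.4\n#center broad.mit.edu\nHugo_Symbol\tChromosome\nTP53\t17\n"
lines = raw.splitlines()

# Pass 1: find the leading '#' comment lines so they can be skipped
skip = [i for i, line in enumerate(lines) if line.startswith("#")]

# Pass 2: parse the actual table, skipping the comment rows
rows = list(csv.reader(
    (l for i, l in enumerate(lines) if i not in skip), delimiter="\t"))
header, data = rows[0], rows[1:]
```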
End of explanation
"""
m4 = m3[cols]
m4 = m4.reset_index()
#m4.index = map(lambda s: s.split('/')[-1], m4.index)
m4 = m4.drop_duplicates(subset=['Hugo_Symbol','Tumor_Sample_Barcode','Start_position'])
m4 = m4.reset_index()
m4.to_csv(out_path + 'mega_maf.csv')
"""
Explanation: Reduce the MAF down to the most useful columns
End of explanation
"""
m5 = m4.ix[m4.Variant_Classification != 'Silent']
cc = m5.groupby(['Hugo_Symbol','Tumor_Sample_Barcode']).size()
cc = cc.reset_index()
cc.to_csv(out_path + 'meta.csv')
cc.shape
"""
Explanation: Get the gene-by-patient mutation count matrix and save it
End of explanation
"""
|
alexgorban/models | research/object_detection/object_detection_tutorial.ipynb | apache-2.0 | !pip install -U --pre tensorflow=="2.*"
"""
Explanation: Object Detection API Demo
<table align="left"><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab
</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td></table>
Welcome to the Object Detection API. This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image.
Important: This tutorial helps you take the first step towards using the Object Detection API to build models. If you just need an off-the-shelf model that does the job, see the TFHub object detection example.
Setup
Important: If you're running on a local machine, be sure to follow the installation instructions. This notebook includes only what's necessary to run in Colab.
Install
End of explanation
"""
!pip install pycocotools
"""
Explanation: Make sure you have pycocotools installed
End of explanation
"""
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
"""
Explanation: Get tensorflow/models or cd to parent directory of the repository.
End of explanation
"""
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
"""
Explanation: Compile protobufs and install the object_detection package
End of explanation
"""
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
"""
Explanation: Imports
End of explanation
"""
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
"""
Explanation: Import the object detection module.
End of explanation
"""
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
"""
Explanation: Patches:
End of explanation
"""
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
model = model.signatures['serving_default']
return model
"""
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing the path.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
Loader
End of explanation
"""
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
"""
Explanation: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
End of explanation
"""
# If you want to test the code with your own images, just add their paths to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
"""
Explanation: For the sake of simplicity we will test on 2 images:
End of explanation
"""
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
"""
Explanation: Detection
Load an object detection model:
End of explanation
"""
print(detection_model.inputs)
"""
Explanation: Check the model's input signature; it expects a batch of 3-channel images of type uint8:
End of explanation
"""
detection_model.output_dtypes
detection_model.output_shapes
"""
Explanation: And returns several outputs:
End of explanation
"""
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
output_dict = model(input_tensor)
# All outputs are batch tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
"""
Explanation: Add a wrapper function to call the model, and cleanup the outputs:
End of explanation
"""
def show_inference(model, image_path):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = np.array(Image.open(image_path))
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
display(Image.fromarray(image_np))
for image_path in TEST_IMAGE_PATHS:
show_inference(detection_model, image_path)
"""
Explanation: Run it on each test image and show the results:
End of explanation
"""
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model("mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28")
"""
Explanation: Instance Segmentation
End of explanation
"""
masking_model.output_shapes
for image_path in TEST_IMAGE_PATHS:
show_inference(masking_model, image_path)
"""
Explanation: The instance segmentation model includes a detection_masks output:
End of explanation
"""
|
Nathx/think_stats | resolved/survival.ipynb | gpl-3.0 | from __future__ import print_function, division
import nsfg
import survival
import thinkstats2
import thinkplot
import pandas
import numpy
from lifelines import KaplanMeierFitter
from collections import defaultdict
import matplotlib.pyplot as pyplot
%matplotlib inline
"""
Explanation: This notebook contains examples related to survival analysis, based on Chapter 13 of<br>
Think Stats, 2nd Edition<br>
by Allen Downey<br>
available from thinkstats2.com
End of explanation
"""
preg = nsfg.ReadFemPreg()
complete = preg.query('outcome in [1, 3, 4]').prglngth
cdf = thinkstats2.Cdf(complete, label='cdf')
sf = survival.SurvivalFunction(cdf, label='survival')
thinkplot.Plot(sf)
thinkplot.Config(xlabel='duration (weeks)', ylabel='survival function')
#thinkplot.Save(root='survival_talk1', formats=['png'])
"""
Explanation: The first example looks at pregnancy lengths for respondents in the National Survey of Family Growth (NSFG). This is the easy case, because we can directly compute the CDF of pregnancy length; from that we can get the survival function:
End of explanation
"""
hf = sf.MakeHazard(label='hazard')
thinkplot.Plot(hf)
thinkplot.Config(xlabel='duration (weeks)', ylabel='hazard function', ylim=[0, 0.75], loc='upper left')
#thinkplot.Save(root='survival_talk2', formats=['png'])
"""
Explanation: About 17% of pregnancies end in the first trimester, but a large majority of pregnancies that exceed 13 weeks go to full term.
Next we can use the survival function to compute the hazard function.
End of explanation
"""
rem_life = sf.RemainingLifetime()
thinkplot.Plot(rem_life)
thinkplot.Config(xlabel='weeks', ylabel='mean remaining weeks', legend=False)
#thinkplot.Save(root='survival_talk3', formats=['png'])
"""
Explanation: The hazard function shows the same pattern: the lowest hazard in the second trimester, and by far the highest hazard around 30 weeks.
We can also use the survival curve to compute mean remaining lifetime as a function of how long the pregnancy has gone.
End of explanation
"""
resp = survival.ReadFemResp2002()
len(resp)
"""
Explanation: For the first 38 weeks, the finish line approaches nearly linearly. But at 39 weeks, the expected remaining time levels off abruptly. After that, each week that passes brings the finish line no closer.
I started with pregnancy lengths because they represent the easy case where the distribution of lifetimes is known. But often in observational studies we have a combination of complete cases, where the lifetime is known, and ongoing cases where we have a lower bound on the lifetime.
As an example, we'll look at the time until first marriage for women in the NSFG.
End of explanation
"""
complete = resp[resp.evrmarry == 1].agemarry
ongoing = resp[resp.evrmarry == 0].age
"""
Explanation: For complete cases, we know the respondent's age at first marriage. For ongoing cases, we have the respondent's age when interviewed.
End of explanation
"""
nan = complete[numpy.isnan(complete)]
len(nan)
"""
Explanation: There are only a few cases with unknown marriage dates.
End of explanation
"""
hf = survival.EstimateHazardFunction(complete, ongoing)
sf = hf.MakeSurvival()
"""
Explanation: EstimateHazardFunction is an implementation of Kaplan-Meier estimation.
With an estimated hazard function, we can compute a survival function.
End of explanation
"""
thinkplot.Plot(hf)
thinkplot.Config(xlabel='age (years)', ylabel='hazard function', legend=False)
#thinkplot.Save(root='survival_talk4', formats=['png'])
"""
Explanation: Here's the hazard function:
End of explanation
"""
thinkplot.Plot(sf)
thinkplot.Config(xlabel='age (years)',
ylabel='prob unmarried',
ylim=[0, 1],
legend=False)
#thinkplot.Save(root='survival_talk5', formats=['png'])
"""
Explanation: As expected, the hazard function is highest in the mid-20s. The function increases again after 35, but that is an artifact of the estimation process and a misleading visualization. Making a better representation of the hazard function is on my TODO list.
Here's the survival function:
End of explanation
"""
ss = sf.ss
end_ss = ss[-1]
prob_marry44 = (ss - end_ss) / ss
thinkplot.Plot(sf.ts, prob_marry44)
thinkplot.Config(xlabel='age (years)', ylabel='prob marry before 44', ylim=[0, 1], legend=False)
#thinkplot.Save(root='survival_talk6', formats=['png'])
"""
Explanation: The survival function naturally smooths out the noisiness in the hazard function.
With the survival curve, we can also compute the probability of getting married before age 44, as a function of current age.
End of explanation
"""
func = lambda pmf: pmf.Percentile(50)
rem_life = sf.RemainingLifetime(filler=numpy.inf, func=func)
thinkplot.Plot(rem_life)
thinkplot.Config(ylim=[0, 15],
xlim=[11, 31],
xlabel='age (years)',
ylabel='median remaining years')
#thinkplot.Save(root='survival_talk7', formats=['png'])
"""
Explanation: After age 20, the probability of getting married drops off nearly linearly.
We can also compute the median time until first marriage as a function of age.
End of explanation
"""
resp['event_times'] = resp.age
resp.loc[resp.evrmarry == 1, 'event_times'] = resp.agemarry
len(resp)
"""
Explanation: At age 11, young women are a median of 14 years away from their first marriage. At age 23, the median has fallen to 7 years. But a never-married woman at 30 is back to a median remaining time of 14 years.
I also want to demonstrate lifelines, which is a Python module that provides Kaplan-Meier estimation and other tools related to survival analysis.
To use lifelines, we have to get the data into a different format. First I'll add a column to the respondent DataFrame with "event times", meaning either age at first marriage OR age at time of interview.
End of explanation
"""
cleaned = resp.dropna(subset=['event_times'])
len(cleaned)
"""
Explanation: Lifelines doesn't like NaNs, so let's get rid of them:
End of explanation
"""
kmf = KaplanMeierFitter()
kmf.fit(cleaned.event_times, cleaned.evrmarry)
"""
Explanation: Now we can use the KaplanMeierFitter, passing the series of event times and a series of booleans indicating which events are complete and which are ongoing:
End of explanation
"""
thinkplot.Plot(sf)
thinkplot.Config(xlim=[0, 45], legend=False)
pyplot.grid()
kmf.survival_function_.plot()
"""
Explanation: Here are the results from my implementation compared with the results from Lifelines.
End of explanation
"""
complete = [1, 2, 3]
ongoing = [2.5, 3.5]
"""
Explanation: They are at least visually similar. Just to double check, I ran a small example:
End of explanation
"""
hf = survival.EstimateHazardFunction(complete, ongoing)
hf.series
"""
Explanation: Here's the hazard function:
End of explanation
"""
sf = hf.MakeSurvival()
sf.ts, sf.ss
"""
Explanation: And the survival function.
End of explanation
"""
T = pandas.Series(complete + ongoing)
E = [1, 1, 1, 0, 0]
"""
Explanation: My implementation only evaluates the survival function at times when a completed event occurred.
Next I'll reformat the data for lifelines:
End of explanation
"""
kmf = KaplanMeierFitter()
kmf.fit(T, E)
kmf.survival_function_
"""
Explanation: And run the KaplanMeier Fitter:
End of explanation
"""
resp5 = survival.ReadFemResp1995()
resp6 = survival.ReadFemResp2002()
resp7 = survival.ReadFemResp2010()
resp8 = survival.ReadFemResp2013()
"""
Explanation: The results are the same, except that the Lifelines implementation evaluates the survival function at all event times, complete or not.
Next, I'll use additional data from the NSFG to investigate "marriage curves" for successive generations of women.
Here's data from the last 4 cycles of the NSFG:
End of explanation
"""
def EstimateSurvival(resp):
"""Estimates the survival curve.
resp: DataFrame of respondents
returns: pair of HazardFunction, SurvivalFunction
"""
complete = resp[resp.evrmarry == 1].agemarry
ongoing = resp[resp.evrmarry == 0].age
hf = survival.EstimateHazardFunction(complete, ongoing)
sf = hf.MakeSurvival()
return hf, sf
"""
Explanation: This function takes a respondent DataFrame and estimates survival curves:
End of explanation
"""
def ResampleSurvivalByDecade(resps, iters=101, predict_flag=False, omit=[]):
"""Makes survival curves for resampled data.
resps: list of DataFrames
iters: number of resamples to plot
predict_flag: whether to also plot predictions
returns: map from group name to list of survival functions
"""
sf_map = defaultdict(list)
# iters is the number of resampling runs to make
for i in range(iters):
# we have to resample the data from each cycles separately
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
# then join the cycles into one big sample
sample = pandas.concat(samples, ignore_index=True)
for decade in omit:
sample = sample[sample.decade != decade]
# group by decade
grouped = sample.groupby('decade')
# and estimate (hf, sf) for each group
hf_map = grouped.apply(lambda group: EstimateSurvival(group))
if predict_flag:
MakePredictionsByDecade(hf_map)
# extract the sf from each pair and accumulate the results
for name, (hf, sf) in hf_map.items():
sf_map[name].append(sf)
return sf_map
"""
Explanation: This function takes a list of respondent files, resamples them, groups by decade, optionally generates predictions, and returns a map from group name to a list of survival functions (each based on a different resampling):
End of explanation
"""
def MakePredictionsByDecade(hf_map, **options):
"""Extends a set of hazard functions and recomputes survival functions.
For each group in hf_map, we extend hf and recompute sf.
hf_map: map from group name to (HazardFunction, SurvivalFunction)
"""
# TODO: this only works if the names and values are in increasing order,
# which is true when hf_map is a GroupBy object, but not generally
# true for maps.
names = hf_map.index.values
hfs = [hf for (hf, sf) in hf_map.values]
# extend each hazard function using data from the previous cohort,
# and update the survival function
for i, hf in enumerate(hfs):
if i > 0:
hf.Extend(hfs[i-1])
sf = hf.MakeSurvival()
hf_map[names[i]] = hf, sf
"""
Explanation: And here's how the predictions work:
End of explanation
"""
def MakeSurvivalCI(sf_seq, percents):
# find the union of all ts where the sfs are evaluated
ts = set()
for sf in sf_seq:
ts |= set(sf.ts)
ts = list(ts)
ts.sort()
# evaluate each sf at all times
ss_seq = [sf.Probs(ts) for sf in sf_seq]
# return the requested percentiles from each column
rows = thinkstats2.PercentileRows(ss_seq, percents)
return ts, rows
"""
Explanation: This function takes a list of survival functions and returns a confidence interval:
End of explanation
"""
resps = [resp5, resp6, resp7, resp8]
sf_map = ResampleSurvivalByDecade(resps)
"""
Explanation: Make survival curves without predictions:
End of explanation
"""
resps = [resp5, resp6, resp7, resp8]
sf_map_pred = ResampleSurvivalByDecade(resps, predict_flag=True)
"""
Explanation: Make survival curves with predictions:
End of explanation
"""
def PlotSurvivalFunctionByDecade(sf_map, predict_flag=False):
thinkplot.PrePlot(len(sf_map))
for name, sf_seq in sorted(sf_map.items(), reverse=True):
ts, rows = MakeSurvivalCI(sf_seq, [10, 50, 90])
thinkplot.FillBetween(ts, rows[0], rows[2], color='gray')
if predict_flag:
thinkplot.Plot(ts, rows[1], color='gray')
else:
thinkplot.Plot(ts, rows[1], label='%d0s'%name)
thinkplot.Config(xlabel='age (years)', ylabel='prob unmarried',
xlim=[15, 45], ylim=[0, 1], legend=True, loc='upper right')
"""
Explanation: This function plots survival curves:
End of explanation
"""
PlotSurvivalFunctionByDecade(sf_map)
#thinkplot.Save(root='survival_talk8', formats=['png'])
"""
Explanation: Now we can plot results without predictions:
End of explanation
"""
PlotSurvivalFunctionByDecade(sf_map_pred, predict_flag=True)
PlotSurvivalFunctionByDecade(sf_map)
#thinkplot.Save(root='survival_talk9', formats=['png'])
"""
Explanation: And plot again with predictions:
End of explanation
"""
|
dwhswenson/openpathsampling | examples/misc/sshooting-example.ipynb | mit | # Set simulation details.
D = 1.0 # Diffusion constant.
beta = 4.0 # Beta.
dt = 0.001 # Timestep delta t [time units]
tau = 0.5 # One-way trajectory length tau [time units] (= maximum correlation function time).
n_corr_points = 501 # Number of correlation function points (t = 0 counts also).
A_max = -0.4 # Upper boundary for region A.
S_min = -0.1 # Lower boundary for region S.
S_max = 0.1 # Upper boundary for region S.
B_min = 0.4 # Lower boundary for region B.
n_snapshots = 100 # Number of shooting points (= initial snapshots).
n_per_snapshot = 1 # Number of trajectories generated per shooting point.
temperature = 1 / beta
L = int(tau / dt)
n_steps_per_frame = int(L / (n_corr_points - 1))
trajectory_length = n_corr_points - 1
# Sanity checks
if L * dt != tau:
raise ValueError("Total simulation length is not a multiple of time step length:\n"
"L * dt = {0:d} * {1:f} = {2:f} != {3:f} = tau.".format(L, dt, L * dt, tau))
if n_steps_per_frame * (n_corr_points - 1) != L:
raise ValueError("Trajectory length is not a multiple of steps per frame and"
" number of requested correlation function points:\n"
"n_steps_per_frame * (n_corr_points - 1) = {0:d} * {1:d} = {2:d} != "
"{3:d} = L.".format(n_steps_per_frame, (n_corr_points - 1),
n_steps_per_frame * (n_corr_points - 1), L))
print("Temperature = ", temperature)
print("L = ", L)
print("n_steps_per_frame = ", n_steps_per_frame)
print("trajectory_length = ", trajectory_length)
pes = DoubleWell([1.0], [1.0])
topology = toys.Topology(
n_atoms = 1,
n_spatial = 1,
masses = [1.0],
pes = pes
)
# D = kB * T / gamma
#gamma = temperature / D
#integ = toys.LangevinBAOABIntegrator(dt, temperature, gamma)
integ = OverdampedLangevinIntegrator(dt, temperature, D)
options = {
'integ' : integ,
'n_frames_max' : 5000,
'n_steps_per_frame' : n_steps_per_frame
}
toy_eng = toys.Engine(
options = options,
topology = topology
)
toy_eng.initialized = True
paths.PathMover.engine = toy_eng
"""
Explanation: S-shooting
This example demonstrates how an S-shooting simulation is carried out and analyzed in OPS. The theoretical background for S-shooting is provided in the following publication:
Menzl, G., Singraber, A. & Dellago, C. S-shooting: a Bennett–Chandler-like method for the computation of rate constants from committor trajectories. Faraday Discuss. 195, 345–364 (2017), https://doi.org/10.1039/C6FD00124F
Here, we reproduce with OPS the example of a Brownian walker in a double-well potential given in the above publication. Please refer to the paper for a detailed description of nomenclature and simulation setup.
Note: For satisfactory results with good statistics a high number of initial snapshots should be used (~1000+). Unfortunately, with the current performance this will then NOT be a quick example. In order to keep the execution time within a few minutes the number of snapshots was set to a low number (see n_snapshots parameter below), but the results will probably not match the reference publication.
Trajectory sampling
First, we set up our simple test system: a single particle in a one-dimensional double-well potential. The particle is propagated with an overdamped Langevin integrator.
End of explanation
"""
def Epot(x):
toy_eng.positions = np.array([x])
return pes.V(toy_eng)
x0 = 0.0
k = 50
#k = 0
def Ebias(x):
return 0.5 * k * (x - x0)**2
# Choose initial point.
#x = np.random.uniform(S_min, S_max)
x = 0.0
points = [np.array([[x]])]
# Set displacement and number of MC steps for each new snapshot position.
mcdisp = 0.1
mcsteps = 100
# Initialize some counters.
mcacc = 0
mcrej = 0
mcmov = 0
E_S = 0.0
# Loop to create snapshots positions.
for i in range(n_snapshots - 1):
x = points[-1][0][0]
# Perform MC steps.
for m in range(mcsteps):
xnew = x + mcdisp * np.random.normal()
Eold = Epot(x) + Ebias(x)
Enew = Epot(xnew) + Ebias(xnew)
# Stay always in S.
if S_min < xnew and xnew < S_max:
# Metropolis MC scheme.
if Eold > Enew:
x = xnew
E = Enew
mcacc += 1
else:
if np.random.uniform() < np.exp(-beta * (Enew - Eold)):
x = xnew
E = Enew
mcacc += 1
else:
E = Eold
mcrej += 1
else:
E = Eold
mcrej += 1
# Add new energy to average.
E_S += E
mcmov += 1
# Append new position to list.
points.append(np.array([[x]]))
# Create snapshot template and list of initial snapshots.
template = toys.Snapshot(
coordinates = np.array([[0.0]]),
velocities = np.array([[0.0]]),
engine = toy_eng
)
snapshots = [template.copy_with_replacement(coordinates=point) for point in points]
print("-----------------------------------------------------")
print("Generated", n_snapshots, "initial snapshot positions.")
print( "-----------------------------------------------------")
print( "Average energy : ", E_S / (n_snapshots * mcsteps))
print( "Acceptance ratio : ", float(mcacc) / mcmov)
print( "-----------------------------------------------------")
"""
Explanation: Next, the initial snapshots are drawn within the region $S$ according to their Boltzmann weight $e^{-\beta E(x_0)}$ (here via a short Metropolis Monte Carlo run). Usually these configurations are available from a preceding simulation, e.g. umbrella sampling. For biased sampling, select $k \neq 0$ below.
End of explanation
"""
# Plot settings.
delta = 0.01
x_min = -1.5
x_max = 1.5
n_histo = 50
fig = plt.figure(figsize=plt.figaspect(0.4))
# First plot.
ax = fig.add_subplot(1, 2, 1)
ax.set_xlim(x_min, x_max)
ax.set_xlabel("Position")
ax.set_ylim(0.0, 1.7)
ax.set_ylabel("Energy")
# Epot and Ebias functions.
X = np.linspace(x_min, x_max, num=500)
Y_Epot = np.array([Epot(x) for x in X])
Y_Ebias = np.array([Ebias(x) for x in X])
ax.plot(X, Y_Epot)
ax.plot(X, Y_Ebias)
ax.plot(X, Y_Epot + Y_Ebias)
# Initial snapshot positions.
X_sn = np.array([s.xyz[0][0] for s in snapshots])
Y_sn = np.array([Epot(x) + Ebias(x) for x in X_sn])
ax.plot(X_sn, Y_sn, "ro")
# Region background colors.
ax.axvspan(x_min, A_max, alpha=0.5, color='#FEEBA9')
ax.axvspan(S_min, S_max, alpha=0.5, color='#F9AC7E')
ax.axvspan(B_min, x_max, alpha=0.5, color='#B4E2F1')
# Second plot.
ax = fig.add_subplot(1, 2, 2)
w = S_max - S_min
ax.set_xlim(S_min - 0.1 * w, S_max + 0.1 * w)
ax.set_xlabel("Position")
ax.set_ylabel("Number of snapshots")
# Calculate and plot expected histogram distribution.
X = np.linspace(S_min, S_max, num=500)
Y = np.array([np.exp(-beta * (Epot(x) + Ebias(x))) for x in X])
Y_factor = n_snapshots * w / n_histo / np.trapz(Y, X)
Y_distr = np.array([Y_factor * np.exp(-beta * (Epot(x) + Ebias(x))) for x in X])
ax.plot(X, Y_distr)
# Add histogram of initial snapshot positions.
X_sn = np.array([s.xyz[0][0] for s in snapshots])
ax.hist(X_sn, np.linspace(S_min, S_max, num=n_histo+1), facecolor="r", alpha=0.5)
ax.axvspan(S_min, S_max, alpha=0.5, color='#F9AC7E')
"""
Explanation: Let's plot the potential energy and the corresponding weight in the $S$ region:
End of explanation
"""
def pos1D(snapshot):
return snapshot.xyz[0][0]
cv_x = paths.CoordinateFunctionCV(name="cv_x", f=pos1D)
state_A = paths.CVDefinedVolume(cv_x, -10000, A_max)
state_S = paths.CVDefinedVolume(cv_x, S_min, S_max)
state_B = paths.CVDefinedVolume(cv_x, B_min, 10000)
"""
Explanation: Furthermore, we need a way to define the states $A$, $S$, and $B$; we use the $x$-coordinate:
End of explanation
"""
randomizer = paths.NoModification()
storage = paths.Storage("sshooting-1d.nc", mode="w", template=template)
"""
Explanation: Finally, we need a randomizer and a storage object:
End of explanation
"""
simulation = SShootingSimulation(
storage = storage,
engine = toy_eng,
state_S = state_S,
randomizer = randomizer,
initial_snapshots = snapshots,
trajectory_length = trajectory_length
)
"""
Explanation: Now all ingredients are combined to form the S-shooting simulation:
End of explanation
"""
%%time
simulation.run(n_per_snapshot=n_per_snapshot)
"""
Explanation: Upon calling the run method the trajectories are harvested and saved.
End of explanation
"""
# Plot settings.
y_min = -1.5
y_max = 1.5
max_traj = 10
fig, ax = plt.subplots()
# Set axis limits.
ax.set_xlim(0, 2 * tau)
ax.set_xlabel("Time")
ax.set_ylim(y_min, y_max)
ax.set_ylabel("Position")
# Loop over all trajectories and print "max_traj" (or less) of them.
for traj in storage.steps[0::max([1, int(len(storage.steps) / float(max_traj))])]:
# Calculate frame times.
X = [n_steps_per_frame * dt * i for i in range(0, len(traj.change.trials[-1]))]
# Get coordinates from trajectories.
Y = []
for snapshot in traj.change.trials[-1]:
for point in snapshot.coordinates:
Y.append(float(point))
# Plot trajectories.
ax.plot(X, Y, "-")
# Region background colors.
ax.axhspan(y_min, A_max, alpha=0.5, color='#FEEBA9')
ax.axhspan(S_min, S_max, alpha=0.5, color='#F9AC7E')
ax.axhspan(B_min, y_max, alpha=0.5, color='#B4E2F1')
storage.close()
"""
Explanation: Let's plot some of the obtained trajectories:
End of explanation
"""
storage = paths.Storage("sshooting-1d.nc", "r")
"""
Explanation: S-shooting analysis
Given the data from the S-shooting simulation we can now analyze the harvested trajectories. First, open the data file with the previously generated trajectories:
End of explanation
"""
def b(snapshot):
Ebias = 0.5 * k * (cv_x(snapshot) - x0)**2
return np.exp(-beta * Ebias)
cv_b = paths.CoordinateFunctionCV(name="cv_b", f=b)
"""
Explanation: Define bias function, required as argument for S-shooting analysis:
End of explanation
"""
%%time
results = SShootingAnalysis(steps=storage.steps,
states=[state_A, state_B, state_S],
bias=cv_b)
"""
Explanation: Now start the S-shooting analysis, this may take a while. If a bias was selected above ($k \neq 0$), add the bias CV as function argument.
End of explanation
"""
M, Ns, I_Bt, Ns_Bt, hAhB_Bt, results_per_snap = results.calculate_averages()
print("M = ", M)
print("Ns = ", Ns)
print("I_Bt = ", I_Bt)
print("Ns_Bt = ", Ns_Bt)
"""
Explanation: The calculate_averages function returns a number of important results, such as these numbers globally averaged over all snapshots and trials:
M ... Total number of harvested trajectory segments: $M$
Ns ... Average number of trajectory points in $S$: $\left<N_S[x(\tau)]\right>_B$
I_Bt ... Average of inverse bias sum over all trajectories: $\left<\frac{1}{\tilde{B}[x(\tau)]}\right>_B$
Ns_Bt ... Average of $N_S$ divided by bias sum over all trajectories: $\left<\frac{N_S[x(\tau)]}{\tilde{B}[x(\tau)]}\right>_B$
hAhB_Bt ... Array with average of $h_A(0)h_B(t)$ divided by bias sum over all trajectories: $\left<\frac{h_A(0)h_B(t)}{\tilde{B}[x(\tau)]}\right>_B$
In addition the same quantities are available for each snapshot individually via the returned dictionary (results_per_snap):
End of explanation
"""
snap = list(results_per_snap.keys())[0]
snap_result = results_per_snap[snap]
print("M = ", snap_result["M"])
print("Ns = ", snap_result["Ns"])
print("I_Bt = ", snap_result["I_Bt"])
print("Ns_Bt = ", snap_result["Ns_Bt"])
"""
Explanation: The per-snapshot results may be accessed like this:
End of explanation
"""
fig, ax = plt.subplots()
fig.set_size_inches(8, 6)
# Prepare and plot reference data.
ref_data = np.loadtxt("reference-sshooting.data")
ref_gradient = np.gradient(ref_data.T[1], ref_data.T[0][1]-ref_data.T[0][0])
ax.set_xlim([0.0, 0.5])
ax.set_ylim([0.0, 0.1375])
ax.set_xlabel(r't', fontsize=16)
ax.set_ylabel(r'$\left<h_A(0)h_B(t)\right>_S$', fontsize=16)
ax.plot(ref_data.T[0], ref_data.T[1], "C0-")
ax2 = ax.twinx()
ax2.set_xlim([0.0, 0.5])
ax2.set_ylim([0.0, 0.5])
ax2.set_ylabel(r'$\frac{d \left<h_A(0)h_B(t)\right>_S}{d t}$', fontsize=16)
ax2.plot(ref_data.T[0], ref_gradient, "C1-")
# Prepare and plot S-shooting data.
delta = n_steps_per_frame * dt
X = np.array([delta * i for i in range(trajectory_length + 1)])
Y = np.gradient(hAhB_Bt/I_Bt, delta)
ax.plot(X, hAhB_Bt/I_Bt, "C0x", ms=5)
ax2.plot(X, Y, "C1x", ms=5)
"""
Explanation: The hAhB_Bt array allows us to plot the time correlation function $\left<h_A(0)h_B(t)\right>_S$ in the $S$-ensemble and its derivative. We also included reference data from Brownian dynamcis from the reference paper.
End of explanation
"""
Ns_Bt / I_Bt
"""
Explanation: The average number of points in the $S$-region in the $S$-ensemble $\left<N_S[x(\tau)]\right>_S$ can be easily computed:
End of explanation
"""
fig, ax = plt.subplots()
fig.set_size_inches(8, 6)
X = np.array([delta * i for i in range(trajectory_length + 1)])
C_AB = results.C_AB(hA=0.487, hS=0.00407)
dC_ABdt = np.gradient(C_AB, delta)
ax.set_xlabel(r't', fontsize=14)
ax.set_ylabel(r'$\frac{d C_{AB}(t)}{d t}$', fontsize=16)
ax.plot(X, dC_ABdt, 'C3-')
"""
Explanation: Finally, if the averages $\left<h_A\right>$ and $\left<h_S\right>$ are provided (see the reference paper) we can plot the derivative of the time correlation function and thus estimate the rate $k_{AB}$ as the plateau value ($k_{AB} = 0.056$).
End of explanation
"""
|
ioos/notebooks_demos | notebooks/2018-03-01-erddapy.ipynb | mit | server = "https://data.ioos.us/gliders/erddap"
protocol = "tabledap"
dataset_id = "whoi_406-20160902T1700"
response = "mat"
variables = [
"depth",
"latitude",
"longitude",
"salinity",
"temperature",
"time",
]
constraints = {
"time>=": "2016-07-10T00:00:00Z",
"time<=": "2017-02-10T00:00:00Z",
"latitude>=": 38.0,
"latitude<=": 41.0,
"longitude>=": -72.0,
"longitude<=": -69.0,
}
from erddapy import ERDDAP
e = ERDDAP(server=server, protocol=protocol,)
e.dataset_id = dataset_id
e.variables = variables
e.constraints = constraints
print(e.get_download_url())
"""
Explanation: erddapy: a python client/URL builder for ERDDAP
ERDDAP has a RESTful API that is very convenient for creating web apps, data portals, etc. However, writing those URLs manually can be tedious and error prone.
This notebook walks through setting up an ERDDAP RESTful URL with the Python client, erddapy.
A typical ERDDAP RESTful URL looks like:
https://data.ioos.us/gliders/erddap/tabledap/whoi_406-20160902T1700.mat?depth,latitude,longitude,salinity,temperature,time&time>=2016-07-10T00:00:00Z&time<=2017-02-10T00:00:00Z &latitude>=38.0&latitude<=41.0&longitude>=-72.0&longitude<=-69.0
Let's break it down to smaller parts:
server: https://data.ioos.us/gliders/erddap/
protocol: tabledap
dataset_id: whoi_406-20160902T1700
response: .mat
variables: depth,latitude,longitude,salinity,temperature,time
constraints:
time>=2016-07-10T00:00:00Z
time<=2017-02-10T00:00:00Z
latitude>=38.0
latitude<=41.0
longitude>=-72.0
longitude<=-69.0
We can represent that easily in Python like in the cell below.
Feeding these variables into the erddapy.ERDDAP class, we create the URL builder object.
End of explanation
"""
def show_iframe(src):
from IPython.display import HTML
iframe = '<iframe src="{src}" width="100%" height="950"></iframe>'.format
return HTML(iframe(src=src))
show_iframe(e.get_download_url(response="html"))
"""
Explanation: If we change the response to html we can visualize the page.
End of explanation
"""
show_iframe(e.get_info_url(response="html"))
show_iframe(e.get_search_url(response="html"))
"""
Explanation: Additionally, the object has .get_info_url() and .get_search_url(), which can be used to obtain the info and search URLs, respectively.
End of explanation
"""
df = e.to_pandas(index_col="time (UTC)", parse_dates=True,).dropna()
df.head()
ds = e.to_xarray(decode_times=False)
ds["temperature"]
"""
Explanation: erddapy also brings some simple methods to download the data in some common data formats, like pandas.DataFrame and xarray.Dataset.
End of explanation
"""
%matplotlib inline
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(17, 5))
kw = dict(s=15, c=df["temperature (Celsius)"], marker="o", edgecolor="none")
cs = ax.scatter(df.index, df["depth (m)"], **kw)
ax.invert_yaxis()
ax.set_xlim(df.index[0], df.index[-1])
xfmt = mdates.DateFormatter("%H:%Mh\n%d-%b")
ax.xaxis.set_major_formatter(xfmt)
cbar = fig.colorbar(cs, orientation="vertical", extend="both")
cbar.ax.set_ylabel(r"Temperature ($^\circ$C)")
ax.set_ylabel("Depth (m)")
"""
Explanation: Here is a simple plot using the data from xarray.
End of explanation
"""
import pandas as pd
from erddapy import ERDDAP
e = ERDDAP(server="https://data.ioos.us/gliders/erddap")
df = pd.read_csv(e.get_search_url(response="csv", search_for="all"))
"We have {} tabledap, {} griddap, and {} wms endpoints.".format(
len(set(df["tabledap"].dropna())),
len(set(df["griddap"].dropna())),
len(set(df["wms"].dropna())),
)
"""
Explanation: One can build the proper variables programmatically, feed them into erddapy, and then build a service like this notebook. However, erddapy is also designed for interactive work: one can explore an ERDDAP server interactively from Python.
PS: note that in the example below we did not feed in any variables other than the server URL.
End of explanation
"""
kw = {
"standard_name": "sea_water_temperature",
"min_lon": -72.0,
"max_lon": -69.0,
"min_lat": 38.0,
"max_lat": 41.0,
"min_time": "2016-07-10T00:00:00Z",
"max_time": "2017-02-10T00:00:00Z",
"cdm_data_type": "trajectoryprofile",
}
search_url = e.get_search_url(response="csv", **kw)
search = pd.read_csv(search_url)
gliders = search["Dataset ID"].values
msg = "Found {} Glider Datasets:\n\n{}".format
print(msg(len(gliders), "\n".join(gliders)))
"""
Explanation: We can refine our search by adding some constraints.
End of explanation
"""
info_url = e.get_info_url(dataset_id=gliders[0], response="csv")
info = pd.read_csv(info_url)
info.head()
"""
Explanation: Last but not least we can inspect a specific dataset_id.
End of explanation
"""
cdm_profile_variables = info.loc[
info["Attribute Name"] == "cdm_profile_variables", "Variable Name"
]
print("".join(cdm_profile_variables))
"""
Explanation: With the info URL we can filter the data using attributes.
End of explanation
"""
e.get_var_by_attr(
dataset_id=gliders[0], cdm_profile_variables=lambda v: v is not None,
)
e.get_var_by_attr(
dataset_id="whoi_406-20160902T1700", standard_name="sea_water_temperature",
)
axis = e.get_var_by_attr(
dataset_id="whoi_406-20160902T1700", axis=lambda v: v in ["X", "Y", "Z", "T"],
)
axis
"""
Explanation: In fact, that is such a common operation that erddapy brings its own method for filtering data by attributes. In the next three cells we request the variable names that have a cdm_profile_variables attribute, a standard_name of sea_water_temperature, and an axis, respectively.
End of explanation
"""
def get_cf_vars(
e,
dataset_id,
standard_names=["sea_water_temperature", "sea_water_practical_salinity"],
):
"""Return the axis of a dataset_id the variable with the `standard_name`."""
variables = e.get_var_by_attr(
dataset_id=dataset_id, axis=lambda v: v in ["X", "Y", "Z", "T"]
)
if len(variables) < 4:
raise Exception("Expected at least 4 axis, found {!r}".format(variables))
var = e.get_var_by_attr(
dataset_id=dataset_id, standard_name=lambda v: v in standard_names
)
if len(var) > 2:
raise Exception(
"Found more than 1 variable with `standard_names` {}\n{!r}".format(
standard_names, var
)
)
variables.extend(var)
return variables
from requests.exceptions import HTTPError
def download_csv(url):
return pd.read_csv(url, index_col="time", parse_dates=True, skiprows=[1])
dfs = {}
for glider in gliders:
variables = get_cf_vars(
e,
dataset_id=glider,
standard_names=["sea_water_temperature", "sea_water_practical_salinity"],
)
try:
download_url = e.get_download_url(
dataset_id=glider,
protocol="tabledap",
variables=variables,
response="csv",
constraints=constraints,
)
except HTTPError:
continue
dfs.update({glider: download_csv(download_url)})
"""
Explanation: With this method one can, for example, request data from multiple datasets using the standard_name.
End of explanation
"""
k = 0
tiles = (
"http://services.arcgisonline.com/arcgis/rest/services/"
"World_Topo_Map/MapServer/MapServer/tile/{z}/{y}/{x}"
)
def plot_track(df, name, color="orange"):
df = df.reset_index().drop_duplicates("time", keep="first").sort_values("time")
locations = list(zip(df["latitude"].values, df["longitude"].values))
folium.PolyLine(
locations=locations,
color=color,
weight=8,
opacity=0.7,
tooltip=name,
popup=name,
).add_to(m)
from palettable import cubehelix
colors = cubehelix.Cubehelix.make(
n=len(dfs),
start_hue=240,
end_hue=-300,
min_sat=1,
max_sat=2.5,
min_light=0.3,
max_light=0.8,
gamma=0.9,
).hex_colors
import folium
m = folium.Map(location=(40.3052, -70.8833), zoom_start=7, tiles=tiles, attr="ESRI")
for name, df in list(dfs.items()):
plot_track(df, name, color=colors[k])
k += 1
m
def glider_scatter(df, ax, glider):
ax.scatter(df["temperature"], df["salinity"], s=10, alpha=0.5, label=glider)
fig, ax = plt.subplots(figsize=(7, 7))
ax.set_ylabel("salinity")
ax.set_xlabel("temperature")
ax.grid(True)
for glider, df in dfs.items():
glider_scatter(df, ax, glider)
ax.set_ylim(20, 41)
ax.set_xlim(2.5, 26)
ax.legend(bbox_to_anchor=(1.5, 0.5), loc="right")
"""
Explanation: To close this notebook, let's plot the tracks and a TS diagram for all the gliders found in that search.
End of explanation
"""
|
computational-class/cjc2016 | code/09.04-Feature-Engineering.ipynb | mit | data = [
{'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'},
{'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'},
{'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'},
{'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'}
]
"""
Explanation: Feature Engineering
<!--BOOK_INFORMATION-->
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
<!--NAVIGATION-->
< Hyperparameters and Model Validation | Contents | In Depth: Naive Bayes Classification >
Tidy numerical data in [n_samples, n_features] format vs. the real world.
Feature engineering: taking whatever information you have about your problem and turning it into numbers that you can use to build your feature matrix.
In this section, we will cover a few common examples of feature engineering tasks:
- features for representing categorical data,
- features for representing text,
- features for representing images,
- derived features for increasing model complexity, and
- imputation of missing data.
Often this process is known as vectorization
- as it involves converting arbitrary data into well-behaved vectors.
Categorical Features
One common type of non-numerical data is categorical data.
Housing prices,
- "price" and "rooms"
- "neighborhood" information.
End of explanation
"""
{'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3};
# It turns out that this is not generally a useful approach
"""
Explanation: You might be tempted to encode this data with a straightforward numerical mapping:
End of explanation
"""
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False, dtype=int )
vec.fit_transform(data)
"""
Explanation: A fundamental assumption: numerical features reflect algebraic quantities.
Queen Anne < Fremont < Wallingford
Wallingford - Queen Anne = Fremont
It does not make much sense.
One-hot encoding (Dummy coding) effectively creates extra columns indicating the presence or absence of a category with a value of 1 or 0, respectively.
- When your data comes as a list of dictionaries
- Scikit-Learn's DictVectorizer will do this for you:
End of explanation
"""
vec.get_feature_names()
"""
Explanation: Notice
the 'neighborhood' column has been expanded into three separate columns (why not four?)
representing the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood.
To see the meaning of each column, you can inspect the feature names:
End of explanation
"""
vec = DictVectorizer(sparse=True, dtype=int)
vec.fit_transform(data)
"""
Explanation: There is one clear disadvantage of this approach:
- if your category has many possible values, this can greatly increase the size of your dataset.
- However, because the encoded data contains mostly zeros, a sparse output can be a very efficient solution:
End of explanation
"""
sample = ['problem of evil',
'evil queen',
'horizon problem']
"""
Explanation: Many (though not yet all) of the Scikit-Learn estimators accept such sparse inputs when fitting and evaluating models.
two additional tools that Scikit-Learn includes to support this type of encoding:
- sklearn.preprocessing.OneHotEncoder
- sklearn.feature_extraction.FeatureHasher
Text Features
Another common need in feature engineering is to convert text to a set of representative numerical values.
Most automatic mining of social media data relies on some form of encoding the text as numbers.
- One of the simplest methods of encoding data is by word counts:
- you take each snippet of text, count the occurrences of each word within it, and put the results in a table.
For example, consider the following set of three phrases:
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X = vec.fit_transform(sample)
X
"""
Explanation: For a vectorization of this data based on word count, we could construct a column representing the word "problem," the word "evil," the word "horizon," and so on.
While doing this by hand would be possible, the tedium can be avoided by using Scikit-Learn's CountVectorizer:
End of explanation
"""
import pandas as pd
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
"""
Explanation: The result is a sparse matrix recording the number of times each word appears;
it is easier to inspect if we convert this to a DataFrame with labeled columns:
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer()
X = vec.fit_transform(sample)
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
"""
Explanation: Problem: The raw word counts put too much weight on words that appear very frequently.
term frequency-inverse document frequency (TF–IDF) weights the word counts by a measure of how often they appear in the documents.
The syntax for computing these features is similar to the previous example:
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3, 4, 5])
y = np.array([4, 2, 1, 3, 7])
plt.scatter(x, y);
"""
Explanation: For an example of using TF-IDF in a classification problem, see In Depth: Naive Bayes Classification.
Image Features
The simplest approach is what we used for the digits data in Introducing Scikit-Learn: simply using the pixel values themselves.
- But depending on the application, such approaches may not be optimal.
- A comprehensive summary of feature extraction techniques for images in the Scikit-Image project.
For one example of using Scikit-Learn and Scikit-Image together, see Feature Engineering: Working with Images.
Derived Features
Another useful type of feature is one that is mathematically derived from some input features.
We saw an example of this in Hyperparameters and Model Validation when we constructed polynomial features from our input data.
To convert a linear regression into a polynomial regression
- not by changing the model
- but by transforming the input!
- this is known as basis function regression, explored further in In Depth: Linear Regression.
For example, this data clearly cannot be well described by a straight line:
End of explanation
"""
from sklearn.linear_model import LinearRegression
X = x[:, np.newaxis]
model = LinearRegression().fit(X, y)
yfit = model.predict(X)
plt.scatter(x, y)
plt.plot(x, yfit);
"""
Explanation: Still, we can fit a line to the data using LinearRegression and get the optimal result:
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3, include_bias=False)
X2 = poly.fit_transform(X)
print(X2)
"""
Explanation: We need a more sophisticated model to describe the relationship between $x$ and $y$.
- One approach to this is to transform the data,
- adding extra columns of features to drive more flexibility in the model.
For example, we can add polynomial features to the data this way:
End of explanation
"""
model = LinearRegression().fit(X2, y)
yfit = model.predict(X2)
plt.scatter(x, y)
plt.plot(x, yfit);
"""
Explanation: The derived feature matrix has one column representing $x$, a second column representing $x^2$, and a third column representing $x^3$.
Computing a linear regression on this expanded input gives a much closer fit to our data:
End of explanation
"""
from numpy import nan
X = np.array([[ nan, 0, 3 ],
[ 3, 7, 9 ],
[ 3, 5, 2 ],
[ 4, nan, 6 ],
[ 8, 8, 1 ]])
y = np.array([14, 16, -1, 8, -5])
"""
Explanation: This idea of improving a model not by changing the model, but by transforming the inputs, is fundamental to many of the more powerful machine learning methods.
We explore this idea further in In Depth: Linear Regression in the context of basis function regression.
More generally, this is one motivational path to the powerful set of techniques known as kernel methods, which we will explore in In-Depth: Support Vector Machines.
Imputation of Missing Data
Another common need in feature engineering is handling of missing data.
Handling Missing Data
NaN value is used to mark missing values.
For example, we might have a dataset that looks like this:
End of explanation
"""
from sklearn.preprocessing import Imputer
imp = Imputer(strategy='mean')
X2 = imp.fit_transform(X)
X2
"""
Explanation: When applying a typical machine learning model to such data, we will need to first replace such missing data with some appropriate fill value.
This is known as imputation of missing values
- simple method, e.g., replacing missing values with the mean of the column
- sophisticated method, e.g., using matrix completion or a robust model to handle such data
- It tends to be very application-specific, and we won't dive into them here.
For a baseline imputation approach, using the mean, median, or most frequent value, Scikit-Learn provides the Imputer class:
End of explanation
"""
model = LinearRegression().fit(X2, y)
model.predict(X2)
"""
Explanation: We see that in the resulting data, the two missing values have been replaced with the mean of the remaining values in the column.
This imputed data can then be fed directly into, for example, a LinearRegression estimator:
End of explanation
"""
from sklearn.pipeline import make_pipeline
model = make_pipeline(Imputer(strategy='mean'),
PolynomialFeatures(degree=2),
LinearRegression())
"""
Explanation: Feature Pipelines
With any of the preceding examples, it can quickly become tedious to do the transformations by hand, especially if you wish to string together multiple steps.
For example, we might want a processing pipeline that looks something like this:
Impute missing values using the mean
Transform features to quadratic
Fit a linear regression
To streamline this type of processing pipeline, Scikit-Learn provides a Pipeline object, which can be used as follows:
End of explanation
"""
model.fit(X, y) # X with missing values, from above
print(y)
print(model.predict(X))
"""
Explanation: This pipeline looks and acts like a standard Scikit-Learn object, and will apply all the specified steps to any input data.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/08_image/mnist_linear.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import numpy as np
import shutil
import os
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
print(tf.__version__)
"""
Explanation: MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on MNIST using tf.keras.
<hr/>
This <a href="mnist_models.ipynb">companion notebook</a> extends the basic harness of this notebook to a variety of models including DNN, CNN, dropout, pooling etc.
End of explanation
"""
HEIGHT = 28
WIDTH = 28
NCLASSES = 10
# Get mnist data
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Scale our features between 0 and 1
x_train, x_test = x_train / 255.0, x_test / 255.0
# Convert labels to categorical one-hot encoding
y_train = tf.keras.utils.to_categorical(y = y_train, num_classes = NCLASSES)
y_test = tf.keras.utils.to_categorical(y = y_test, num_classes = NCLASSES)
print("x_train.shape = {}".format(x_train.shape))
print("y_train.shape = {}".format(y_train.shape))
print("x_test.shape = {}".format(x_test.shape))
print("y_test.shape = {}".format(y_test.shape))
import matplotlib.pyplot as plt
IMGNO = 12
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
"""
Explanation: Exploring the data
Let's download MNIST data and examine the shape. We will need these numbers ...
End of explanation
"""
# Build Keras Model Using Keras Sequential API
def linear_model():
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape = [HEIGHT, WIDTH], name = "image"))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units = NCLASSES, activation = tf.nn.softmax, name = "probabilities"))
return model
"""
Explanation: Define the model.
Let's start with a very simple linear classifier. All our models will have this basic interface -- they will take an image and return probabilities.
End of explanation
"""
# Create training input function
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x = {"image": x_train},
y = y_train,
batch_size = 100,
num_epochs = None,
shuffle = True,
queue_capacity = 5000
)
# Create evaluation input function
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x = {"image": x_test},
y = y_test,
batch_size = 100,
num_epochs = 1,
shuffle = False,
queue_capacity = 5000
)
# Create serving input function for inference
def serving_input_fn():
placeholders = {"image": tf.placeholder(dtype = tf.float32, shape = [None, HEIGHT, WIDTH])}
features = placeholders # as-is
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = placeholders)
"""
Explanation: Write Input Functions
As usual, we need to specify input functions for training, evaluation, and prediction.
End of explanation
"""
def train_and_evaluate(output_dir, hparams):
# Build Keras model
model = linear_model()
# Compile Keras model with optimizer, loss function, and eval metrics
model.compile(
optimizer = "adam",
loss = "categorical_crossentropy",
metrics = ["accuracy"])
# Convert Keras model to an Estimator
estimator = tf.keras.estimator.model_to_estimator(
keras_model = model,
model_dir = output_dir)
# Set estimator's train_spec to use train_input_fn and train for so many steps
train_spec = tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = hparams["train_steps"])
# Create exporter that uses serving_input_fn to create saved_model for serving
exporter = tf.estimator.LatestExporter(
name = "exporter",
serving_input_receiver_fn = serving_input_fn)
# Set estimator's eval_spec to use eval_input_fn and export saved_model
eval_spec = tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
exporters = exporter)
# Run train_and_evaluate loop
tf.estimator.train_and_evaluate(
estimator = estimator,
train_spec = train_spec,
eval_spec = eval_spec)
"""
Explanation: Create train_and_evaluate function
tf.estimator.train_and_evaluate does distributed training.
End of explanation
"""
OUTDIR = "mnist/learned"
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
hparams = {"train_steps": 1000, "learning_rate": 0.01}
train_and_evaluate(OUTDIR, hparams)
"""
Explanation: This is the main() function
End of explanation
"""
|
moranconnorj/code_guild | wk0/notebooks/challenges/primes/primes_challenge.ipynb | mit | from math import sqrt, floor
def list_primes(n):
"""
    Return a list of prime numbers.

    Takes an int and returns the primes up to and including that int.

    Parameters
    ----------
    n : int

    Returns
    -------
    prime : list
        A list of prime integers.
"""
prime = []
for i in range(2, n + 1):
for j in range(2, floor(sqrt(i) + 1)):
if i % j == 0:
break
else:
prime.append(i)
return prime
list_primes(11)
from math import sqrt, floor

# Scratch cell: print the (i, j) divisor candidates that the primality
# check above iterates over, without shadowing the built-in `list`.
def show_divisor_candidates(n):
    for i in range(2, n + 1):
        for j in range(2, floor(sqrt(i) + 1)):
            print(i, j)
show_divisor_candidates(15)
"""
Explanation: <small><i>This notebook was prepared by Thunder Shiviah. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement list_primes(n), which returns a list of primes up to n (inclusive).
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Does list_primes do anything else?
No
Test Cases
list_primes(1) -> [] # 1 is not prime.
list_primes(2) -> [2]
list_primes(12) -> [2, 3, 5, 7, 11]
Algorithm
Primes are numbers which are only divisible by 1 and themselves.
5 is a prime since it can only be divided by itself and 1.
9 is not a prime since it can be divided by 3 (3*3 = 9).
1 is not a prime for reasons that only mathematicians care about.
To check if a number is prime, we can implement a basic algorithm, namely: check if a given number can be divided by any numbers smaller than the given number (note: you really only need to test numbers up to the square root of a given number, but it doesn't really matter for this assignment).
Code
End of explanation
"""
# %load test_list_primes.py
from nose.tools import assert_equal
class Test_list_primes(object):
def test_list_primes(self):
assert_equal(list_primes(1), [])
assert_equal(list_primes(2), [2])
assert_equal(list_primes(7), [2, 3, 5, 7])
assert_equal(list_primes(9), list_primes(7))
print('Success: test_list_primes')
def main():
test = Test_list_primes()
test.test_list_primes()
if __name__ == '__main__':
main()
"""
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
"""
|
bert9bert/statsmodels | examples/notebooks/markov_regression.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
"""
Explanation: Markov switching dynamic regression models
This notebook provides an example of the use of Markov switching models in Statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
"""
# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds
dta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
# Plot the data
dta_fedfunds.plot(title='Federal funds rate', figsize=(12,3))
# Fit the model
# (a switching mean is the default of the MarkovRegession model)
mod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)
res_fedfunds = mod_fedfunds.fit()
res_fedfunds.summary()
"""
Explanation: Federal funds rate with switching intercept
The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply:
$$r_t = \mu_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \sigma^2$.
The data used in this example can be found at http://www.stata-press.com/data/r14/usmacro.
End of explanation
"""
res_fedfunds.smoothed_marginal_probabilities[1].plot(
title='Probability of being in the high regime', figsize=(12,3));
"""
Explanation: From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$ whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980s were a period in which a high federal funds rate prevailed.
End of explanation
"""
print(res_fedfunds.expected_durations)
"""
Explanation: From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.
End of explanation
"""
# Fit the model
mod_fedfunds2 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])
res_fedfunds2 = mod_fedfunds2.fit()
res_fedfunds2.summary()
"""
Explanation: A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.
Federal funds rate with switching intercept and lagged dependent variable
The second example augments the previous model to include the lagged value of the federal funds rate.
$$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma^2$.
End of explanation
"""
res_fedfunds2.smoothed_marginal_probabilities[0].plot(
title='Probability of being in the high regime', figsize=(12,3));
"""
Explanation: There are several things to notice from the summary output:
The information criteria have decreased substantially, indicating that this model has a better fit than the previous model.
The interpretation of the regimes, in terms of the intercept, has switched. Now the first regime has the higher intercept and the second regime has the lower intercept.
Examining the smoothed probabilities of the high regime state, we now see quite a bit more variability.
End of explanation
"""
print(res_fedfunds2.expected_durations)
"""
Explanation: Finally, the expected durations of each regime have decreased quite a bit.
End of explanation
"""
# Get the additional data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf
dta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
dta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
exog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]
# Fit the 2-regime model
mod_fedfunds3 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)
res_fedfunds3 = mod_fedfunds3.fit()
# Fit the 3-regime model
np.random.seed(12345)
mod_fedfunds4 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)
res_fedfunds4 = mod_fedfunds4.fit(search_reps=20)
res_fedfunds3.summary()
res_fedfunds4.summary()
"""
Explanation: Taylor rule with 2 or 3 regimes
We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better.
Because the models can be often difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.
End of explanation
"""
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-interest rate regime')
ax = axes[1]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-interest rate regime')
ax = axes[2]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-interest rate regime')
fig.tight_layout()
"""
Explanation: Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.
End of explanation
"""
# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns
dta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))
# Plot the data
dta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3))
# Fit the model
mod_areturns = sm.tsa.MarkovRegression(
dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1], switching_variance=True)
res_areturns = mod_areturns.fit()
res_areturns.summary()
"""
Explanation: Switching variances
We can also accomodate switching variances. In particular, we consider the model
$$
y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2)
$$
We use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma_0^2, \sigma_1^2$.
The application is to absolute returns on stocks, where the data can be found at http://www.stata-press.com/data/r14/snp500.
End of explanation
"""
res_areturns.smoothed_marginal_probabilities[0].plot(
title='Probability of being in a low-variance regime', figsize=(12,3));
"""
Explanation: The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy.
End of explanation
"""
|
akinorihomma/Computer-Science-2 | ニューラルネット入門_補足資料.ipynb | unlicense | np.array?
"""
Explanation: Python Basics
Help techniques in iPython (or Jupyter)
If you are reading this you are presumably using a Jupyter notebook, so try running the following code.
End of explanation
"""
def hogehoge():
""" docstring here! """
return 0
hogehoge?
"""
Explanation: In iPython, appending ? to a name and running the cell lets you easily view its docstring (the documentation string for that piece of code).
Try it when you know the function but cannot remember its arguments.
For reference, a docstring in a Python script is written like this:
Python
def hogehoge():
""" docstring here! """
return 0
as shown above. (As a test, run the next cell as well.)
End of explanation
"""
class memory_sum:
c = None
def __init__(self, a):
self.a = a
print("run __init__ ")
def __call__(self, b):
self.c = b + self.a
print("__call__\t:", b, "+", self.a, "=", self.c)
def show_sum(self):
print("showsum()\t:", self.c)
"""
Explanation: A First Object-Oriented Language
Python is an object-oriented language, and this knowledge is essential for using its libraries, so here is a quick review.
Put simply, a class is "an evolution of the C struct: it can hold not only variables but also functions, and it supports inheritance."
As an example, the following class implements a calculator with memory.
End of explanation
"""
A = memory_sum(15)
"""
Explanation: A class is a kind of template, so we create a concrete instance of it.
At that point the constructor, the instance-initialization function def __init__(), is executed.
End of explanation
"""
A(30)
"""
Explanation: When the instance itself is called, rather than one of its named methods, def __call__() is executed.
End of explanation
"""
A.show_sum()
"""
Explanation: Of course, you can also call its methods explicitly.
End of explanation
"""
A.c
"""
Explanation: You can also access a variable inside the instance directly.
End of explanation
"""
class sum_sub(memory_sum):
def sum(self, a, b):
self.a = a
self.b = b
self.c = a + b
print(self.c)
def sub(self, a, b):
self.a = a
self.b = b
self.c = a - b
print(self.c)
def show_result(self):
print(self.c)
B = sum_sub(30)
B.sum(30, 10)
B.sub(30, 10)
B.show_result()
"""
Explanation: Class inheritance means building on a class that has already been defined.
As an example, here is memory_sum extended by inheritance with a subtraction feature.
End of explanation
"""
B.show_sum()
"""
Explanation: Because sum_sub inherits from memory_sum, the functions defined in memory_sum can also be used.
End of explanation
"""
arr = np.arange(-10, 10, 0.1)
arr1 = F.relu(arr, use_cudnn=False)
plt.plot(arr, arr1.data)
"""
Explanation: With just this much, you should be able to read Chainer code reasonably well.
If object orientation has piqued your interest, the following are recommended:
the official Python tutorial,
Akira Hirasawa, オブジェクト指向でなぜつくるのか (Why Build with Object Orientation?),
Kiyotaka Nakayama and Daigo Kunimoto, スッキリわかるJava入門 (An Introduction to Java)
(the tie between object orientation and Java is especially strong, so do read them even though the language differs).
Chainer
Checking the activation functions
chainer.functions defines basic building blocks such as activation functions and loss functions.
ReLU (ramp function)
The activation function commonly used for hidden layers today.
It simply returns $0$ when the input is at most $0$, and passes the input through unchanged when it is greater than $0$.
$$ ReLU(x)=\max(0, x) $$
If you wondered "is that even mathematically differentiable?", good catch: it is not differentiable at $0$ in the strict sense, but the derivative is (apparently) defined as follows.
$$ \frac{d ReLU(x)}{dx} = (if ~ 0 \leq x)~ 1,~ (else)~ 0 $$
End of explanation
"""
arr = np.arange(-10, 10, 0.1)
arr2 = F.sigmoid(arr, use_cudnn=False)
plt.plot(arr, arr2.data)
"""
Explanation: The Sigmoid function
The sigmoid function everyone knows and loves:
$$ sigmoid(x)= \frac{1}{1 + \exp(-x)} $$
End of explanation
"""
arr = chainer.Variable(np.array([[-5.0, 0.5, 6.0, 10.0]], dtype=np.float32))
plt.plot(F.softmax(arr).data[0])
print("softmax適用後の値: ", F.softmax(arr).data[0])
print("総和: ", sum(F.softmax(arr).data[0]))
"""
Explanation: The softmax Function
softmax, also called the normalized exponential function, converts a set of values into probabilities.
$$ softmax(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} $$
Combined with cross-entropy as the loss function, it enables multi-class classification.
In the chainer.links.Classifier() implementation, the default loss function is softmax_cross_entropy.
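A plain-Python softmax sketch, using the standard max-subtraction trick for numerical stability (subtracting a constant from every input leaves the result unchanged but prevents exp() overflow):

```python
import math

def softmax(xs):
    # subtract the max for numerical stability, then exponentiate and normalize
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([-5.0, 0.5, 6.0, 10.0])
print(probs)       # the largest input gets the largest probability
print(sum(probs))  # the probabilities sum to 1
```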
End of explanation
"""
x1 = chainer.Variable(np.array([1]).astype(np.float32))
x2 = chainer.Variable(np.array([2]).astype(np.float32))
x3 = chainer.Variable(np.array([3]).astype(np.float32))
"""
Explanation: About the Variable Class
In Chainer, instead of using plain arrays, numpy arrays, or cupy arrays directly, you use a class called Variable
(since chainer 1.1, inputs are reportedly wrapped in the Variable class automatically).
The Variable class makes things like data access and gradient computation easy.
Forward propagation
End of explanation
"""
y = (x1 - 2 * x2 - 1)**2 + (x2 * x3 - 1)**2 + 1
y.data
"""
Explanation: As a trial, let's compute the expression below (the forward computation).
$$ y = (x_1 - 2 x_2 - 1)^2 + (x_2 x_3 - 1)^2 + 1 $$
Substituting each parameter gives
$$ y = (1 - 2 \times 2 - 1)^2 + (2 \times 3 - 1)^2 + 1 = (-4)^2 + 5^2 + 1 = 42$$
End of explanation
"""
y.backward()
"""
Explanation: Backpropagation
Now let's compute the derivatives of y (the backward computation).
End of explanation
"""
x1.grad
"""
Explanation: $$ \frac{\partial y}{\partial x_1} = 2(x_1 - 2 x_2 - 1) = 2(1 - 2 \times 2 - 1) = -8$$
End of explanation
"""
x2.grad
"""
Explanation: $$ \frac{\partial y}{\partial x_2} = -4 (x_1 - 2 x_2 - 1) + 2 x_3 ( x_2 x_3 - 1) = -4 (1 - 2 \times 2 - 1) + 2 \times 3 ( 2 \times 3 - 1) = 46 $$
End of explanation
"""
x3.grad
"""
Explanation: $$ \frac{\partial y}{\partial x_3} = 2 x_2 ( x_2 x_3 - 1) = 2 \times 2 (2 \times 3 - 1) = 20$$
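These hand-derived gradients can be sanity-checked numerically with a central finite difference — a plain-Python sketch, independent of Chainer's automatic differentiation:

```python
def y(x1, x2, x3):
    # the same expression computed in the forward pass above
    return (x1 - 2 * x2 - 1) ** 2 + (x2 * x3 - 1) ** 2 + 1

def numeric_grad(f, args, i, h=1e-6):
    # central-difference approximation of the partial derivative w.r.t. args[i]
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

point = (1.0, 2.0, 3.0)
print([round(numeric_grad(y, point, i), 3) for i in range(3)])  # ≈ [-8.0, 46.0, 20.0]
```

The numerical estimates agree with the analytic values -8, 46, and 20 computed above.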
End of explanation
"""
l = L.Linear(2, 3)
l.W.data
l.b.data
x = chainer.Variable(np.array(range(4)).astype(np.float32).reshape(2,2))
y = l(x)
y.data
"""
Explanation: About the links Class
chainer.links can be thought of as something like a subset built on top of chainer.Variable.
In a neural network, the function that transforms data from one layer to the next (a linear operator) can be expressed as
$$ \boldsymbol{y} = W \boldsymbol{x} + \boldsymbol{b} $$
and in Chainer it is written as follows.
End of explanation
"""
x.data.dot(l.W.data.T) + l.b.data # the bias is 0, so the result is the same with or without it
"""
Explanation: $$ \boldsymbol{y} = W \boldsymbol{x} + \boldsymbol{b} $$
Substituting into the expression above confirms that the results are identical.
End of explanation
"""
train, test = chainer.datasets.get_mnist()
"""
Explanation: Loading the Dataset
End of explanation
"""
type(train)
"""
Explanation: Check the types of train and test.
End of explanation
"""
print(len(train[0][0]))
print(type(train[0][0]))
print(train[0][1])
print(type(train[0][1]))
"""
Explanation: This shows that the type is chainer.datasets.tuple_dataset.TupleDataset
(in fact, this is obvious if you look at chainer.datasets.get_mnist()).
Now, what about the contents of train?
End of explanation
"""
plt.imshow(train[0][0].reshape(28,28))
plt.gray() # use a grayscale colormap
plt.grid()
train_iter = chainer.iterators.SerialIterator(train, 100)
np.shape(train_iter.dataset)
"""
Explanation: Each entry is simply an image and its ground-truth label paired together.
So the image data can also be displayed as an image by applying reshape(28, 28)
(note, however, that the pixel values are 0.0 - 1.0 rather than 0 - 255).
Also, if you call chainer.datasets.get_mnist(ndim=2), the reshape becomes unnecessary.
End of explanation
"""
log = pd.read_json('./result/log')
"""
Explanation: Processing the Output Files
```Python
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
trainer.extend(extensions.dump_graph('main/loss'))
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
```
This section briefly introduces how to process the **log** and **cg.dot** files produced by the code above.
log
The log file is actually just a JSON file, but for those unfamiliar with the format it is somewhat tedious to handle.
Here we use pandas to process the data and turn it into a graph.
Assuming import pandas as pd has already been run and the target log exists at ./result/log,
End of explanation
"""
log
"""
Explanation: that is all it takes to load the JSON file.
Next, looking at the log table,
End of explanation
"""
epoch = log['epoch']
plt.plot(epoch, log['main/accuracy'])
plt.plot(epoch, log['validation/main/accuracy'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc='best')
"""
Explanation: ```Python
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
trainer.extend(extensions.PrintReport(
    ['epoch', 'main/loss', 'validation/main/loss',
     'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
```
This reproduces the same entries that the code above printed to the console during training.
Since the raw values alone are hard to interpret, let's graph the training and test progress.
End of explanation
"""
import numpy
vectorA = numpy.array([1,2,3,4,5,6,7])
vectorA
vectorB = vectorA[::-1].copy()
vectorB
vectorB[0]=123
vectorB
vectorA
vectorB = vectorA[::-1].copy()
vectorB
"""
Explanation: <center><font color=red>Learning SciPy for Numerical and Scientific Computing</font></center>
Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).
<b><font color='red'>
NOTE: This IPython notebook should be read alongside the corresponding chapter in the book, where each piece of code is fully explained.
</font></b>
<br>
<center>Chapter 3. SciPy for Linear Algebra</center>
Summary
This chapter explores the treatment of matrices (whether normal or sparse) with the modules on linear algebra – linalg and sparse.linalg, which expand and improve the NumPy module with the same name.
References
Linear Algebra (scipy.linalg)<br>
http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html
<br>
<br>
Sparse Eigenvalue Problems with ARPACK<br>
http://docs.scipy.org/doc/scipy/reference/tutorial/arpack.html
<br>
<br>
Vector creation
End of explanation
"""
vectorC = vectorA + vectorB
vectorC
vectorD = vectorB - vectorA
vectorD
"""
Explanation: Vector Operations
Addition/subtraction
End of explanation
"""
dotProduct1 = numpy.dot(vectorA,vectorB)
dotProduct1
dotProduct2 = (vectorA*vectorB).sum()
dotProduct2
"""
Explanation: Scalar/Dot product
End of explanation
"""
vectorA = numpy.array([5, 6, 7])
vectorA
vectorB = numpy.array([7, 6, 5])
vectorB
crossProduct = numpy.cross(vectorA,vectorB)
crossProduct
crossProduct = numpy.cross(vectorB,vectorA)
crossProduct
"""
Explanation: Cross/vectorial product (on 3 dimensional space vectors)
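numpy.cross on 3-vectors applies the standard component formula; a pure-Python sketch using the same vectors as the cell above:

```python
def cross(a, b):
    # (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

print(cross((5, 6, 7), (7, 6, 5)))  # (-12, 24, -12)
print(cross((7, 6, 5), (5, 6, 7)))  # reversing the order flips every sign
```

This matches the numpy.cross output above and illustrates the anticommutativity of the cross product.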
End of explanation
"""
import numpy
A=numpy.matrix("1,2,3;4,5,6")
print(A)
A=numpy.matrix([[1,2,3],[4,5,6]])
print(A)
"""
Explanation: Matrix creation
End of explanation
"""
A=numpy.matrix([ [0,10,0,0,0], [0,0,20,0,0], [0,0,0,30,0],
[0,0,0,0,40], [0,0,0,0,0] ])
A
A[0,1],A[1,2],A[2,3],A[3,4]
rows=numpy.array([0,1,2,3])
cols=numpy.array([1,2,3,4])
vals=numpy.array([10,20,30,40])
import scipy.sparse
A=scipy.sparse.coo_matrix( (vals,(rows,cols)) )
print(A)
print(A.todense())
scipy.sparse.isspmatrix_coo(A)
B=numpy.mat(numpy.ones((3,3)))
W=numpy.mat(numpy.zeros((3,3)))
print(numpy.bmat('B,W;W,B'))
a=numpy.array([[1,2],[3,4]])
a
a*a
"""
Explanation: <font color=red><b> Please refer to the corresponding section of the book for the meaning of the following matrix </b></font>
$$ \boxed{ \begin{pmatrix} 0 & 10 & 0 & 0 & 0 \\ 0 & 0 & 20 & 0 & 0 \\ 0 & 0 & 0 & 30 & 0 \\ 0 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} }$$
End of explanation
"""
A=numpy.mat(a)
A
A*A
numpy.dot(A,A)
b=numpy.array([[1,2,3],[3,4,5]])
numpy.dot(a,b)
numpy.multiply(A,A)
a=numpy.arange(5); A=numpy.mat(a)
a.shape, A.shape, a.transpose().shape, A.transpose().shape
import scipy.linalg
A=scipy.linalg.hadamard(8)
zero_sum_rows = (numpy.sum(A,0)==0)
B=A[zero_sum_rows,:]
print(B[0:3,:])
"""
Explanation: <font color=red><b> Please refer to the corresponding section of the book for the meaning of the following matrix product </b></font>
$$ \boxed{ \begin{align} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} & \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} \end{align} }$$
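The product can be verified by hand with a minimal pure-Python matrix multiply (a sketch for small nested-list matrices, not how NumPy implements it):

```python
def matmul(A, B):
    # row-by-column dot products for small nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

print(matmul([[1, 2], [3, 4]], [[1, 2], [3, 4]]))  # [[7, 10], [15, 22]]
```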
End of explanation
"""
import numpy
A = numpy.matrix("1+1j, 2-1j; 3-1j, 4+1j")
print (A)
print (A.T)
print (A.H)
"""
Explanation: Matrix methods
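The distinction demonstrated in the cell above — .T is the plain transpose, .H is the conjugate (Hermitian) transpose — can be sketched without NumPy using Python's built-in complex numbers:

```python
def transpose(M):
    # swap rows and columns; entries keep their imaginary parts
    return [list(row) for row in zip(*M)]

def conj_transpose(M):
    # transpose, then conjugate each entry (the Hermitian adjoint, like .H)
    return [[z.conjugate() for z in row] for row in zip(*M)]

M = [[1 + 1j, 2 - 1j], [3 - 1j, 4 + 1j]]
print(transpose(M))       # imaginary parts unchanged
print(conj_transpose(M))  # imaginary parts flip sign as well
```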
End of explanation
"""
mu=1/numpy.sqrt(2)
A=numpy.matrix([[mu,0,mu],[0,1,0],[mu,0,-mu]])
B=scipy.linalg.kron(A,A)
print (B[:,0:-1:2])
"""
Explanation: Operations between matrices
<font color=red><b> Please refer to the corresponding section of the book for the meaning
of the following basis vectors </b></font>
$$ \boxed{ \begin{align} v_{1} & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1,0,1 \end{pmatrix} \\ v_{2} & = \begin{pmatrix} 0,1,0 \end{pmatrix} \\ v_{3} & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1,0,-1 \end{pmatrix} \end{align} }$$
End of explanation
"""
A=numpy.matrix("1,1j;21,3")
print (A**2);
print (numpy.asarray(A)**2)
a=numpy.arange(0,2*numpy.pi,1.6)
A = scipy.linalg.toeplitz(a)
print (A)
print (numpy.exp(A))
print (scipy.linalg.expm(A))
x=10**100; y=9; v=numpy.matrix([x,y])
scipy.linalg.norm(v,2) # the right method
"""
Explanation: Functions on matrices
End of explanation
"""
numpy.sqrt(x*x+y*y) # the wrong method
"""
Explanation: <font color=red>As mentioned in the book, the following command will generate an error from the python computational engine </font>
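Beyond scipy.linalg.norm, the standard library's math.hypot also sidesteps this overflow by rescaling its arguments internally — a small sketch with float inputs (not the book's method):

```python
import math

x, y = 1e200, 9.0
try:
    naive = math.sqrt(x * x + y * y)  # x*x exceeds the double-precision range
except OverflowError:
    naive = math.inf
print(naive)              # the magnitude information is lost
print(math.hypot(x, y))   # hypot rescales internally and returns 1e200
```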
End of explanation
"""
%matplotlib inline
import numpy
import scipy.misc
from scipy.linalg import svd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12.0, 8.0)
img=scipy.misc.lena()  # note: lena() was removed from newer SciPy releases; ascent() is a comparable test image
U,s,Vh=svd(img) # Singular Value Decomposition
A = numpy.dot( U[:,0:32], # use only 32 singular values
numpy.dot( numpy.diag(s[0:32]),
Vh[0:32,:]))
plt.subplot(121,aspect='equal');
plt.gray()
plt.imshow(img)
plt.subplot(122,aspect='equal');
plt.imshow(A)
plt.show()
"""
Explanation: Eigenvalue problems and matrix decompositions
This section refers the reader to the SciPy documentation related to eigenvalues problems and matrix decomposition
http://docs.scipy.org/doc/scipy/reference/linalg.html
http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvals.html
Image compression via the singular value decomposition
<font color=red><b> Please refer to the corresponding section of the book for the meaning of the following two equations </b></font>
$$ \boxed{ \begin{align} A = U \cdot S \cdot V^{*}, & \quad U = \begin{pmatrix} u_{1} \\ \vdots \\ u_{n} \end{pmatrix}, & S = \begin{pmatrix} s_{1} & & \\ & \ddots & \\ & & s_{n} \end{pmatrix}, & \quad V^{*} = \begin{pmatrix} v_{1} \quad \cdots \quad v_{n} \end{pmatrix} \end{align} }$$
$$ \begin{equation} \boxed{ \sum_{j=1}^{k} s_{j}(u_{j} \cdot v_{j}) } \end{equation} $$
End of explanation
"""
A=numpy.mat(numpy.eye(3,k=1))
print(A)
b=numpy.mat(numpy.arange(3) + 1).T
print(b)
xinfo=scipy.linalg.lstsq(A,b)
print (xinfo[0].T) # output the solution
"""
Explanation: Solvers
<font color=red><b> Please refer to the corresponding section of the book for the meaning of the following equation </b></font>
$$ \boxed{ \begin{align} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} & \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \end{align} }$$
End of explanation
"""
import synimagegen
import matplotlib.pyplot as plt
import numpy as np
import os
%matplotlib inline
"""
Explanation: Synthetic Image Generation examples
In this Notebook a number of test cases using the PIV synthetic image generator will be presented.
The three examples shown are:
1. Using a fully synthetic flow field created by a random equation
2. Using previously obtained experimental data
3. Using flow simulation results
In each of these cases the pattern for generating the synthetic images is the same. The part that mostly varies is the data passed.
Data is either synthetically generated or passed to the create_synimage_parameters function.
The parameters for both images (frame_a, frame_b) are created.
The parameters are passed to the generate_particle_image function in order to create the image representation.
Finally the images are shown on the screen as grayscale images.
End of explanation
"""
ground_truth,cv,x_1,y_1,U_par,V_par,par_diam1,par_int1,x_2,y_2,par_diam2,par_int2 = synimagegen.create_synimage_parameters(None,[0,1],[0,1],[256,256],dt=0.0025)
frame_a = synimagegen.generate_particle_image(256, 256, x_1, y_1, par_diam1, par_int1,16)
frame_b = synimagegen.generate_particle_image(256, 256, x_2, y_2, par_diam2, par_int2,16)
fig = plt.figure(figsize=(20,10))
a = fig.add_subplot(1, 2, 1,)
imgplot = plt.imshow(frame_a, cmap='gray')
a.set_title('frame_a')
a = fig.add_subplot(1, 2, 2)
imgplot = plt.imshow(frame_b, cmap='gray')
a.set_title('frame_b')
"""
Explanation: Example 1: synthetic flow field created by a random equation
In this case no data is passed to the function, so a random equation is invoked from the cff module. (line 46,52 in synimagegen.py)
This equation defines the velocities U,V for each point in the X,Y plane.
This equation is meant to be changed to suit the testing needs of each user's system.
End of explanation
"""
data = np.load('PIV_experiment_data.npz')
data = np.stack([data['X'], data['Y'],data['U'] ,data['V']], axis=2)
ground_truth,cv,x_1,y_1,U_par,V_par,par_diam1,par_int1,x_2,y_2,par_diam2,par_int2 = synimagegen.create_synimage_parameters(data,[0,1],[0,1],[256,256],inter=True,dt=0.0025)
frame_a = synimagegen.generate_particle_image(256, 256, x_1, y_1, par_diam1, par_int1,16)
frame_b = synimagegen.generate_particle_image(256, 256, x_2, y_2, par_diam2, par_int2,16)
fig = plt.figure(figsize=(20,10))
a = fig.add_subplot(1, 2, 1)
imgplot = plt.imshow(frame_a, cmap='gray')
a.set_title('frame_a')
a = fig.add_subplot(1, 2, 2)
imgplot = plt.imshow(frame_b, cmap='gray')
a.set_title('frame_b')
"""
Explanation: Example 2: previously obtained experimental data
In this case experimental data is passed to the function, and the interpolation flag is enabled. The data is used to create a continuous flow field by interpolation, and that field is then used to create the parameters.
End of explanation
"""
path_to_file = os.getcwd() + '/velocity_report.txt'
ground_truth,cv,x_1,y_1,U_par,V_par,par_diam1,par_int1,x_2,y_2,par_diam2,par_int2 = synimagegen.create_synimage_parameters(None,[0,1],[0,1],[256,256],path=path_to_file,inter=True,dt=0.0025)
frame_a = synimagegen.generate_particle_image(256, 256, x_1, y_1, par_diam1, par_int1,16)
frame_b = synimagegen.generate_particle_image(256, 256, x_2, y_2, par_diam2, par_int2,16)
fig = plt.figure(figsize=(20,10))
a = fig.add_subplot(1, 2, 1)
imgplot = plt.imshow(frame_a, cmap='gray')
a.set_title('frame_a')
a = fig.add_subplot(1, 2, 2)
imgplot = plt.imshow(frame_b, cmap='gray')
a.set_title('frame_b')
"""
Explanation: Example 3: flow simulation results
In this case flow simulation results are passed to the function in the form of a tab-delimited text file.
The file is parsed and the data is used to create a continuous flow field by interpolation.
End of explanation
"""
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
"""
Explanation: sci-analysis
An easy to use and powerful python-based data exploration and analysis tool
Current Version
2.2 --- Released January 5, 2019
What is sci-analysis?
sci-analysis is a python package for quickly performing exploratory data analysis (EDA). It aims to make performing EDA easier for newcomers and experienced data analysts alike by abstracting away the specific SciPy, NumPy, and Matplotlib commands. This is accomplished by using sci-analysis's analyze() function.
The main features of sci-analysis are:
* Fast EDA with the analyze() function.
* Great looking graphs without writing several lines of matplotlib code.
* Automatic use of the most appropriate hypothesis test for the supplied data.
* Automatic handling of missing values.
Currently, sci-analysis is capable of performing four common statistical analysis techniques:
* Histograms and summary of numeric data
* Histograms and frequency of categorical data
* Bivariate and linear regression analysis
* Location testing
What's new in sci-analysis version 2.2?
Version 2.2 adds the ability to add data labels to scatter plots.
The default behavior of the histogram and statistics was changed from assuming a sample, to assuming a population.
Fixed a bug involving the Mann Whitney U test, where the minimum size was set incorrectly.
Verified compatibility with python 3.7.
Getting started with sci-analysis
sci-analysis requires python 2.7, 3.5, 3.6, or 3.7.
If one of these four version of python is already installed then this section can be skipped.
If you use MacOS or Linux, python should already be installed. You can check by opening a terminal window and typing which python on the command line. To verify what version of python you have installed, type python --version at the command line. If the version is 2.7.x, 3.5.x, 3.6.x, or 3.7.x where x is any number, sci-analysis should work properly.
Note: It is not recommended to use sci-analysis with the system installed python. This is because the version of python that comes with your OS will require root permission to manage, might be changed when upgrading the OS, and can break your OS if critical packages are accidentally removed. More info on why the system python should not be used can be found here: https://github.com/MacPython/wiki/wiki/Which-Python
If you are on Windows, you might need to install python. You can check to see if python is installed by clicking the Start button, typing cmd in the run text box, then type python.exe on the command line. If you receive an error message, you need to install python.
The easiest way to install python on any OS is by installing Anaconda or Mini-conda from this page:
https://www.continuum.io/downloads
If you are on MacOS and have GCC installed, python can be installed with homebrew using the command:
brew install python
If you are on Linux, python can be installed with pyenv using the instructions here:
https://github.com/pyenv/pyenv
If you are on Windows, you can download the python binary from the following page, but be warned that compiling the required packages will be required using this method:
https://www.python.org/downloads/windows/
Installing sci-analysis
sci-analysis can be installed with pip by typing the following:
pip install sci-analysis
On Linux, you can install pip from your OS package manager. If you have Anaconda or Mini-conda, pip should already be installed. Otherwise, you can download pip from the following page:
https://pypi.python.org/pypi/pip
sci-analysis works best in conjunction with the excellent pandas and jupyter notebook python packages. If you don't have either of these packages installed, you can install them by typing the following:
pip install pandas
pip install jupyter
Using sci-analysis
From the python interpreter or in the first cell of a Jupyter notebook, type:
End of explanation
"""
%matplotlib inline
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
"""
Explanation: This will tell python to import the sci-analysis function analyze().
Note: Alternatively, the function analyse() can be imported instead, as it is an alias for analyze(). For the case of this documentation, analyze() will be used for consistency.
If you are using sci-analysis in a Jupyter notebook, you need to use the following code instead to enable inline plots:
End of explanation
"""
np.random.seed(987654321)
data = st.norm.rvs(size=1000)
analyze(xdata=data)
"""
Explanation: Now, sci-analysis should be ready to use. Try the following code:
End of explanation
"""
pets = ['dog', 'cat', 'rat', 'cat', 'rabbit', 'dog', 'hamster', 'cat', 'rabbit', 'dog', 'dog']
analyze(pets)
"""
Explanation: A histogram, box plot, summary stats, and test for normality of the data should appear above.
Note: numpy and scipy.stats were only imported for the purpose of the above example. sci-analysis uses numpy and scipy internally, so it isn't necessary to import them unless you want to explicitly use them.
A histogram and statistics for categorical data can be performed with the following command:
End of explanation
"""
from inspect import signature
print(analyze.__name__, signature(analyze))
print(analyze.__doc__)
"""
Explanation: Let's examine the analyze() function in more detail. Here's the signature for the analyze() function:
End of explanation
"""
example1 = [0.2, 0.25, 0.27, np.nan, 0.32, 0.38, 0.39, np.nan, 0.42, 0.43, 0.47, 0.51, 0.52, 0.56, 0.6]
example2 = [0.23, 0.27, 0.29, np.nan, 0.33, 0.35, 0.39, 0.42, np.nan, 0.46, 0.48, 0.49, np.nan, 0.5, 0.58]
analyze(example1, example2)
"""
Explanation: analyze() will detect the desired type of data analysis to perform based on whether the ydata argument is supplied, and whether the xdata argument is a two-dimensional array-like object.
The xdata and ydata arguments can accept most python array-like objects, with the exception of strings. For example, xdata will accept a python list, tuple, numpy array, or a pandas Series object. Internally, iterable objects are converted to a Vector object, which is a pandas Series of type float64.
Note: A one-dimensional list, tuple, numpy array, or pandas Series object will all be referred to as a vector throughout the documentation.
If only the xdata argument is passed and it is a one-dimensional vector of numeric values, the analysis performed will be a histogram of the vector with basic statistics and Shapiro-Wilk normality test. This is useful for visualizing the distribution of the vector. If only the xdata argument is passed and it is a one-dimensional vector of categorical (string) values, the analysis performed will be a histogram of categories with rank, frequencies and percentages displayed.
If xdata and ydata are supplied and are both equal length one-dimensional vectors of numeric data, an x/y scatter plot with line fit will be graphed and the correlation between the two vectors will be calculated. If there are non-numeric or missing values in either vector, they will be ignored. Only values that are numeric in each vector, at the same index, will be included in the correlation. For example, the following two vectors will yield:
End of explanation
"""
np.random.seed(987654321)
group_a = st.norm.rvs(size=50)
group_b = st.norm.rvs(size=25)
group_c = st.norm.rvs(size=30)
group_d = st.norm.rvs(size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
"""
Explanation: If xdata is a sequence or dictionary of vectors, a location test and summary statistics for each vector will be performed. If each vector is normally distributed and they all have equal variance, a one-way ANOVA is performed. If the data is not normally distributed or the vectors do not have equal variance, a non-parametric Kruskal-Wallis test will be performed instead of a one-way ANOVA.
Note: Vectors should be independent from one another --- that is to say, there shouldn't be values in one vector that are derived from or some how related to a value in another vector. These dependencies can lead to weird and often unpredictable results.
A proper use case for a location test would be if you had a table with measurement data for multiple groups, such as test scores per class, average height per country or measurements per trial run, where the classes, countries, and trials are the groups. In this case, each group should be represented by its own vector, which are then all wrapped in a dictionary or sequence.
If xdata is supplied as a dictionary, the keys are the names of the groups and the values are the array-like objects that represent the vectors. Alternatively, xdata can be a python sequence of the vectors and the groups argument a list of strings of the group names. The order of the group names should match the order of the vectors passed to xdata.
Note: Passing the data for each group into xdata as a sequence or dictionary is often referred to as "unstacked" data. With unstacked data, the values for each group are in their own vector. Alternatively, if values are in one vector and group names in another vector of equal length, this format is referred to as "stacked" data. The analyze() function can handle either stacked or unstacked data depending on which is most convenient.
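As a plain-Python illustration of the two layouts (the scores and group labels here are made up for the sketch), converting stacked data into the unstacked dictionary form takes one comprehension:

```python
# stacked: values and their group labels live in two parallel sequences
scores = [88, 92, 75, 81, 95, 68]
groups = ['A', 'B', 'A', 'B', 'A', 'B']

# unstacked: one sequence of values per group, keyed by group name
unstacked = {g: [v for v, gg in zip(scores, groups) if gg == g]
             for g in dict.fromkeys(groups)}
print(unstacked)  # {'A': [88, 75, 95], 'B': [92, 81, 68]}
```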
For example:
End of explanation
"""
np.random.seed(987654321)
group_a = st.norm.rvs(0.0, 1, size=50)
group_b = st.norm.rvs(0.0, 3, size=25)
group_c = st.norm.rvs(0.1, 1, size=30)
group_d = st.norm.rvs(0.0, 1, size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
"""
Explanation: In the example above, sci-analysis is telling us the four groups are normally distributed (by use of the Bartlett Test, Oneway ANOVA and the near straight line fit on the quantile plot), the groups have equal variance and the groups have matching means. The only significant difference between the four groups is the sample size we specified. Let's try another example, but this time change the variance of group B:
End of explanation
"""
np.random.seed(987654321)
group_a = st.norm.rvs(0.0, 1, size=50)
group_b = st.norm.rvs(0.0, 3, size=25)
group_c = st.weibull_max.rvs(1.2, size=30)
group_d = st.norm.rvs(0.0, 1, size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
"""
Explanation: In the example above, group B has a standard deviation of 2.75 compared to the other groups that are approximately 1. The quantile plot on the right also shows group B has a much steeper slope compared to the other groups, implying a larger variance. Also, the Kruskal-Wallis test was used instead of the Oneway ANOVA because the pre-requisite of equal variance was not met.
In another example, let's compare groups that have different distributions and different means:
End of explanation
"""
import pandas as pd
np.random.seed(987654321)
df = pd.DataFrame(
{
'ID' : np.random.randint(10000, 50000, size=60).astype(str),
'One' : st.norm.rvs(0.0, 1, size=60),
'Two' : st.norm.rvs(0.0, 3, size=60),
'Three' : st.weibull_max.rvs(1.2, size=60),
'Four' : st.norm.rvs(0.0, 1, size=60),
'Month' : ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] * 5,
'Condition' : ['Group A', 'Group B', 'Group C', 'Group D'] * 15
}
)
df
"""
Explanation: The above example models group C as a Weibull distribution, while the other groups are normally distributed. You can see the difference in the distributions by the one-sided tail on the group C boxplot, and the curved shape of group C on the quantile plot. Group C also has significantly the lowest mean as indicated by the Tukey-Kramer circles and the Kruskal-Wallis test.
Using sci-analysis with pandas
Pandas is a python package that simplifies working with tabular or relational data. Because columns and rows of data in a pandas DataFrame are naturally array-like, using pandas with sci-analysis is the preferred way to use sci-analysis.
Let's create a pandas DataFrame to use for analysis:
End of explanation
"""
analyze(
df['One'],
name='Column One',
title='Distribution from pandas'
)
"""
Explanation: This creates a table (pandas DataFrame object) with 6 columns and an index which is the row id. The following command can be used to analyze the distribution of the column titled One:
End of explanation
"""
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
title='Bivariate Analysis between Column One and Column Three'
)
"""
Explanation: Anywhere you use a python list or numpy Array in sci-analysis, you can use a column or row of a pandas DataFrame (known in pandas terms as a Series). This is because a pandas Series has much of the same behavior as a numpy Array, causing sci-analysis to handle a pandas Series as if it were a numpy Array.
By passing two array-like arguments to the analyze() function, the correlation can be determined between the two array-like arguments. The following command can be used to analyze the correlation between columns One and Three:
End of explanation
"""
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
contours=True,
fit=False,
title='Bivariate Analysis between Column One and Column Three'
)
"""
Explanation: Since there isn't a correlation between columns One and Three, it might be useful to see where most of the data is concentrated. This can be done by adding the argument contours=True and turning off the best fit line with fit=False. For example:
End of explanation
"""
analyze(
df['One'],
df['Three'],
labels=df['ID'],
highlight=df[df['Three'] < -2.0]['ID'],
fit=False,
xname='Column One',
yname='Column Three',
title='Bivariate Analysis between Column One and Column Three'
)
"""
Explanation: With a few points below -2.0, it might be useful to know which data points they are. This can be done by passing the ID column to the labels argument and then selecting which labels to highlight with the highlight argument:
End of explanation
"""
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
groups=df['Condition'],
title='Bivariate Analysis between Column One and Column Three'
)
"""
Explanation: To check whether an individual Condition correlates between Column One and Column Three, the same analysis can be done, but this time by passing the Condition column to the groups argument. For example:
End of explanation
"""
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
groups=df['Condition'],
boxplot_borders=False,
highlight=['Group B'],
title='Bivariate Analysis between Column One and Column Three'
)
"""
Explanation: The borders of the graph have boxplots for all the data points on the x-axis and y-axis, regardless of which group they belong to. The borders can be removed by adding the argument boxplot_borders=False.
According to the Spearman Correlation, there is no significant correlation among the groups. Group B is the only group with a negative slope, but it can be difficult to see the data points for Group B with so many colors on the graph. The Group B data points can be highlighted by using the argument highlight=['Group B']. In fact, any number of groups can be highlighted by passing a list of the group names using the highlight argument.
End of explanation
"""
analyze(
df['Two'],
groups=df['Condition'],
categories='Condition',
name='Column Two',
title='Oneway from pandas'
)
"""
Explanation: Performing a location test on data in a pandas DataFrame requires some explanation. A location test can be performed with stacked or unstacked data. One method will be easier than the other depending on how the data to be analyzed is stored. In the example DataFrame used so far, to perform a location test between the groups in the Condition column, the stacked method will be easier to use.
Let's start with an example. The following code will perform a location test using each of the four values in the Condition column:
End of explanation
"""
analyze(
[df['One'], df['Two'], df['Three'], df['Four']],
groups=['One', 'Two', 'Three', 'Four'],
categories='Columns',
title='Unstacked Oneway'
)
"""
Explanation: From the graph, there are four groups: Group A, Group B, Group C and Group D in Column Two. The analysis shows that the variances are equal and there is no significant difference in the means. Noting the tests that are being performed, the Bartlett test is being used to check for equal variance because all four groups are normally distributed, and the Oneway ANOVA is being used to test if all means are equal because all four groups are normally distributed and the variances are equal. However, if not all the groups are normally distributed, the Levene Test will be used to check for equal variance instead of the Bartlett Test. Also, if the groups are not normally distributed or the variances are not equal, the Kruskal-Wallis test will be used instead of the Oneway ANOVA.
If instead the four columns One, Two, Three and Four are to be analyzed, the easier way to perform the analysis is with the unstacked method. The following code will perform a location test of the four columns:
End of explanation
"""
analyze(
{'One': df['One'], 'Two': df['Two'], 'Three': df['Three'], 'Four': df['Four']},
categories='Columns',
title='Unstacked Oneway Using a Dictionary'
)
"""
Explanation: To perform a location test using the unstacked method, the columns to be analyzed are passed in a list or tuple, and the groups argument needs to be a list or tuple of the group names. Note that the groups argument was used to explicitly define the group names. This only works if the group names and their order are known in advance. If they are unknown, a dictionary (or dictionary comprehension) can be used instead to pair each group name with its data:
End of explanation
"""
def set_quarter(data):
    # Each group passed in by the GroupBy object's apply() method
    # contains rows for a single month, so the first value
    # identifies the month for the whole group.
    month = data['Month'].iloc[0]
    if month in ('Jan', 'Feb', 'Mar'):
        quarter = 'Q1'
    elif month in ('Apr', 'May', 'Jun'):
        quarter = 'Q2'
    elif month in ('Jul', 'Aug', 'Sep'):
        quarter = 'Q3'
    elif month in ('Oct', 'Nov', 'Dec'):
        quarter = 'Q4'
    else:
        quarter = 'Unknown'
    data.loc[:, 'Quarter'] = quarter
    return data
"""
Explanation: The output will be identical to the previous example. The analysis shows that the variances are not equal and that the means are significantly different. Because the data in column Three is not normally distributed, the Levene Test is used to test for equal variance instead of the Bartlett Test, and the Kruskal-Wallis Test is used instead of the Oneway ANOVA.
With pandas, it's possible to perform advanced aggregation and filtering functions using the GroupBy object's apply() method. Since the sample sizes were small for each month in the above examples, it might be helpful to group the data by annual quarters instead. First, let's create a function that adds a column called Quarter to the DataFrame where the value is either Q1, Q2, Q3 or Q4 depending on the month.
End of explanation
"""
quarters = ('Q1', 'Q2', 'Q3', 'Q4')
df2 = df.groupby(df['Month']).apply(set_quarter)
data = {quarter: data['Two'] for quarter, data in df2.groupby(df2['Quarter'])}
analyze(
[data[quarter] for quarter in quarters],
groups=quarters,
categories='Quarters',
name='Column Two',
title='Oneway of Annual Quarters'
)
"""
Explanation: This function takes a DataFrame called data containing the rows for a single month (as passed by the GroupBy object's apply() method) and sets the variable quarter based on the month. Then, a new column called Quarter is added to data where the value of each row is equal to quarter. Finally, the resulting DataFrame object is returned.
Using the new function is simple. The same techniques from previous examples are used, but this time, a new DataFrame object called df2 is created by first grouping by the Month column then calling the apply() method which will run the set_quarter() function.
End of explanation
"""