8,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Built-in plotting methods for Raw objects
This tutorial shows how to plot continuous data as a time series, how to plot
the spectral density of continuous data, and how to plot the sensor locations
and projectors stored in
Step1: We've seen in a previous tutorial <tut-raw-class> how to plot data
from a
Step2: It may not be obvious when viewing this tutorial online, but by default, the
Step3: If the data have been filtered, vertical dashed lines will automatically
indicate filter boundaries. The spectrum for each channel type is drawn in
its own subplot; here we've passed the average=True parameter to get a
summary for each channel type, but it is also possible to plot each channel
individually, with options for how the spectrum should be computed,
color-coding the channels by location, and more. For example, here is a plot
of just a few sensors (specified with the picks parameter), color-coded
by spatial location (via the spatial_colors parameter, see the
documentation of
Step4: Alternatively, you can plot the PSD for every sensor on its own axes, with
the axes arranged spatially to correspond to sensor locations in space, using
Step5: This plot is also interactive; hovering over each "thumbnail" plot will
display the channel name in the bottom left of the plot window, and clicking
on a thumbnail plot will create a second figure showing a larger version of
the selected channel's spectral density (as if you had called
Step6: Plotting sensor locations from Raw objects
The channel locations in a
Step7: Plotting projectors from Raw objects
As seen in the output of
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
Explanation: Built-in plotting methods for Raw objects
This tutorial shows how to plot continuous data as a time series, how to plot
the spectral density of continuous data, and how to plot the sensor locations
and projectors stored in :class:~mne.io.Raw objects.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the :class:~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory:
End of explanation
raw.plot()
Explanation: We've seen in a previous tutorial <tut-raw-class> how to plot data
from a :class:~mne.io.Raw object using :doc:matplotlib
<matplotlib:index>, but :class:~mne.io.Raw objects also have several
built-in plotting methods:
:meth:~mne.io.Raw.plot
:meth:~mne.io.Raw.plot_psd
:meth:~mne.io.Raw.plot_psd_topo
:meth:~mne.io.Raw.plot_sensors
:meth:~mne.io.Raw.plot_projs_topomap
The first three are discussed here in detail; the last two are shown briefly
and covered in-depth in other tutorials.
Interactive data browsing with Raw.plot()
The :meth:~mne.io.Raw.plot method of :class:~mne.io.Raw objects provides
a versatile interface for exploring continuous data. For interactive viewing
and data quality checking, it can be called with no additional parameters:
End of explanation
raw.plot_psd(average=True)
Explanation: It may not be obvious when viewing this tutorial online, but by default, the
:meth:~mne.io.Raw.plot method generates an interactive plot window with
several useful features:
It spaces the channels equally along the y-axis.
20 channels are shown by default; you can scroll through the channels
using the :kbd:↑ and :kbd:↓ arrow keys, or by clicking on the
colored scroll bar on the right edge of the plot.
The number of visible channels can be adjusted by the n_channels
parameter, or changed interactively using :kbd:page up and :kbd:page
down keys.
You can toggle the display to "butterfly" mode (superimposing all
channels of the same type on top of one another) by pressing :kbd:b,
or start in butterfly mode by passing the butterfly=True parameter.
It shows the first 10 seconds of the :class:~mne.io.Raw object.
You can shorten or lengthen the window length using :kbd:home and
:kbd:end keys, or start with a specific window duration by passing the
duration parameter.
You can scroll in the time domain using the :kbd:← and
:kbd:→ arrow keys, or start at a specific point by passing the
start parameter. Scrolling using :kbd:shift:kbd:→ or
:kbd:shift:kbd:← scrolls a full window width at a time.
It allows clicking on channels to mark/unmark as "bad".
When the plot window is closed, the :class:~mne.io.Raw object's
info attribute will be updated, adding or removing the newly
(un)marked channels to/from the :class:~mne.Info object's bads
field (A.K.A. raw.info['bads']).
.. TODO: discuss annotation snapping in the below bullets
It allows interactive :term:annotation <annotations> of the raw data.
This allows you to mark time spans that should be excluded from future
computations due to large movement artifacts, line noise, or other
distortions of the signal. Annotation mode is entered by pressing
:kbd:a. See annotations-tutorial for details.
It automatically applies any :term:projectors <projector> before plotting
the data.
These can be enabled/disabled interactively by clicking the Proj
button at the lower right corner of the plot window, or disabled by
default by passing the proj=False parameter. See
tut-projectors-background for more info on projectors.
These and other keyboard shortcuts are listed in the Help window, accessed
through the Help button at the lower left corner of the plot window.
Other plot properties (such as color of the channel traces, channel order and
grouping, simultaneous plotting of :term:events, scaling, clipping,
filtering, etc.) can also be adjusted through parameters passed to the
:meth:~mne.io.Raw.plot method; see the docstring for details.
Plotting spectral density of continuous data
To visualize the frequency content of continuous data, the
:class:~mne.io.Raw object provides a :meth:~mne.io.Raw.plot_psd method to plot
the spectral density of the data.
End of explanation
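MNE computes and plots the spectral density for you, but the underlying idea can be illustrated with plain numpy. The following is a toy sketch (not MNE's implementation): a pure 50 Hz sine wave yields a power spectrum whose peak sits at 50 Hz.

```python
import numpy as np

fs = 1000.0                         # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)         # one second of samples
sig = np.sin(2 * np.pi * 50 * t)    # a pure 50 Hz sine wave

# a bare-bones power spectrum: squared magnitude of the FFT
power = np.abs(np.fft.rfft(sig)) ** 2
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

peak_freq = freqs[np.argmax(power)]   # 50.0
```

Real PSD estimators (like the Welch method MNE uses) average over windowed segments, but the principle is the same.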
midline = ['EEG 002', 'EEG 012', 'EEG 030', 'EEG 048', 'EEG 058', 'EEG 060']
raw.plot_psd(picks=midline)
Explanation: If the data have been filtered, vertical dashed lines will automatically
indicate filter boundaries. The spectrum for each channel type is drawn in
its own subplot; here we've passed the average=True parameter to get a
summary for each channel type, but it is also possible to plot each channel
individually, with options for how the spectrum should be computed,
color-coding the channels by location, and more. For example, here is a plot
of just a few sensors (specified with the picks parameter), color-coded
by spatial location (via the spatial_colors parameter, see the
documentation of :meth:~mne.io.Raw.plot_psd for full details):
End of explanation
raw.plot_psd_topo()
Explanation: Alternatively, you can plot the PSD for every sensor on its own axes, with
the axes arranged spatially to correspond to sensor locations in space, using
:meth:~mne.io.Raw.plot_psd_topo:
End of explanation
raw.copy().pick_types(meg=False, eeg=True).plot_psd_topo()
Explanation: This plot is also interactive; hovering over each "thumbnail" plot will
display the channel name in the bottom left of the plot window, and clicking
on a thumbnail plot will create a second figure showing a larger version of
the selected channel's spectral density (as if you had called
:meth:~mne.io.Raw.plot_psd on that channel).
By default, :meth:~mne.io.Raw.plot_psd_topo will show only the MEG
channels if MEG channels are present; if only EEG channels are found, they
will be plotted instead:
End of explanation
raw.plot_sensors(ch_type='eeg')
Explanation: Plotting sensor locations from Raw objects
The channel locations in a :class:~mne.io.Raw object can be easily plotted
with the :meth:~mne.io.Raw.plot_sensors method. A brief example is shown
here; notice that channels in raw.info['bads'] are plotted in red. More
details and additional examples are given in the tutorial
tut-sensor-locations.
End of explanation
raw.plot_projs_topomap(colorbar=True)
Explanation: Plotting projectors from Raw objects
As seen in the output of :meth:mne.io.read_raw_fif above, there are
:term:projectors <projector> included in the example :class:~mne.io.Raw
file (representing environmental noise in the signal, so it can later be
"projected out" during preprocessing). You can visualize these projectors
using the :meth:~mne.io.Raw.plot_projs_topomap method. By default it will
show one figure per channel type for which projectors are present, and each
figure will have one subplot per projector. The three projectors in this file
were only computed for magnetometers, so one figure with three subplots is
generated. More details on working with and plotting projectors are given in
tut-projectors-background and tut-artifact-ssp.
End of explanation
8,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 3 - Basic Artificial Neural Network
In this lab we will build a very rudimentary Artificial Neural Network (ANN) and use it to solve some basic classification problems. This example is implemented with only basic math and linear algebra functions using Python's scientific computing library numpy. This will allow us to study how each aspect of the network works, and to gain an intuitive understanding of its functions. In future labs we will use higher-level libraries such as Keras and Tensorflow which automate and optimize most of these functions, making the network much faster and easier to use.
The code and MNIST test data is taken directly from http://neuralnetworksanddeeplearning.com/
Step9: Next, we will build the artificial neural network by defining a new class called Network. This class will contain all the data for our neural network, as well as all the methods we need to compute activations between each layer, and train the network through backpropagation and stochastic gradient descent (SGD).
Step10: Finally, we define two helper functions which compute the sigmoid activation function and its derivative, which is used in backpropagation.
Step11: Iris dataset example
Now we will test our basic artificial neural network on a very simple classification problem. First we will use the seaborn data visualization library to load the 'iris' dataset,
which consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), with four features measuring the length and the width of each flower's sepals and petals. After we load the data we will visualize it using a pairwise plot created with a built-in function in seaborn. A pairwise plot is a kind of exploratory data analysis that helps us to find relationships between pairs of features within a multi-dimensional data set. In this case, we can use it to understand which features might be most useful for determining the species of the flower.
Step12: Next, we will prepare the data set for training in our ANN. Here is a list of operations we need to perform on the data set so that it will work with the Network class we created above
Step13: MNIST dataset example
Next, we will test our ANN on another, slightly more difficult classification problem. The data set we'll be using is called MNIST, which contains tens of thousands of scanned images of handwritten digits, classified according to the digit type from 0-9. The name MNIST comes from the fact that it is a Modified (M) version of a dataset originally developed by the United States' National Institute of Standards and Technology (NIST). This is a very popular dataset used to measure the effectiveness of Machine Learning models for image recognition. This time we don't have to do as much data management since the data is already provided in the right format here.
We will get into more details about working with images and proper data formats for image data in later labs, but you can already use this data to test the effectiveness of our network. With the default settings you should be able to get a classification accuracy of 95% in the test set.
note
Step14: We can use the matplotlib library to visualize one of the training images. In the data set, the pixel values of each 28x28 pixel image is encoded in a straight list of 784 numbers, so before we visualize it we have to use numpy's reshape function to convert it back to a 2d matrix form
Step15: Assignment 3 - classification
Now that you have a basic understanding of how an artificial neural network works and have seen it applied to a classification task using two types of data, see if you can use the network to solve another classification problem using another data set.
In the week-3 folder there is a data set called wine.csv which is another common data set used to test classification capabilities of machine learning algorithms. You can find a description of the data set here | Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(style="ticks", color_codes=True)
from sklearn.preprocessing import OneHotEncoder
from sklearn.utils import shuffle
Explanation: Lab 3 - Basic Artificial Neural Network
In this lab we will build a very rudimentary Artificial Neural Network (ANN) and use it to solve some basic classification problems. This example is implemented with only basic math and linear algebra functions using Python's scientific computing library numpy. This will allow us to study how each aspect of the network works, and to gain an intuitive understanding of its functions. In future labs we will use higher-level libraries such as Keras and Tensorflow which automate and optimize most of these functions, making the network much faster and easier to use.
The code and MNIST test data is taken directly from http://neuralnetworksanddeeplearning.com/ by Michael Nielsen. Please review the first chapter of the book for a thorough explanation of the code.
First we import the Python libraries we will be using, including the random library for generating random numbers, numpy for scientific computing, matplotlib and seaborn for creating data visualizations, and several helpful modules from the sci-kit learn machine learning library:
End of explanation
class Network(object):
def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network. For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron. The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1. Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs for later layers.
        """
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
    def feedforward(self, a):
        """Return the output of the network if "a" is input. The np.dot()
        function computes the matrix multiplication between the weight and input
        matrices for each set of layers. When used with numpy arrays, the '+'
        operator performs matrix addition.
        """
for b, w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w, a)+b)
return a
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent. The "training_data" is a list of tuples
        "(x, y)" representing the training inputs and the desired
        outputs. The other non-optional parameters specify the number
        of epochs, size of each mini-batch, and the learning rate.
        If "test_data" is provided then the network will be evaluated
        against the test data after each epoch, and partial progress
        printed out. This is useful for tracking progress, but slows
        things down substantially.
        """
# create an empty array to store the accuracy results from each epoch
results = []
n = len(training_data)
if test_data:
n_test = len(test_data)
# this is the code for one training step, done once for each epoch
for j in xrange(epochs):
# before each epoch, the data is randomly shuffled
random.shuffle(training_data)
# training data is broken up into individual mini-batches
mini_batches = [ training_data[k:k+mini_batch_size]
for k in xrange(0, n, mini_batch_size) ]
# then each mini-batch is used to update the parameters of the
# network using backpropagation and the specified learning rate
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
# if a test data set is provided, the accuracy results
# are displayed and stored in the 'results' array
if test_data:
num_correct = self.evaluate(test_data)
accuracy = "%.2f" % (100 * (float(num_correct) / n_test))
print "Epoch", j, ":", num_correct, "/", n_test, "-", accuracy, "% acc"
results.append(accuracy)
else:
print "Epoch", j, "complete"
return results
def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The "mini_batch" is a list of tuples "(x, y)", and "eta"
        is the learning rate.
        """
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb
for b, nb in zip(self.biases, nabla_b)]
def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x. ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``.
        """
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
# backward pass
delta = self.cost_derivative(activations[-1], y) * \
sigmoid_prime(zs[-1])
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Note that the variable l in the loop below is used a little
        # differently to the notation in Chapter 2 of the book. Here,
        # l = 1 means the last layer of neurons, l = 2 is the
        # second-last layer, and so on. It's a renumbering of the
        # scheme in the book, used here to take advantage of the fact
        # that Python can use negative indices in lists.
for l in xrange(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
return (nabla_b, nabla_w)
def evaluate(self, test_data):
        """Return the number of test inputs for which the neural
        network outputs the correct result. Note that the neural
        network's output is assumed to be the index of whichever
        neuron in the final layer has the highest activation.
        Numpy's argmax() function returns the position of the
        largest element in an array. We first create a list of
        predicted value and target value pairs, and then count
        the number of times those values match to get the total
        number correct.
        """
test_results = [(np.argmax(self.feedforward(x)), y)
for (x, y) in test_data]
return sum(int(x == y) for (x, y) in test_results)
def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
return (output_activations-y)
Explanation: Next, we will build the artificial neural network by defining a new class called Network. This class will contain all the data for our neural network, as well as all the methods we need to compute activations between each layer, and train the network through backpropagation and stochastic gradient descent (SGD).
End of explanation
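To see what the constructor above produces, here is a small sketch that repeats the same initialization logic outside the class and inspects the resulting shapes for the [2, 3, 1] example network mentioned in the docstring:

```python
import numpy as np

sizes = [2, 3, 1]   # the example network from the docstring above
biases = [np.random.randn(y, 1) for y in sizes[1:]]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

# one bias vector per non-input layer, one weight matrix per layer pair
[b.shape for b in biases]    # [(3, 1), (1, 1)]
[w.shape for w in weights]   # [(3, 2), (1, 3)]
```

Note that each weight matrix has shape (neurons in next layer, neurons in previous layer), which is what makes `np.dot(w, a)` in `feedforward` work.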
def sigmoid(z):
# The sigmoid activation function.
return 1.0/(1.0 + np.exp(-z))
def sigmoid_prime(z):
# Derivative of the sigmoid function.
return sigmoid(z)*(1-sigmoid(z))
Explanation: Finally, we define two helper functions which compute the sigmoid activation function and its derivative, which is used in backpropagation.
End of explanation
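As a quick sanity check, the two helpers can be exercised on their own (restated here so the snippet is self-contained); the sigmoid is centred on zero, where its derivative peaks at 0.25:

```python
import numpy as np

# restated here so the snippet is self-contained
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

sigmoid(0.0)         # 0.5  -- the curve is centred on zero
sigmoid_prime(0.0)   # 0.25 -- the slope is steepest at zero
```

The small derivative away from zero is what makes saturated sigmoid units learn slowly during backpropagation.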
iris_data = sns.load_dataset("iris")
# randomly shuffle data
iris_data = shuffle(iris_data)
# print first 5 data points
print iris_data[:5]
# create pairplot of iris data
g = sns.pairplot(iris_data, hue="species")
Explanation: Iris dataset example
Now we will test our basic artificial neural network on a very simple classification problem. First we will use the seaborn data visualization library to load the 'iris' dataset,
which consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), with four features measuring the length and the width of each flower's sepals and petals. After we load the data we will visualize it using a pairwise plot created with a built-in function in seaborn. A pairwise plot is a kind of exploratory data analysis that helps us to find relationships between pairs of features within a multi-dimensional data set. In this case, we can use it to understand which features might be most useful for determining the species of the flower.
End of explanation
# convert iris data to numpy format
iris_array = iris_data.as_matrix()
# split data into feature and target sets
X = iris_array[:, :4].astype(float)
y = iris_array[:, -1]
print y[0]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# convert the textual category data to integer using numpy's unique() function
_, y = np.unique(y, return_inverse=True)
# convert the list of targets to a vertical matrix with the dimensions [1 x number of samples]
# this is necessary for later computation
y = y.reshape(-1,1)
# combine feature and target data into a new python array
data = []
for i in range(X.shape[0]):
data.append(tuple([X[i].reshape(-1,1), y[i][0]]))
# split data into training and test sets
trainingSplit = int(.7 * len(data))
training_data = data[:trainingSplit]
test_data = data[trainingSplit:]
# create an instance of the one-hot encoding function from the sci-kit learn library
enc = OneHotEncoder()
# use the function to figure out how many categories exist in the data
enc.fit(y)
# convert only the target data in the training set to one-hot encoding
training_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data]
# define the network
net = Network([4, 32, 3])
# train the network using SGD, and output the results
results = net.SGD(training_data, 2, 10, 0.2, test_data=test_data)
# visualize the results
plt.plot(results)
plt.ylabel('accuracy (%)')
plt.ylim([0,100.0])
plt.show()
Explanation: Next, we will prepare the data set for training in our ANN. Here is a list of operations we need to perform on the data set so that it will work with the Network class we created above:
Convert data to numpy format
Normalize the data so that each features is scaled from 0 to 1
Split data into feature and target data sets by extracting specific rows from the numpy array. In this case the features are in the first four columns, and the target is in the last column, which in Python we can access with a negative index
Recombine the data into a single Python array, so that each entry in the array represents one sample, and each sample is composed of two numpy arrays, one for the feature data, and one for the target
Split this data set into training and testing sets
Finally, we also need to convert the targets of the training set to 'one-hot' encoding (OHE). OHE takes each piece of categorical data and converts it to a list of binary values the length of which is equal to the number of categories, and the position of the current category denoted with a '1' and '0' for all others. For example, in our dataset we have 3 possible categories: versicolor, virginica, and setosa. After applying OHE, versicolor becomes [1,0,0], virginica becomes [0,1,0], and setosa becomes [0,0,1]. OHE is often used to represent target data in neural networks because it allows easy comparison to the output coming from the network's final layer.
End of explanation
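The sklearn OneHotEncoder call above does the conversion for us, but the transformation itself is easy to sketch with plain numpy. The 0/1/2 species mapping in the comments below is illustrative, not necessarily the order np.unique actually produces:

```python
import numpy as np

# integer class labels; the code <-> species mapping is illustrative
# (np.unique assigns codes in sorted order of the label strings)
y = np.array([0, 2, 1])
one_hot = np.eye(3)[y]   # row i of the identity matrix encodes class i
# array([[1., 0., 0.],
#        [0., 0., 1.],
#        [0., 1., 0.]])
```

Each row has a single 1 in the position of its class, which is exactly the shape the network's three output neurons are compared against.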
import mnist_loader
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
Explanation: MNIST dataset example
Next, we will test our ANN on another, slightly more difficult classification problem. The data set we'll be using is called MNIST, which contains tens of thousands of scanned images of handwritten digits, classified according to the digit type from 0-9. The name MNIST comes from the fact that it is a Modified (M) version of a dataset originally developed by the United States' National Institute of Standards and Technology (NIST). This is a very popular dataset used to measure the effectiveness of Machine Learning models for image recongnition. This time we don't have to do as much data management since the data is already provided in the right format here.
We will get into more details about working with images and proper data formats for image data in later labs, but you can already use this data to test the effectiveness of our network. With the default settings you should be able to get a classification accuracy of 95% in the test set.
note: since this is a much larger data set than the Iris data, the training will take substantially more time.
End of explanation
img = training_data[0][0][:,0].reshape((28,28))
fig = plt.figure()
plt.imshow(img, interpolation='nearest', vmin = 0, vmax = 1, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
net = Network([784, 30, 10])
results = net.SGD(training_data, 2, 10, 3.0, test_data=test_data)
plt.plot(results)
plt.ylabel('accuracy (%)')
plt.ylim([0,100.0])
plt.show()
Explanation: We can use the matplotlib library to visualize one of the training images. In the data set, the pixel values of each 28x28 pixel image is encoded in a straight list of 784 numbers, so before we visualize it we have to use numpy's reshape function to convert it back to a 2d matrix form
End of explanation
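The reshape step can be tried on a stand-in array without loading MNIST at all; numpy fills the 28x28 matrix row by row:

```python
import numpy as np

flat = np.arange(784)           # stand-in for one flattened 28x28 image
img = flat.reshape((28, 28))

img.shape    # (28, 28)
img[1, 0]    # 28 -- row-major order: element 28 begins the second row
```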
wine_data = np.loadtxt(open("./data/wine.csv","rb"),delimiter=",")
wine_data = shuffle(wine_data)
X = wine_data[:,1:]
y = wine_data[:, 0]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# convert the textual category data to integer using numpy's unique() function
_, y = np.unique(y, return_inverse=True)
# convert the list of targets to a vertical matrix with the dimensions [1 x number of samples]
# this is necessary for later computation
y = y.reshape(-1,1)
# combine feature and target data into a new python array
data = []
for i in range(X.shape[0]):
data.append(tuple([X[i].reshape(-1,1), y[i][0]]))
# split data into training and test sets
trainingSplit = int(.8 * len(data))
training_data = data[:trainingSplit]
test_data = data[trainingSplit:]
# create an instance of the one-hot encoding function from the sci-kit learn library
enc = OneHotEncoder()
# use the function to figure out how many categories exist in the data
enc.fit(y)
# convert only the target data in the training set to one-hot encoding
training_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data]
# define the network
net = Network([13, 55, 3])
# train the network using SGD, and output the results
results = net.SGD(training_data, 30, 10, 0.44, test_data=test_data)
# visualize the results
plt.plot(results)
plt.ylabel('accuracy (%)')
plt.ylim([0,100.0])
plt.show()
Explanation: Assignment 3 - classification
Now that you have a basic understanding of how an artificial neural network works and have seen it applied to a classification task using two types of data, see if you can use the network to solve another classification problem using another data set.
In the week-3 folder there is a data set called wine.csv which is another common data set used to test classification capabilities of machine learning algorithms. You can find a description of the data set here:
https://archive.ics.uci.edu/ml/datasets/Wine
The code below uses numpy to import this .csv file as a 2d numpy array. As before, we first shuffle the data set, and then split it into feature and target sets. This time, the target is in the first column of the data, with the rest of the columns representing the 13 features.
From there you should be able to go through and format the data set in a similar way as we did for the Iris data above. Remember to split the data into both training and test sets, and encode the training targets as one-hot vectors. When you create the network, make sure to specify the proper dimensions for the input and output layer so that it matches the number of features and target categories in the data set. You can also experiment with different sizes for the hidden layer. If you are not achieving good results, try changing some of the hyper-parameters, including the size and quantity of hidden layers in the network specification, and the number of epochs, the size of a mini-batch, and the learning rate in the SGD function call. With a training/test split of 80/20 you should be able to achieve 100% accuracy within 30 epochs.
Remember to commit your changes and submit a pull request when you are done.
Hint: do not be fooled by the category labels that come with this data set! Even though the labels are already integers (1,2,3) we need to always make sure that our category labels are sequential integers and start with 0. To make sure this is the case you should always use the np.unique() function on the target data as we did with the Iris example above.
End of explanation
8,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 1 - About Functional Programming
What is Functional Programming
Functional programming is a programming paradigm that revolves around pure functions.
A pure function is a function which can be represented as a mathematical expression. That means, no side-effects should be present, i.e. no I/O operations, no global state changes, no database interactions.
<img src="files/PureFunction.png" width="500" alt="Pure Function Representation">
The output from a pure function depends ONLY on its inputs. Thus, if a pure function is called with the same inputs a million times, you would get the same result every single time.
Step1: In the above example, the output of the function global_sum changed due to the value of a, thus it is not a pure function.
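The global_sum cell itself is not included in this excerpt. A minimal sketch consistent with the description (the global name `a` comes from the text; the body is an assumption) might look like:

```python
a = 5

def global_sum(x):
    # impure: the result also depends on the global variable `a`
    return x + a

global_sum(3)   # 8
a = 10
global_sum(3)   # 13 -- same input, different output
```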
Step2: and in the above example better_sum, the function always returns the same value for a given set of inputs, and only the provided inputs can have any impact on the output of the function.
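Likewise, the better_sum cell is not shown here; a sketch consistent with the description might be:

```python
def better_sum(x, y):
    # pure: the output depends only on the inputs
    return x + y

better_sum(3, 5)   # 8, every single time
```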
## Characteristics of functional programming
- Functions are first class (objects). So, data and functions are treated as the same and have access to the same operations (such as passing a function to another function).
- Recursion is used as the primary control structure.
- There is a focus on LISt Processing: lists are often used with recursion on sub-lists as a substitute for loops.
- "Side-effects" are avoided. This excludes the almost ubiquitous pattern in imperative languages of assigning first one, then another value to the same variable to track the program state.
- FP either discourages or outright disallows statements, and instead works with the evaluation of expressions (in other words, functions plus arguments). In the pure case, one program is one expression (plus supporting definitions).
- FP worries about what is to be computed rather than how it is to be computed.
- Much FP utilizes "higher order" functions (in other words, functions that operate on functions that operate on functions).
Functions as First-Class citizens
In functional programming, functions can be treated as objects. That is, they can be assigned to a variable, passed as arguments, or even returned from other functions.
Step3: The lambda
The simplest way to initialize a pure function in Python is by using the lambda keyword, which helps in defining a one-line function. Functions initialized with lambda are often called anonymous functions.
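A common use of such anonymous functions, sketched here with our own example data, is as short throw-away functions passed to other functions such as sorted():

```python
pairs = [(2, 'b'), (1, 'c'), (3, 'a')]

# Sort by the second element of each tuple using an anonymous function as the key.
by_letter = sorted(pairs, key=lambda p: p[1])
print(by_letter)
```

The lambda exists only for this one call; there is no need to name a separate helper function.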
Step5: Functions as Objects
Functions are first-class objects in Python, meaning they have attributes and can be referenced and assigned to variables.
Step6: Adding attributes to a function
Step7: higher-order function
Python also supports higher-order functions, meaning that functions can accept other functions as arguments and return functions to the caller.
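A minimal sketch of a higher-order function (the name `compose` and the helper lambdas are ours): it both accepts functions as arguments and returns a new function.

```python
def compose(f, g):
    # Returns a new function that applies g first, then f.
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

# compose is higher-order: it takes functions in and hands a function back.
double_then_add_one = compose(increment, double)
print(double_then_add_one(5))  # (5 * 2) + 1 = 11
```

Swapping the argument order swaps the evaluation order: `compose(double, increment)(5)` gives `(5 + 1) * 2`.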
Step14: 13=2*4+5
F -> product_func
m => 5
x -> 2
y -> 4
2*4+5 = 8+5 = 13
In the above example higher-order function that takes two inputs- A function F(x) and a multiplier m.
Nested Functions
In Python, Function(s) can also be defined within the scope of another function. If this type of function definition is used the inner function is only in scope inside the outer function, so it is most often useful when the inner function is being returned (moving it to the outer scope) or when it is being passed into another function.
Notice that in the below example, a new instance of the function inner() is created on each call to outer(). That is because it is defined during the execution of outer(). The creation of the second instance has no impact on the first.
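This independence of instances can be sketched directly (a minimal example of ours, not from the tutorial's cells):

```python
def outer():
    def inner():
        return "hello"
    return inner

f1 = outer()
f2 = outer()

# Each call to outer() creates a brand-new inner function object.
print(f1 is f2)    # False: two distinct function objects
print(f1(), f2())  # but they behave identically
```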
Step15: Inner / Nested Functions - When to use
Encapsulation
You use inner functions to protect them from anything happening outside of the function, meaning that they are hidden from the global scope.
Step16: NOTE
Step17: Following DRY (Don't Repeat Yourself)
This type can be used if a section of code is repeated in numerous places. For example, you might write a function which processes a file, and you want to accept either an open file object or a file name
Step18: or have similar logic which can be replaced by a function, such as mathematical functions, or code base which can be clubed by using some parameters.
Step19: ??? why code
Step20: Closures & Factory Functions <sup>1</sup>
They are techniques for implementing lexically scoped name binding with first-class functions. A closure is a record storing a function together with an environment: a mapping associating each free variable of the function (variables that are used locally, but defined in an enclosing scope) with the value or reference to which the name was bound when the closure was created.
A closure—unlike a plain function—allows the function to access those captured variables through the closure's copies of their values or references, even when the function is invoked outside their scope.
Step21: both a and b are closures—or rather, variables with a closure as value—in both cases produced by returning a nested function with a free variable from an enclosing function, so that the free variable binds to the parameter x of the enclosing function. However, in the first case the nested function has a name, g, while in the second case the nested function is anonymous. The closures need not be assigned to a variable, and can be used directly, as in the last lines—the original name (if any) used in defining them is irrelevant. This usage may be deemed an "anonymous closure".
1: Copied from : "https://en.wikipedia.org/wiki/Closure_(computer_programming)"
Step22: Closures can avoid the use of global values and provide some form of data hiding. They can also provide an object-oriented solution to the problem.
When there are few methods (one method in most cases) to be implemented in a class, closures can provide an alternate and more elegant solution. But when the number of attributes and methods gets larger, it is better to implement a class.
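A hedged sketch of the "closure instead of a one-method class" idea (the names `make_counter` and `Counter` are ours):

```python
# Closure-based counter: `count` is hidden state captured by the closure.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3

# The class-based equivalent needs noticeably more boilerplate:
class Counter:
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1
        return self.count
```

Each call to make_counter() produces an independent counter, just as each Counter() instance does.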
Comprehensions
Using comprehensions is often a way both to make code more compact and to shift our focus from the "how" to the "what". It is an expression that uses the same keywords as loop and conditional blocks, but inverts their order to focus on the data rather than on the procedure.
Simply changing the form of expression can often make a surprisingly large difference in how we reason about code and how easy it is to understand. The ternary operator also performs a similar restructuring of our focus, using the same keywords in a different order.
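The ternary restructuring mentioned above can be sketched like this (the example values are ours):

```python
n = 7

# Statement form: four lines focused on the "how".
if n % 2 == 0:
    parity = "even"
else:
    parity = "odd"

# Expression (ternary) form: one line focused on the "what".
parity_expr = "even" if n % 2 == 0 else "odd"
print(parity, parity_expr)
```

Both produce the same value; the expression form reads as a description of the result rather than a procedure.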
List Comprehensions
A way to create a new list from existing list based on defined logic
Unconditional Comprehensions
Step23: Conditional Comprehensions
Step24: !!!! Tip !!!!
Copy the variable assignment for our new empty list (line 3)
Copy the expression that we’ve been append-ing into this new list (line 6)
Copy the for loop line, excluding the final : (line 4)
Copy the if statement line, also without the : (line 5)
Step26: Nested if statements in for loop
Step27: Set Comprehensions
Set comprehensions allow sets to be constructed using the same principles as list comprehensions, the only difference is that resulting sequence is a set and "{}" are used instead of "[]".
Step28: Dictionary Comprehensions
Step29: This map doesn’t take a named function. It takes an anonymous, inlined function defined with lambda. The parameters of the lambda are defined to the left of the colon. The function body is defined to the right of the colon. The result of running the function body is (implicitly) returned.
The non-functional code below takes a list of real names and pairs each with a randomly assigned code name.
Step30: Generator Comprehension
They are simply a generator expression with a parenthesis "()" around it. Otherwise, the syntax and the way of working is like list comprehension, but a generator comprehension returns a generator instead of a list.
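A small sketch of that laziness (our own example): values come out one at a time, only when requested.

```python
# A generator comprehension: like a list comprehension, but lazy.
squares = (x ** 2 for x in range(5))

# Values are produced on demand rather than all at once.
first = next(squares)
rest = list(squares)
print(first, rest)
```

Because nothing is materialized up front, a generator comprehension can work over sequences too large to fit in memory as a list.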
Step31: Summary
When struggling to write a comprehension, don’t panic. Start with a for loop first and copy-paste your way into a comprehension.
Any for loop that looks like this
Step32: Can be rewritten into a list comprehension like this
Step33: NOTE
If you can nudge a for loop until it looks like the ones above, you can rewrite it as a list comprehension.
Recursion
In functional programming, iteration constructs such as while and for statements are avoided; there is also no provision for state updates.
In FP, recursion is used in place of iteration, since any iterative code can be converted to recursive code, as shown in the examples below.
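As a hedged sketch of such a conversion (our own example, simpler than the Fibonacci cells below): the same sum computed with a loop and with recursion.

```python
# Iterative sum of a list, using a mutable accumulator ...
def sum_iter(nums):
    total = 0
    for n in nums:
        total += n
    return total

# ... converted to a recursive form with no mutable state.
def sum_rec(nums):
    if not nums:
        return 0
    return nums[0] + sum_rec(nums[1:])

print(sum_iter([1, 2, 3, 4]), sum_rec([1, 2, 3, 4]))
```

The recursive version expresses the sum as "first element plus the sum of the rest", with the empty list as the base case.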
Step34: or, using lambda
Step35: map, reduce and filter
These are three functions which facilitate a functional approach to programming. map, reduce and filter are three higher-order functions that appear in all pure functional languages including Python. They are often used in functional code to make it more elegant.
Map
It basically provides a kind of parallelism by calling the requested function over all elements in a list/array, or in other words,
Map applies a function to all the items in the given list and returns a new list.
It takes a function and a collection of items as parameters and makes a new, empty collection, runs the function on each item in the original collection and inserts each return value into the new collection. It then returns the updated collection.
This is a simple map that takes a list of names and returns a list of the lengths of those names
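Sketched here using the same names list that appears in the code cells below:

```python
names = ["Manish Kumar", "Aalok", "Mayank Johri", "Durgaprasad"]

# map applies len to every item; list() materializes the lazy map object.
lengths = list(map(len, names))
print(lengths)
```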
Step36: This can be rewritten as a lambda
Step37: Reduce
Reduce takes a function and a collection of items. It returns a value that is created by combining the items. This is a simple reduce. It returns the sum of all the items in the collection.
Step38: In the above example, x is the current iterated item and a is the accumulator.
It is the value returned by the execution of the lambda on the previous item. reduce() walks through the items. For each one, it runs the lambda on the current a and x and returns the result as the a of the next iteration.
What is a in the first iteration? There is no previous iteration result for it to pass along. reduce() uses the first item in the collection for a in the first iteration and starts iterating at the second item. That is, the first x is the second item.
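The two seeding behaviors can be sketched side by side (our own minimal example):

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Without an initializer: the first item seeds the accumulator,
# and iteration starts at the second item.
total = reduce(lambda a, x: a + x, nums)

# With an initializer (here 20): the accumulator starts at 20
# and every item of the collection is folded in.
total_seeded = reduce(lambda a, x: a + x, nums, 20)
print(total, total_seeded)
```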
This code counts how often the word 'the' appears in a list of strings
Step39: NOTE
Step40: functools
The functools module provides utilities for higher-order functions: functions that act on or return other functions.
Step51: Now let's see the magic of partial
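A minimal sketch of functools.partial (the `power` function and its arguments are ours): partial "freezes" some arguments of a function, producing a new function of fewer arguments.

```python
from functools import partial

def power(base, exp):
    return base ** exp

# Freeze exp, producing one-argument functions.
square = partial(power, exp=2)
cube = partial(power, exp=3)
print(square(5), cube(5))  # 25 125
```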
Step54: Here we import wraps from the functools module and use it as a decorator for the nested wrapper function inside of another_function, in order to copy the __name__ and __doc__ attributes onto the wrapper function.
update_wrapper
The partial object does not have __name__ or __doc__ attributes by default, and without those attributes decorated functions are more difficult to debug. update_wrapper() copies or adds attributes from the original function to the partial object.
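A small sketch of this attribute-copying with functools.wraps (the `logged` and `greet` names are ours):

```python
import functools

def logged(func):
    @functools.wraps(func)  # copies __name__ and __doc__ onto wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def greet(name):
    """Return a greeting."""
    return "Hello, " + name

# Without functools.wraps, greet.__name__ would be 'wrapper'
# and greet.__doc__ would be None.
print(greet.__name__, "-", greet.__doc__)
```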
# not so functional function
a = 0
def global_sum(x):
global a
x += a
return x
print(global_sum(1))
print(a)
a = 11
print(global_sum(1))
print(a)
# not so functional function
a = 0
def global_sum(x):
global a
return x + a
print(global_sum(x=1))
print(a)
a = 11
print(global_sum(x=1))
print(a)
Explanation: Section 1 - About Functional Programming
What is Functional Programming
Functional programming is a programming paradigm that revolves around pure functions.
A pure function is a function which can be represented as a mathematical expression. That means, no side-effects should be present, i.e. no I/O operations, no global state changes, no database interactions.
<img src="files/PureFunction.png" width="500" alt="Pure Function Representation">
The output of a pure function depends ONLY on its inputs. Thus, if a pure function is called with the same inputs a million times, you would get the same result every single time.
End of explanation
# a better functional function
def better_sum(a, x):
return a+x
num = better_sum(1, 1)
print(num)
num = better_sum(1, 3)
print(num)
num = better_sum(1, 1)
print(num)
Explanation: In the above example, the output of the function global_sum changed due to the value of the global variable a; thus it is not a pure function.
End of explanation
a = 10
def test_function():
pass
print(id(a), dir(a))
print(id(test_function), dir(test_function))
Explanation: In the example above, better_sum always returns the same value for a given set of inputs, and only the provided inputs can affect the output of the function.
## Characteristics of functional programming
Functions are first class (objects). So, data and functions are treated the same and have access to the same operations (such as passing a function to another function).
Recursion as primary control structure.
There is a focus on LISt Processing; lists are often used with recursion on sub-lists as a substitute for loops.
Avoid "side-effects". It excludes the almost ubiquitous pattern in imperative languages of assigning first one, then another value to the same variable to track the program state.
Either discourages or outright disallows statements, and instead works with the evaluation of expressions (in other words, functions plus arguments). In the pure case, one program is one expression (plus supporting definitions).
FP worries about what is to be computed rather than how it is to be computed.
Much FP utilizes "higher order" functions (in other words, functions that operate on functions that operate on functions).
Functions as First-Class citizens
In functional programming, functions can be treated as objects. That is, they can be assigned to a variable, passed as arguments, or even returned from other functions.
End of explanation
# Example lambda keyword
product_func = lambda x, y: x*y
print(product_func(10, 20))
print(product_func(10, 2))
concat = lambda x, y: [x, y]
print(concat([1,2,3], 4))
Explanation: The lambda
The simplest way to initialize a pure function in Python is by using the lambda keyword, which helps in defining a one-line function. Functions initialized with lambda are often called anonymous functions.
End of explanation
def square(x):
    """This returns the square of the requested number `x`."""
    return x**2
print(square(10))
print(square(100))
# Assignation to another variable
mySquare = square
print(mySquare(100))
print(square)
print(mySquare)
print(id(square))
print(id(mySquare))
# attributes present
print("*"*30)
print(dir(square))
print("*"*30)
print(mySquare.__name__)
print("*"*30)
print(square.__code__)
print("*"*30)
print(square.__doc__)
Explanation: Functions as Objects
Functions are first-class objects in Python, meaning they have attributes and can be referenced and assigned to variables.
End of explanation
square.d = 10
print(dir(square))
Explanation: Adding attributes to a function
End of explanation
print(square(square(square(2))))
product_func = lambda x, y: x*y
sum_func = lambda F, m: lambda x, y: F(x, y)+m
print(sum_func(product_func, 5)(2, 4))
print(sum_func)
print(sum_func(product_func, 5))
print(sum_func(product_func, 5)(3, 5))
Explanation: higher-order function
Python also supports higher-order functions, meaning that functions can accept other functions as arguments and return functions to the caller.
End of explanation
def outer(a):
    """Outer function"""
    y = 0
    def inner(x):
        """inner function"""
        y = x*x*a
        return(y)
    print(a)
    return inner
my_out = outer
my_out(102)
o = outer(10)
b = outer(20)
print("*"*20)
print(b)
print(o)
print("*"*20)
print(o(10))
print(b(10))
def outer():
    """Outer function"""
    if 'a' in locals():
        a += 10
    else:
        print("~")
        a = 20
    def inner(x):
        """inner function"""
        return(x*x*a)
    print(a)
    return inner
# oo = outer
# print(oo.__doc__)
o = outer()
print("*"*20)
print(o)
print(o(10))
print(o.__doc__)
b = outer()
print(b)
print(b(30))
print(b.__doc__)
x = 0
def outer():
x = 1
def inner():
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
x = 0
def outer():
x = 1
def inner():
nonlocal x
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
# inner: 2
# outer: 2
# global: 0
def outer(a):
    """Outer function"""
    y = 1
    def inner(x):
        """inner function"""
        nonlocal y
        print(y)
        y = x*x*a
        return("y =" + str(y))
    print(a)
    return inner
o = outer(10)
b = outer(20)
print("*"*20)
print(o)
print(o(10))
print("*"*20)
print(b)
print(b(10))
Explanation: 13=2*4+5
F -> product_func
m => 5
x -> 2
y -> 4
2*4+5 = 8+5 = 13
In the above example higher-order function that takes two inputs- A function F(x) and a multiplier m.
Nested Functions
In Python, Function(s) can also be defined within the scope of another function. If this type of function definition is used the inner function is only in scope inside the outer function, so it is most often useful when the inner function is being returned (moving it to the outer scope) or when it is being passed into another function.
Notice that in the below example, a new instance of the function inner() is created on each call to outer(). That is because it is defined during the execution of outer(). The creation of the second instance has no impact on the first.
End of explanation
# Encapsulation
def increment(current):
def inner_increment(x): # hidden from outer code
return x + 1
next_number = inner_increment(current)
return [current, next_number]
print(increment(10))
Explanation: Inner / Nested Functions - When to use
Encapsulation
You use inner functions to protect them from anything happening outside of the function, meaning that they are hidden from the global scope.
End of explanation
increment.inner_increment(109)
### NOT WORKING
def update(str_val):
def updating(ori_str, key, value):
token = "$"
if key in ori_str:
ori_str = ori_str.replace(token+key, value)
return ori_str
keyval = [{"test1": "val_test", "t1" : "val_1"}, {"test2": "val_test2", "t2" : "val_2"}]
keyval1 = [{"test1": "val_test", "t1" : "val_1"}, {"test2": "val_test2", "t2" : "val_2"}]
ori_str = "This is a $test1 and $test2, $t1 and $t2"
# for k in keyval:
# for key, value in k.items():
# ori_str = updateing(ori_str, key, value)
    # the next line is invalid syntax, kept here as a note of the attempted comprehension:
    # sdd = [ key, value [for key, value in k] for(k in keyval) ]
print(ori_str)
update("D")
ld = [{'a': 10, 'b': 20}, {'p': 10, 'u': 100}]
[kv for d in ld for kv in d.items()]
ori_str = "This is a $test;1 and $test2, $t1 and $t2"
print(ori_str.replace("test1", "TEST1"))
print(ori_str)
Explanation: NOTE: We cannot directly access the inner function
End of explanation
# Keepin’ it DRY
def process(file_name):
def do_stuff(file_process):
for line in file_process:
print(line)
if isinstance(file_name, str):
with open(file_name, 'r') as f:
do_stuff(f)
else:
do_stuff(file_name)
process(["test", "test3", "t33"])
process("test.txt")
Explanation: Following DRY (Don't Repeat Yourself)
This type can be used if a section of code is repeated in numerous places. For example, you might write a function which processes a file, and you want to accept either an open file object or a file name:
End of explanation
def square(n):
return n**2
def cube(n):
return n**3
print(square(2))
def sqr(a, b):
return a**b
Explanation: or have similar logic which can be replaced by a function, such as mathematical functions, or code base which can be clubed by using some parameters.
End of explanation
def test():
print("TESTTESTTEST")
def yes(name):
print("Ja, ", name)
return True
return yes
d = test()
print("XSSSS")
print(d("Venky"))
def power(exp):
def subfunc(a):
return a**exp
return subfunc
square = power(2)
hexa = power(6)
print(square)
print(hexa)
print(square(5)) # 5**2
print()
print(hexa(3)) # 3**6
print(power(6)(3))
# subfunc(3) where exp = 6
# SQuare
# exp -> 2
# Square(5)
# a -> 5
# 5**2
# 25
# Power(6)(3, x)
def a1(m):
x = m * 2
def b(v, t=None):
if t:
print(x, m, t)
return v + t
else:
print(x, m, v)
return v + x
return b
n = a1(2)
print(n(3))
print(n(3, 10))
def f1(a):
def f2(b):
return f2
def f3(c):
return f3
def f4(d):
return f4
def f5(e):
return f5
print (f1(1)(2)(3)(4)(5))
def f1(a):
def f2(b):
def f3(c):
def f4(d):
def f5(e):
print(e)
return f5
return f4
return f3
return f2
f1(1)(2)(3)(4)(5)
Explanation: ??? why code
End of explanation
def f(x):
def g(y):
return x + y
return g
def h(x):
return lambda y: x + y
a = f(1)
b = h(1)
print(a, b)
print(a(5), b(5))
print(f(1)(5), h(1)(5))
Explanation: Closures & Factory Functions <sup>1</sup>
They are techniques for implementing lexically scoped name binding with first-class functions. A closure is a record storing a function together with an environment: a mapping associating each free variable of the function (variables that are used locally, but defined in an enclosing scope) with the value or reference to which the name was bound when the closure was created.
A closure—unlike a plain function—allows the function to access those captured variables through the closure's copies of their values or references, even when the function is invoked outside their scope.
End of explanation
def make_adder(x):
def add(y):
return x + y
return add
plus10 = make_adder(10)
print(plus10(12)) # make_adder(10).add(12)
print(make_adder(10)(12))
Explanation: both a and b are closures—or rather, variables with a closure as value—in both cases produced by returning a nested function with a free variable from an enclosing function, so that the free variable binds to the parameter x of the enclosing function. However, in the first case the nested function has a name, g, while in the second case the nested function is anonymous. The closures need not be assigned to a variable, and can be used directly, as in the last lines—the original name (if any) used in defining them is irrelevant. This usage may be deemed an "anonymous closure".
1: Copied from : "https://en.wikipedia.org/wiki/Closure_(computer_programming)"
End of explanation
# Original
doubled_numbers = []
for n in range(1,6):
doubled_numbers.append(n*2)
print(doubled_numbers)
# list comprehensions
doubled_numbers = [n * 2 for n in range(1,12,2)] # 1 ,3, 5, 7, 9, 11
print(doubled_numbers)
Explanation: Closures can avoid the use of global values and provide some form of data hiding. They can also provide an object-oriented solution to the problem.
When there are few methods (one method in most cases) to be implemented in a class, closures can provide an alternate and more elegant solution. But when the number of attributes and methods gets larger, it is better to implement a class.
Comprehensions
Using comprehensions is often a way both to make code more compact and to shift our focus from the "how" to the "what". It is an expression that uses the same keywords as loop and conditional blocks, but inverts their order to focus on the data rather than on the procedure.
Simply changing the form of expression can often make a surprisingly large difference in how we reason about code and how easy it is to understand. The ternary operator also performs a similar restructuring of our focus, using the same keywords in a different order.
List Comprehensions
A way to create a new list from existing list based on defined logic
Unconditional Comprehensions
End of explanation
doubled_odds = []
for n in range(1,12):
if n % 2 == 1:
doubled_odds.append(n * 2)
print(doubled_odds)
doubled_odds = [n * 2 for n in range(1,12) if n% 2 == 1]
print(doubled_odds)
Explanation: Conditional Comprehensions
End of explanation
# FROM
numbers = range(2,10)
doubled_odds = []
for n in numbers:
if n % 2 == 1:
doubled_odds.append(n * 2)
print(doubled_odds)
# TO
numbers = range(2,10)
doubled_odds = [n * 2 for n in numbers if n % 2 == 1]
Explanation: !!!! Tip !!!!
Copy the variable assignment for our new empty list (line 3)
Copy the expression that we’ve been append-ing into this new list (line 6)
Copy the for loop line, excluding the final : (line 4)
Copy the if statement line, also without the : (line 5)
End of explanation
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0]
lst = []
for v in l:
if v == 0 :
lst.append ('Zero')
else:
if v % 2 == 0:
lst.append ('even')
else:
lst.append ('odd')
print(lst)
lst = ["Zero" if v == 0 else "even" if v%2 == 0 else "odd" for v in l]
print(lst)
print(['yes' if v == 1 else 'no' if v == 2 else 'idle' for v in l])
# def flatten_list_new(lst, result=None):
# Flattens a nested list
# >>> flatten_list([ [1, 2, [3, 4] ], [5, 6], 7])
# [1, 2, 3, 4, 5, 6, 7]
#
# # if result is None:
# # result = []
# # else:
# result = [x if not isinstance(x, list) else flatten_list_new(x, list) for x in lst]
# # result = [ x if not isinstance(x, list) else isinstance(x, list) for x in lst ]
# # result = [x for x in a if not isinstance(x, list) else isinstance(x, list)]
# # for x in a:
# # if isinstance(x, list):
# # flatten_list(x, result)
# # else:
# # result.append(x)
# return result
# lst = [ [1, 2, [3, 4] ], [5, 6], 7]
# print(flatten_list_new(lst))
lst = []
for a in range(10):
if a % 2==0:
for x in range(a, 10):
lst.append(x)
print(lst)
n = 10
lsts = [x for a in range(10) if a % 2==0 for x in range(a, 10) ]
print(lsts)
n = 10
lsts = [x for a in range(10)
if a % 2==0
for x in range(a, 10) ]
print(lsts)
%%time
import os
files = []
for d in os.walk(r"E:\code\mj\lep\Section 1 - Core Python"):
for f in d[2]:
if f.endswith(".py"):
files.append(os.path.join(d[0], f))
print(len(files))
%%time
import os
files = [os.path.join(d[0], f)
for d in os.walk(r"E:\code\mj\lep\Section 1 - Core Python")
for f in d[2] if f.endswith(".py")]
print(len(files))
%%time
import os
files = [os.path.join(d[0], f)
for d in os.walk(r"E:\code\mj\lep\Section 1 - Core Python")
for f in d[2] if f.endswith(".py")]
print(len(files))
%%time
restFiles = []
for d in os.walk(r"C:\apps\PortableGit"):
if "etc" in d[0]:
for f in d[2]:
if f.endswith(".exe"):
restFiles.append(os.path.join(d[0], f))
print(len(restFiles))
%%time
restFiles = [os.path.join(d[0], f)
for d in os.walk(r"C:\apps\PortableGit")
if "etc" in d[0]
for f in d[2]
if f.endswith(".exe")]
print(len(restFiles))
%%time
matrix = []
for row_idx in range(0, 3):
itmList = []
for item_idx in range(0, 3):
if item_idx == row_idx:
itmList.append(1)
else:
itmList.append(0)
matrix.append(itmList)
print(matrix)
matrix = [ [ 1 if item_idx == row_idx
else 0 for item_idx in range(0, 3)]
for row_idx in range(0, 3) ]
print(matrix)
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
]
transposed = []
for i in range(4):
# the following 3 lines implement the nested listcomp
transposed_row = []
for row in matrix:
transposed_row.append(row[i])
transposed.append(transposed_row)
print(transposed)
transposed = [[row[i] for row in matrix] for i in range(4)]
print(transposed)
lst = [1,2,34,4,5]
print(lst)
lst.append(2)
print(lst)
lst.append(2)
print(lst)
l = set(lst)
print(l)
Explanation: Nested if statements in for loop
End of explanation
names = [ 'aaLok', 'Manish', 'aalOk', 'Manish', 'Gupta', 'Johri', 'Mayank' ]
new_names1 = [name[0].upper() + name[1:].lower() for name in names if len(name) > 1 ]
new_names = {name[0].upper() + name[1:].lower() for name in names if len(name) > 1 }
print(new_names1)
print(new_names)
done_urls.append(url)
urls = set(urls)
left_urls = list(urls.difference(done_urls))
Explanation: Set Comprehensions
Set comprehensions allow sets to be constructed using the same principles as list comprehensions, the only difference is that resulting sequence is a set and "{}" are used instead of "[]".
End of explanation
original = {'a':10, 'b': 34, 'A': 7, 'Z':3, "z": 199}
mcase_frequency = { k.lower() : original.get(k.lower(), 0) + original.get(k.upper(), 0) for k in original.keys() }
print(mcase_frequency)
original = {'a':10, 'b': 34, 'A': 7, 'Z':3, "z": 199, 'c': 10}
flipped = {value: key for key, value in original.items()}
print(flipped)
original = {'a': 10, 'b': 34, 'A': 7, 'Z':3, "z": 199, 'c': 10}
newdict = {}
for key, value in original.items():
if (value not in newdict):
newdict[value] = key
print(newdict)
# NOTE: a comprehension cannot reference the dict it is building; here
# `newdict` refers to the dict produced by the loop above, so every value
# is already present and the comprehension yields an empty dict.
newdict = {value: key for key, value in original.items() if (value not in newdict)}
print(newdict)
x = {"a": 10, "b": 20, "c": 20}
print(x)
x["a"] = 100
print(x)
Explanation: Dictionary Comprehensions
End of explanation
import random
names_dict = {}
names = ["Mayank", "Manish", "Aalok", "Roshan Musheer"]
code_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']
random.shuffle(code_names)
for i in range(len(names)):
names_dict[names[i]] = code_names[i]
print(names_dict)
random.shuffle(code_names)
new_dict = {names[i] : code_names[i] for i in range(len(names))}
print(new_dict)
import random
names_dict = {}
names = ["Mayank", "Manish", "Aalok", "Roshan Musheer"]
code_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']
random.shuffle(code_names)
d = list(zip(names, code_names))
print(d)
names_dict = dict(d)
print(names_dict)
Explanation: This map doesn’t take a named function. It takes an anonymous, inlined function defined with lambda. The parameters of the lambda are defined to the left of the colon. The function body is defined to the right of the colon. The result of running the function body is (implicitly) returned.
The unfunctional code below takes a list of real names and appends them with randomly assigned code names.
End of explanation
x = (x**2 for x in range(20))
print(x)
print(list(x))
itm = 10
print(itm / 2)
Explanation: Generator Comprehension
They are simply a generator expression with a parenthesis "()" around it. Otherwise, the syntax and the way of working is like list comprehension, but a generator comprehension returns a generator instead of a list.
End of explanation
def condition_based_on(itm):
return itm % 2 == 0
old_things = range(2,20, 3)
new_things = []
for ITEM in old_things:
if condition_based_on(ITEM):
new_things.append(ITEM)
print(new_things)
Explanation: Summary
When struggling to write a comprehension, don’t panic. Start with a for loop first and copy-paste your way into a comprehension.
Any for loop that looks like this:
End of explanation
new_things = [ITEM for ITEM in old_things if condition_based_on(ITEM)]
print(new_things)
Explanation: Can be rewritten into a list comprehension like this:
End of explanation
def fib(n):
# the first two values
l = [1, 1]
# Calculating the others
for i in range(2, n + 1):
l.append(l[i -1] + l[i - 2])
return l[n]
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
def fib(n):
if n > 1:
return fib(n - 1) + fib(n - 2)
else:
return 1
# Show Fibonacci from 1 to 5
for i in range(1,6):
print (i, '=>', fib(i))
Explanation: NOTE
If you can nudge a for loop until it looks like the ones above, you can rewrite it as a list comprehension.
Recursion
In functional programming, iteration constructs such as while and for statements are avoided; there is also no provision for state updates.
In FP, recursion is used in place of iteration, since any iterative code can be converted to recursive code, as shown in the examples below.
End of explanation
fibonacci = (lambda x, x_1=1, x_2=0:
x_2 if x == 0
else fibonacci(x - 1, x_1 + x_2, x_1))
print(fibonacci(10))
Explanation: or, using lambda
End of explanation
names = ["Manish Kumar", "Aalok", "Mayank Johri","Durgaprasad"]
lst = []
for name in names:
lst.append(len(name))
print(lst)
names = ["Manish Kumar", "Aalok", "Mayank Johri","Durgaprasad"]
tmp = map(len, names)
print(tmp)
lst = tuple(tmp)
print(lst)
# This is a map that squares every number in the passed collection:
power = map(lambda x: x*x, lst)
print(power)
print(list(power))
help(map)
list(map(pow,[2, 3, 4], [10, 11, 12]))
import random
names_dict = {}
names = ["Mayank", "Manish", "Aalok", "Roshan Musheer"]
code_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']
for i in range(len(names)):
    name = random.choice(code_names)  # must run before the while check, else NameError
    while name in names_dict.values():
        name = random.choice(code_names)
    names_dict[names[i]] = name
print(names_dict)
Explanation: map, reduce and filter
These are three functions which facilitate a functional approach to programming. map, reduce and filter are three higher-order functions that appear in all pure functional languages including Python. They are often used in functional code to make it more elegant.
Map
It basically provides a kind of parallelism by calling the requested function over all elements in a list/array, or in other words,
Map applies a function to all the items in the given list and returns a new list.
It takes a function and a collection of items as parameters and makes a new, empty collection, runs the function on each item in the original collection and inserts each return value into the new collection. It then returns the updated collection.
This is a simple map that takes a list of names and returns a list of the lengths of those names:
End of explanation
import random
names = ["Mayank", "Manish", "Aalok", "Roshan Musheer"]
code_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']
random.shuffle(code_names)
a_dict = lambda: {k: v for k, v in zip(names, code_names)}
print(a_dict())
# Exercise -> Try the above one using map, if possible
def dictMap(f, xs) :
return dict((f(i), i) for i in xs)
lst = [1,2,4,6]
lst2 = [3,5, 7,9]
print(list(map(pow, lst, lst2)))
def fahrenheit(T):
return ((float(9)/5)*T + 32)
temp = (36.5, 37, 37.5, 39)
F = map(fahrenheit, temp)
print(list(F))
Explanation: This can be rewritten as a lambda:
End of explanation
from functools import reduce
product = reduce(lambda a, x: a * x, range(1, 6))
print(product) # (((1 * 2 )* 3 )* 4) * 5
product = reduce(lambda a, x: a * x, range(-1, 6))
print(product)
## NOTE the 20 at the end
print(reduce(lambda a, x: a + x, range(1, 6), 20))  # -> 20 + 1 + 2 + 3 + 4 + 5 = 35
help(reduce)
Explanation: Reduce
Reduce takes a function and a collection of items. It returns a value that is created by combining the items. This is a simple reduce. It returns the sum of all the items in the collection.
End of explanation
sentences = ['Copy the variable assignment for our new empty list',
             'Copy the expression that we’ve been append-ing into this new list',
             'Copy the for loop line, excluding the final',
             'Copy the if statement line, also without the']
count = 0
for sentence in sentences:
count += sentence.count('the')
print(count)
help(reduce)
# This is the same code written as a reduce:
from functools import reduce
def countme(x):
return x.count('the')
sentences = ['Copy the variable assignment for our new empty list',
             'Copy the expression that we’ve been append-ing into this new list',
             'Copy the for loop line, excluding the final',
             'Copy the if statement line, also without the']
sam_count = reduce(lambda a, x: a + countme(x),
sentences, 0)
print(sam_count)
Explanation: In the above example, x is the current iterated item and a is the accumulator.
It is the value returned by the execution of the lambda on the previous item. reduce() walks through the items. For each one, it runs the lambda on the current a and x and returns the result as the a of the next iteration.
What is a in the first iteration? There is no previous iteration result for it to pass along. reduce() uses the first item in the collection for a in the first iteration and starts iterating at the second item. That is, the first x is the second item.
This code counts how often the word 'the' appears in a list of strings:
End of explanation
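A tiny illustration (not from the original notebook) of that third argument: the initial accumulator may even be a different type from the items in the collection:

```python
from functools import reduce

# Fold a list of ints into a string; the initial accumulator "" sets the result type
joined = reduce(lambda acc, x: acc + str(x), [1, 2, 3], "")
print(joined)  # '123'
```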
fib = [0,1,1,2,3,5,8,13,21,34,55]
result = filter(lambda x: x % 2 != 0, fib)
print(list(result))
def get_odd(val):
return val % 2 != 0
result = list(filter(get_odd, fib))
print(result)
result = filter(lambda x: x % 2 == 0, fib)
print(list(result))
def get_even(val):
return val % 2 == 0
result = list(filter(get_even, fib))
print(result)
apis = [{'name': 'UpdateUser', 'type': 'POST', "body": "{'name': '$name'}"},
{'name': 'addUser', 'type': 'POST', "body": "{name : '$name'}"},
{'name': 'listUsers', 'type': 'GET'},
{'name': 'listUsers', 'type': ''},
{'name': 'listUsers_withNone', 'type': None},
{'name': 'testWithoutType'}]
posts = 0
for api in apis:
if 'type' in api and api['type'] == 'POST':
posts += 1
print(posts)
posts = 0
c = []
c = list(filter(lambda x: 'type' in x and x['type'] != 'POST', apis))
print(c)
print(len(list(c)))
posts = 0
c = []
c = list(filter(lambda x: 'type' in x and x['type'] == 'POST', apis))
print(c)
print(len(list(c)))
people = [{'name': 'Mary', 'height': 160},
{'name': 'Isla', 'height': 80},
{'name': 'Sam'}]
heights = list(map(lambda x: x['height'],
                   filter(lambda x: 'height' in x, people)))
print(heights)
if len(heights) > 0:
from operator import add
average_height = reduce(add, heights) / len(heights)
print(average_height)
Explanation: NOTE:
How does this code come up with its initial a? The starting point for the running count of 'the' cannot be the first sentence itself. The initial accumulator is specified with the third argument to reduce(). This allows the use of a value of a different type from the items in the collection.
Benefits map and reduce
they are often one-liners.
the important parts of the iteration - the collection, the operation and the return value - are always in the same places in every map and reduce.
the code in a loop may affect variables defined before it or code that runs after it. By convention, maps and reduces are functional.
map and reduce are elemental operations. Every time a person reads a for loop, they have to work through the logic line by line. There are few structural regularities they can use to create a scaffolding on which to hang their understanding of the code. In contrast, map and reduce are at once building blocks that can be combined into complex algorithms, and elements that the code reader can instantly understand and abstract in their mind. “Ah, this code is transforming each item in this collection. It’s throwing some of the transformations away. It’s combining the remainder into a single output.”
map and reduce have many friends that provide useful, tweaked versions of their basic behaviour. For example: filter, all, any and find.
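For instance, `any` and `all` are reduce-like builtins (a quick illustration with made-up numbers, not from the original text):

```python
nums = [1, 3, 5, 8]

# any: True if at least one element satisfies the predicate
print(any(x % 2 == 0 for x in nums))  # True (8 is even)

# all: True only if every element satisfies it
print(all(x % 2 == 0 for x in nums))  # False
```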
Filtering
The function filter(function, list) offers an elegant way to select all the elements of a list for which the function function returns True.
The function filter(f,l) needs a function f as its first argument. f returns a Boolean value, i.e. either True or False. This function will be applied to every element of the list l. Only if f returns True will the element of the list be included in the result list.
End of explanation
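As an aside (not part of the original notebook), passing `None` as the function makes `filter` keep only the truthy elements:

```python
mixed = [0, 1, '', 'a', None, [], [2], False, True]
truthy = list(filter(None, mixed))
print(truthy)  # [1, 'a', [2], True]
```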
def power(base, exponent):
return base ** exponent
def square(base):
return power(base, 2)
def cube(base):
return power(base, 3)
Explanation: functools
The functools module is for higher-order functions: functions that act on or return other functions. In general, any callable object can be treated as a function for the purposes of this module.
Common functions in functools are as follows
partial
reduce
partial
functools.partial does the following:
Makes a new version of a function with one or more arguments already filled in.
New version of a function documents itself.
End of explanation
from functools import partial
square = partial(power, exponent=2)
cube = partial(power, exponent=3)
print(square(2))
print(cube(2))
print(square(2, exponent=4))
print(cube(2, exponent=9))
from functools import partial
def multiply(x,y):
return x * y
# create a new function that multiplies by 2
db2 = partial(multiply,2)
print(db2(4))
db4 = partial(multiply, 4)
print(db4(3))
from functools import partial
#----------------------------------------------------------------------
def add(x, y):
return x + y
#----------------------------------------------------------------------
def multiply(x, y):
return x * y
#----------------------------------------------------------------------
def run(func):
print (func())
#----------------------------------------------------------------------
def main():
a1 = partial(add, 1, 2)
m1 = partial(multiply, 5, 8)
run(a1)
run(m1)
if __name__ == "__main__":
main()
def another_function(func):
    """A function that accepts another function"""
def wrapper():
        """A wrapping function"""
val = "The result of %s is %s" % (func(),
eval(func())
)
return val
return wrapper
#----------------------------------------------------------------------
@another_function
def a_function():
    """A pretty useless function"""
return "1+1"
#----------------------------------------------------------------------
if __name__ == "__main__":
print (a_function.__name__)
print (a_function.__doc__)
print(a_function())
from functools import wraps
#----------------------------------------------------------------------
def another_function(func):
    """A function that accepts another function"""
@wraps(func)
def wrapper():
        """A wrapping function"""
val = "The result of %s is %s" % (func(),
eval(func())
)
return val
return wrapper
#----------------------------------------------------------------------
@another_function
def a_function():
    """A pretty useless function"""
return "1+1"
#----------------------------------------------------------------------
if __name__ == "__main__":
#a_function()
print (a_function.__name__)
print (a_function.__doc__)
print(a_function())
Explanation: Now let's see the magic of partial
End of explanation
import functools
def myfunc1(a, b=2):
print ('\tcalled myfunc1 with:', (a, b))
return
def myfunc(a, b=2):
    """Docstring for myfunc()."""
print ('\tcalled myfunc with:', (a, b))
return
def show_details(name, f):
    """Show details of a callable object."""
print ('%s:' % name)
print ('\tobject:', f)
print ('\t__name__:',)
try:
print (f.__name__)
except AttributeError:
print ('(no __name__)')
print ('\t__doc__', repr(f.__doc__))
    print()
return
show_details('myfunc1', myfunc1)
print("~"*20)
show_details('myfunc', myfunc)
p1 = functools.partial(myfunc, b=4)
print("+"*20)
show_details('raw wrapper', p1)
print("^"*20)
print ('Updating wrapper:')
print ('\tassign:', functools.WRAPPER_ASSIGNMENTS)
print ('\tupdate:', functools.WRAPPER_UPDATES)
print("*"*20)
functools.update_wrapper(p1, myfunc)
show_details('updated wrapper', p1)
Explanation: Here we import wraps from the functools module and use it as a decorator on the nested wrapper function inside another_function, copying the decorated function's __name__ and __doc__ onto the wrapper.
update_wrapper
The partial object does not have __name__ or __doc__ attributes by default, and without those attributes decorated functions are more difficult to debug. Using update_wrapper() copies or adds attributes from the original function to the partial object.
End of explanation |
8,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: Check that the folder exists
Step4: List of data files in data_dir
Step5: Data load
Initial loading of the data
Step6: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step7: We need to define some parameters
Step8: We should check if everything is OK with an alternation histogram
Step9: If the plot looks good we can apply the parameters with
Step10: Measurements infos
All the measurement data is in the d variable. We can print it
Step11: Or check the measurements duration
Step12: Compute background
Compute the background using automatic threshold
Step13: Burst search and selection
Step14: Preliminary selection and plots
Step15: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods
Step16: Zero threshold on nd
Select bursts with
Step17: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
Step18: Selection 2
Bursts are here weighted using weights $w$
Step19: Selection 3
Bursts are here selected according to
Step20: Save data to file
Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step22: This is just a trick to format the different variables | Python Code:
ph_sel_name = "all-ph"
data_id = "27d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:37:43 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
Explanation: Data folder:
End of explanation
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Check that the folder exists:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list
## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for P.E. 2012-12-6 dataset
# file_list.pop(1)
# file_list = file_list[:-1]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files in data_dir:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor channels, excitation period, and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)
assert d.dir_ex == 0
assert d.leakage == 0
d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)
ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
Explanation: Burst search and selection
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
ds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
ds_sas.num_bursts
dx = ds_sas0
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas2
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas3
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
plt.title('(nd + na) for A-only population using different S cutoff');
dx = ds_sa
alex_jointplot(dx);
dplot(ds_sa, hist_S)
Explanation: Preliminary selection and plots
End of explanation
dx = ds_sa
bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)
## Weights
weights = None
## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_hist_orig = fitter_g.hist_pdf
S_2peaks = fitter_g.params.loc[0, 'p1_center']
dir_ex_S2p = S_2peaks/(1 - S_2peaks)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)
## KDE
fitter_g.calc_kde(bandwidth=bandwidth)
fitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak = fitter_g.kde_max_pos[0]
dir_ex_S_kde = S_peak/(1 - S_peak)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));
## 2-Asym-Gaussian
fitter_ag = mfit.MultiFitter(dx.S)
fitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))
#print(fitter_ag.fit_obj[0].model.fit_report())
S_2peaks_a = fitter_ag.params.loc[0, 'p1_center']
dir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_ag, ax=ax[1])
ax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));
Explanation: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods:
- Histogram fit with 2 Gaussians or with 2 asymmetric Gaussians
(an asymmetric Gaussian has right- and left-side of the peak
decreasing according to different sigmas).
- KDE maximum
In the following we apply these methods using different selection
or weighting schemes to reduce the weight of the FRET population and make
fitting of the A-only population easier.
Even selection
Here A-only and FRET population are evenly selected.
End of explanation
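The conversion used repeatedly in these cells, from a fitted A-only S peak to the direct-excitation coefficient, is simply S/(1-S). As a tiny standalone sketch (not part of FRETBursts):

```python
def dir_ex_from_S(S_peak):
    """Direct-excitation coefficient (na/naa) from the fitted A-only S peak."""
    return S_peak / (1 - S_peak)

# e.g. a fitted peak at S = 0.1 gives a coefficient of 1/9
print(dir_ex_from_S(0.1))
```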
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)
fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)
mfit.plot_mfit(fitter)
plt.xlim(-0.1, 0.6)
Explanation: Zero threshold on nd
Select bursts with:
$$n_d < 0$$.
End of explanation
dx = ds_sa
## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0
## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']
dir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)
## KDE
fitter_w1.calc_kde(bandwidth=bandwidth)
fitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w1 = fitter_w1.kde_max_pos[0]
dir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)
def plot_weights(x, weights, ax):
ax2 = ax.twinx()
x_sort = x.argsort()
ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)
ax2.set_ylabel('Weights');
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w1, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))
mfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));
Explanation: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
End of explanation
## Weights
sizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0
## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']
dir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)
## KDE
fitter_w4.calc_kde(bandwidth=bandwidth)
fitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w4 = fitter_w4.kde_max_pos[0]
dir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w4, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))
mfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));
Explanation: Selection 2
Bursts are here weighted using weights $w$:
$$w = n_{aa} - |n_a + n_d|$$
End of explanation
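In numpy terms the weighting amounts to the AexAem counts minus the absolute total Dex counts, clipped at zero. A standalone sketch with made-up burst counts (not the notebook's data):

```python
import numpy as np

# Hypothetical per-burst photon counts
nd = np.array([5.0, 0.0, 1.0])     # Dex-Dem
na = np.array([3.0, 1.0, -2.0])    # Dex-Aem (background-corrected, can be negative)
naa = np.array([50.0, 2.0, 10.0])  # Aex-Aem

weights = naa - np.abs(nd + na)
weights[weights < 0] = 0  # clip negative weights to zero
print(weights)  # expected weights: 42, 1 and 9
```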
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)
dx = ds_saw
## Weights
weights = None
## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']
dir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)
## KDE
fitter_w5.calc_kde(bandwidth=bandwidth)
fitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w5 = fitter_w5.kde_max_pos[0]
S_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr
dir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)
## 2-Asym-Gaussians
fitter_w5a = mfit.MultiFitter(dx.S)
fitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))
S_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']
dir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)
#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))
print('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)
fig, ax = plt.subplots(1, 3, figsize=(19, 4.5))
mfit.plot_mfit(fitter_w5, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))
mfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));
mfit.plot_mfit(fitter_w5a, ax=ax[2])
mfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)
ax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));
Explanation: Selection 3
Bursts are here selected according to:
$$n_{aa} - |n_a + n_d| > 30$$
End of explanation
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
'S_2peaks_w5 S_2peaks_w5_fiterr\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
8,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameters
Step2: Imports
Step3: tf.data.Dataset
Step4: Let's have a look at the data
Step5: Keras model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course
Step6: Learning Rate schedule
Step7: Train and validate the model
Step8: Visualize predictions | Python Code:
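As a quick refresher on two of the crash-course terms above, softmax and cross-entropy can be sketched in a few lines of numpy (illustrative only, not the notebook's own code, which relies on Keras built-ins):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, one_hot_label):
    # Negative log-likelihood of the true class
    return -np.sum(one_hot_label * np.log(probs))

p = softmax(np.array([2.0, 1.0, 0.1]))
label = np.array([1.0, 0.0, 0.0])
print(p, cross_entropy(p, label))
```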
BATCH_SIZE = 64
EPOCHS = 10
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: Parameters
End of explanation
import os, re, math, json, shutil, pprint
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import IPython.display as display
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.ioff()
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=1)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0', figsize=(16,9))
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
batch_train_ds = training_dataset.unbatch().batch(N)
# eager execution: loop through datasets normally
if tf.executing_eagerly():
for validation_digits, validation_labels in validation_dataset:
validation_digits = validation_digits.numpy()
validation_labels = validation_labels.numpy()
break
for training_digits, training_labels in batch_train_ds:
training_digits = training_digits.numpy()
training_labels = training_labels.numpy()
break
else:
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = batch_train_ds.make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
fig = plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
plt.grid(b=None)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
display.display(fig)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
def plot_learning_rate(lr_func, epochs):
    xx = np.arange(epochs+1, dtype=float)
    y = [lr_func(x) for x in xx]
fig, ax = plt.subplots(figsize=(9, 6))
ax.set_xlabel('epochs')
ax.set_title('Learning rate\ndecays from {:0.3g} to {:0.3g}'.format(y[0], y[-2]))
ax.minorticks_on()
ax.grid(True, which='major', axis='both', linestyle='-', linewidth=1)
ax.grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
ax.step(xx,y, linewidth=3, where='post')
display.display(fig)
class PlotTraining(tf.keras.callbacks.Callback):
def __init__(self, sample_rate=1, zoom=1):
self.sample_rate = sample_rate
self.step = 0
self.zoom = zoom
self.steps_per_epoch = 60000//BATCH_SIZE
def on_train_begin(self, logs={}):
self.batch_history = {}
self.batch_step = []
self.epoch_history = {}
self.epoch_step = []
self.fig, self.axes = plt.subplots(1, 2, figsize=(16, 7))
plt.ioff()
def on_batch_end(self, batch, logs={}):
if (batch % self.sample_rate) == 0:
self.batch_step.append(self.step)
for k,v in logs.items():
# do not log "batch" and "size" metrics that do not change
# do not log training accuracy "acc"
if k=='batch' or k=='size':# or k=='acc':
continue
self.batch_history.setdefault(k, []).append(v)
self.step += 1
def on_epoch_end(self, epoch, logs={}):
plt.close(self.fig)
self.axes[0].cla()
self.axes[1].cla()
self.axes[0].set_ylim(0, 1.2/self.zoom)
self.axes[1].set_ylim(1-1/self.zoom/2, 1+0.1/self.zoom/2)
self.epoch_step.append(self.step)
for k,v in logs.items():
# only log validation metrics
if not k.startswith('val_'):
continue
self.epoch_history.setdefault(k, []).append(v)
display.clear_output(wait=True)
for k,v in self.batch_history.items():
self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.batch_step) / self.steps_per_epoch, v, label=k)
for k,v in self.epoch_history.items():
self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.epoch_step) / self.steps_per_epoch, v, label=k, linewidth=3)
self.axes[0].legend()
self.axes[1].legend()
self.axes[0].set_xlabel('epochs')
self.axes[1].set_xlabel('epochs')
self.axes[0].minorticks_on()
self.axes[0].grid(True, which='major', axis='both', linestyle='-', linewidth=1)
self.axes[0].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
self.axes[1].minorticks_on()
self.axes[1].grid(True, which='major', axis='both', linestyle='-', linewidth=1)
self.axes[1].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
display.display(self.fig)
Explanation: Imports
End of explanation
AUTO = tf.data.experimental.AUTOTUNE
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
    dataset = load_dataset(image_file, label_file)
    dataset = dataset.cache()  # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
    dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
    dataset = dataset.repeat()  # Mandatory for Keras for now
    dataset = dataset.batch(batch_size, drop_remainder=True)  # drop_remainder is important on TPU, batch size must be fixed
    dataset = dataset.prefetch(AUTO)  # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
    return dataset
def get_validation_dataset(image_file, label_file):
    dataset = load_dataset(image_file, label_file)
    dataset = dataset.cache()  # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
    dataset = dataset.batch(10000, drop_remainder=True)  # 10000 items in eval dataset, all in one batch
    dataset = dataset.repeat()  # Mandatory for Keras for now
    return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# For TPU, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
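As a quick sanity check outside TensorFlow, the one-hot step performed by read_label can be mirrored in plain NumPy. This is an illustrative sketch, not the tf.data code itself:

```python
import numpy as np

def one_hot(label, num_classes=10):
    # Mirror of tf.one_hot for a single scalar label: a vector of zeros
    # with a 1.0 at the label's index.
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[label] = 1.0
    return vec

# A raw MNIST label byte of 3 becomes a 10-way one-hot vector
encoded = one_hot(3)
print(encoded)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```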
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(kernel_size=3, filters=12, use_bias=False, padding='same'),
tf.keras.layers.BatchNormalization(center=True, scale=False),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(kernel_size=6, filters=24, use_bias=False, padding='same', strides=2),
tf.keras.layers.BatchNormalization(center=True, scale=False),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(kernel_size=6, filters=32, use_bias=False, padding='same', strides=2),
tf.keras.layers.BatchNormalization(center=True, scale=False),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=False),
tf.keras.layers.BatchNormalization(center=True, scale=False),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # 'lr' is a deprecated alias of learning_rate
loss='categorical_crossentropy',
metrics=['accuracy'])
# print model layers
model.summary()
# utility callback that displays training curves
plot_training = PlotTraining(sample_rate=10, zoom=16)
Explanation: Keras model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
End of explanation
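The spatial shapes flowing through the model above can be checked by hand: 'same' padding preserves height and width, and each stride-2 convolution halves them with ceiling division, so Flatten receives 7*7*32 = 1568 features. A small arithmetic sketch:

```python
# Walk-through of the spatial dimensions in the model above:
# 'same' padding keeps height/width, and each stride-2 convolution halves them
# (ceiling division), so Flatten sees 7*7*32 features.
def same_pad_out(size, stride):
    # output size of a 'same'-padded convolution
    return -(-size // stride)  # ceil(size / stride)

size = 28
size = same_pad_out(size, 1)   # conv 3x3, stride 1 -> 28
size = same_pad_out(size, 2)   # conv 6x6, stride 2 -> 14
size = same_pad_out(size, 2)   # conv 6x6, stride 2 -> 7
flattened = size * size * 32
print(size, flattened)  # 7 1568
```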
# lr decay function
def lr_decay(epoch):
    return 0.01 * math.pow(0.666, epoch)
# lr schedule callback
lr_decay_callback = tf.keras.callbacks.LearningRateScheduler(lr_decay, verbose=True)
# important to see what you are doing
plot_learning_rate(lr_decay, EPOCHS)
Explanation: Learning Rate schedule
End of explanation
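The decay values can be verified by hand; a standalone sketch of the schedule defined above:

```python
import math

# The schedule starts at 0.01 and multiplies by 0.666 every epoch
def lr_decay(epoch):
    return 0.01 * math.pow(0.666, epoch)

rates = [lr_decay(e) for e in range(4)]
print(rates)  # 0.01, then x0.666 each epoch
```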
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
print("Steps per epoch: ", steps_per_epoch)
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[plot_training, lr_decay_callback])
Explanation: Train and validate the model
End of explanation
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation |
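The np.argmax step above picks the most probable digit per row; a toy sketch with made-up softmax outputs (not actual model predictions):

```python
import numpy as np

# Each row is a softmax distribution over the 10 digits;
# the predicted label is simply the index of the largest probability.
probs = np.array([[0.05, 0.90, 0.05, 0, 0, 0, 0, 0, 0, 0],
                  [0.10, 0.10, 0.70, 0.05, 0.05, 0, 0, 0, 0, 0]])
predicted = np.argmax(probs, axis=1)
print(predicted)  # [1 2]
```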
8,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test Script
Used by tests.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Test Script Recipe Parameters
This should be called by the tests scripts only.
When run will generate a say hello log.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute Test Script
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Test Script
Used by tests.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Test Script Recipe Parameters
This should be called by the test scripts only.
When run, it will generate a 'say hello' log.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'hello':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'hour':[
],
'say':'Hello Manual',
'sleep':0
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Test Script
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
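Conceptually, json_set_fields walks the recipe and substitutes each {'field': ...} placeholder with the matching value from FIELDS. The following is a simplified, hypothetical sketch of that substitution, not StarThinker's actual implementation:

```python
def set_fields(node, fields):
    # Recursively replace {'field': {'name': ..., 'default': ...}} placeholders
    # with values from `fields`, falling back to the placeholder's default.
    if isinstance(node, dict):
        if set(node.keys()) == {'field'}:
            spec = node['field']
            return fields.get(spec['name'], spec.get('default'))
        return {k: set_fields(v, fields) for k, v in node.items()}
    if isinstance(node, list):
        return [set_fields(v, fields) for v in node]
    return node

tasks = [{'hello': {'auth': {'field': {'name': 'auth_read', 'default': 'user'}},
                    'say': 'Hello Manual'}}]
resolved = set_fields(tasks, {'auth_read': 'user'})
print(resolved)  # [{'hello': {'auth': 'user', 'say': 'Hello Manual'}}]
```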
8,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering Methods Covered Here
K Means,
Hclus,
DBSCAN,
Gaussian Mixture Models,
Birch,
miniBatch Kmeans
Mean Shift
Silhouette Coefficient
If the ground truth labels are not known, evaluation must be performed using the model itself. The Silhouette Coefficient (sklearn.metrics.silhouette_score) is an example of such an evaluation, where a higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores
Step1: https
Step2: To measure the quality of clustering results, there are two kinds of validity indices
Step3: Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred.
Drawbacks
Contrary to inertia, MI-based measures require the knowledge of the ground truth classes while almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).
Hierarchical clustering
Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample.
The AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach
Step4: DBSCAN
The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to the DBSCAN is the concept of core samples, which are samples that are in areas of high density. A cluster is therefore a set of core samples, each close to each other (measured by some distance measure) and a set of non-core samples that are close to a core sample (but are not themselves core samples). There are two parameters to the algorithm, min_samples and eps, which define formally what we mean when we say dense. Higher min_samples or lower eps indicate higher density necessary to form a cluster.
More formally, we define a core sample as being a sample in the dataset such that there exist min_samples other samples within a distance of eps, which are defined as neighbors of the core sample. This tells us that the core sample is in a dense area of the vector space. A cluster is a set of core samples that can be built by recursively taking a core sample, finding all of its neighbors that are core samples, finding all of their neighbors that are core samples, and so on. A cluster also has a set of non-core samples, which are samples that are neighbors of a core sample in the cluster but are not themselves core samples. Intuitively, these samples are on the fringes of a cluster.
Step6: Gaussian mixture models
a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information.
sklearn.mixture is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided.
A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.
Scikit-learn implements different classes to estimate Gaussian mixture models, that correspond to different estimation strategies.
cite- https
Step7: mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data
Step8: The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. Here it is 8.
BIRCH
The Birch (Balanced Iterative Reducing and Clustering using Hierarchies ) builds a tree called the Characteristic Feature Tree (CFT) for the given data. The data is essentially lossy compressed to a set of Characteristic Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Characteristic Feature subclusters (CF Subclusters) and these CF Subclusters located in the non-terminal CF Nodes can have CF Nodes as children.
The CF Subclusters hold the necessary information for clustering which prevents the need to hold the entire input data in memory. This information includes
Step9: # Mini Batch K-Means
The MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically reduce the amount of computation required to converge to a local solution. In contrast to other algorithms that reduce the convergence time of k-means, mini-batch k-means produces results that are generally only slightly worse than the standard algorithm.
The algorithm iterates between two major steps, similar to vanilla k-means. In the first step, samples are drawn randomly from the dataset, to form a mini-batch. These are then assigned to the nearest centroid. In the second step, the centroids are updated. In contrast to k-means, this is done on a per-sample basis.
Step10: Mean Shift
MeanShift clustering aims to discover blobs in a smooth density of samples. It is a centroid based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.
Mean shift clustering using a flat kernel.
Mean shift clustering aims to discover “blobs” in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.
Seeding is performed using a binning technique for scalability.
Step11: knowledge of the ground truth class assignments labels_true and
our clustering algorithm assignments of the same samples labels_pred
https | Python Code:
import warnings
warnings.filterwarnings("ignore")
from collections import Counter
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn import metrics
from sklearn.metrics import pairwise_distances
from sklearn.cluster import AgglomerativeClustering
from sklearn.cluster import DBSCAN
clusdf=pd.read_csv('C:\\Users\\ajaohri\\Desktop\\ODSP\\data\\plantTraits.csv')
Explanation: Clustering Methods Covered Here
K-Means,
Hierarchical clustering,
DBSCAN,
Gaussian Mixture Models,
Birch,
Mini-Batch K-Means,
Mean Shift
Silhouette Coefficient
If the ground truth labels are not known, evaluation must be performed using the model itself. The Silhouette Coefficient (sklearn.metrics.silhouette_score) is an example of such an evaluation, where a higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores:
a: The mean distance between a sample and all other points in the same class.
b: The mean distance between a sample and all other points in the next nearest cluster.
The Silhouette Coefficient s for a single sample is then given as: s = (b - a) / max(a, b)
Homogeneity, completeness and V-measure
In particular, the following are two desirable objectives for any cluster assignment:
- homogeneity: each cluster contains only members of a single class.
- completeness: all members of a given class are assigned to the same cluster.
Those concepts can be turned into scores, homogeneity_score and completeness_score. Both are bounded below by 0.0 and above by 1.0 (higher is better). Their harmonic mean, called V-measure, is computed by v_measure_score.
K Means Clustering
The KMeans algorithm clusters data by trying to separate samples in n groups of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares. This algorithm requires the number of clusters to be specified. It scales well to large number of samples and has been used across a large range of application areas in many different fields.
The k-means algorithm divides a set of samples into disjoint clusters, each described by the mean of the samples in the cluster; these means are commonly called the cluster "centroids". The K-means algorithm aims to choose centroids that minimise the inertia, or within-cluster sum-of-squares criterion:
Inertia, or the within-cluster sum of squares criterion, can be recognized as a measure of how internally coherent clusters are. It suffers from various drawbacks:
Inertia makes the assumption that clusters are convex and isotropic, which is not always the case. It responds poorly to elongated clusters, or manifolds with irregular shapes.
Inertia is not a normalized metric. But in very high-dimensional spaces, Euclidean distances tend to become inflated (this is an instance of the so-called “curse of dimensionality”). Running a dimensionality reduction algorithm such as PCA prior to k-means clustering can alleviate this problem and speed up the computations.
End of explanation
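The (b - a) / max(a, b) definition above can be checked by hand on a tiny 1-D example, using NumPy only (a sketch, not sklearn's implementation):

```python
import numpy as np

# Two tight 1-D clusters: {0, 1} and {10, 11}
points = np.array([0.0, 1.0, 10.0, 11.0])
labels = np.array([0, 0, 1, 1])

def silhouette(i):
    same = points[labels == labels[i]]
    other = points[labels != labels[i]]
    a = np.abs(same - points[i]).sum() / (len(same) - 1)  # mean intra-cluster distance
    b = np.abs(other - points[i]).mean()                  # mean distance to the other cluster
    return (b - a) / max(a, b)

scores = [silhouette(i) for i in range(len(points))]
print(np.mean(scores))  # well-separated clusters score close to 1
```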
clusdf = clusdf.drop("Unnamed: 0", axis=1)
clusdf.head()
clusdf.info()
#missing values
clusdf.apply(lambda x: sum(x.isnull().values), axis = 0)
clusdf.head(20)
clusdf=clusdf.fillna(clusdf.mean())
Explanation: https://vincentarelbundock.github.io/Rdatasets/doc/cluster/plantTraits.html
Usage
data(plantTraits)
Format
A data frame with 136 observations on the following 31 variables.
pdias
Diaspore mass (mg)
longindex
Seed bank longevity
durflow
Flowering duration
height
Plant height, an ordered factor with levels 1 < 2 < ... < 8.
begflow
Time of first flowering, an ordered factor with levels 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9
mycor
Mycorrhizas, an ordered factor with levels 0never < 1 sometimes< 2always
vegaer
aerial vegetative propagation, an ordered factor with levels 0never < 1 present but limited< 2important.
vegsout
underground vegetative propagation, an ordered factor with 3 levels identical to vegaer above.
autopoll
selfing pollination, an ordered factor with levels 0never < 1rare < 2 often< the rule3
insects
insect pollination, an ordered factor with 5 levels 0 < ... < 4.
wind
wind pollination, an ordered factor with 5 levels 0 < ... < 4.
lign
a binary factor with levels 0:1, indicating if plant is woody.
piq
a binary factor indicating if plant is thorny.
ros
a binary factor indicating if plant is rosette.
semiros
semi-rosette plant, a binary factor (0: no; 1: yes).
leafy
leafy plant, a binary factor.
suman
summer annual, a binary factor.
winan
winter annual, a binary factor.
monocarp
monocarpic perennial, a binary factor.
polycarp
polycarpic perennial, a binary factor.
seasaes
seasonal aestival leaves, a binary factor.
seashiv
seasonal hibernal leaves, a binary factor.
seasver
seasonal vernal leaves, a binary factor.
everalw
leaves always evergreen, a binary factor.
everparti
leaves partially evergreen, a binary factor.
elaio
fruits with an elaiosome (dispersed by ants), a binary factor.
endozoo
endozoochorous fruits, a binary factor.
epizoo
epizoochorous fruits, a binary factor.
aquat
aquatic dispersal fruits, a binary factor.
windgl
wind dispersed fruits, a binary factor.
unsp
unspecialized mechanism of seed dispersal, a binary factor.
End of explanation
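The fillna(clusdf.mean()) call above replaces each missing value with its column mean; a toy sketch of the same imputation (not the plantTraits data):

```python
import numpy as np
import pandas as pd

# Hypothetical frame reusing two plantTraits column names for illustration
df = pd.DataFrame({'height': [2.0, np.nan, 4.0],
                   'durflow': [1.0, 2.0, np.nan]})
imputed = df.fillna(df.mean())
print(imputed)
# The missing height becomes (2 + 4) / 2 = 3, the missing durflow (1 + 2) / 2 = 1.5
```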
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
clusdf_scale = scale(clusdf)
n_samples, n_features = clusdf_scale.shape
n_samples, n_features
reduced_data = PCA(n_components=2).fit_transform(clusdf_scale)
#assuming height to be Y variable to be predicted
#n_digits = len(np.unique(clusdf.height))
#From R Cluster sizes:
#[1] "26 29 5 32"
n_digits=4
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)
clusdf.head(20)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh
h = 0.02  # step size of the mesh
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
kmeans = KMeans(n_clusters=4, random_state=0).fit(reduced_data)
kmeans.labels_
np.unique(kmeans.labels_, return_counts=True)
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(kmeans.labels_)
plt.show()
kmeans.cluster_centers_
metrics.silhouette_score(reduced_data, kmeans.labels_, metric='euclidean')
Explanation: To measure the quality of clustering results, there are two kinds of validity indices: external indices and internal indices.
An external index is a measure of agreement between two partitions where the first partition is the a priori known clustering structure, and the second results from the clustering procedure (Dudoit et al., 2002).
Internal indices are used to measure the goodness of a clustering structure without external information (Tseng et al., 2005).
End of explanation
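An external index in action: adjusted_rand_score compares a predicted partition against ground-truth labels and is invariant to relabeling. A small sketch:

```python
from sklearn.metrics import adjusted_rand_score

labels_true = [0, 0, 1, 1]
labels_pred = [1, 1, 0, 0]  # same grouping, different label names

score = adjusted_rand_score(labels_true, labels_pred)
print(score)  # 1.0: identical partitions up to renaming
```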
clustering = AgglomerativeClustering(n_clusters=4).fit(reduced_data)
clustering
clustering.labels_
np.unique(clustering.labels_, return_counts=True)
from scipy.cluster.hierarchy import dendrogram, linkage
Z = linkage(reduced_data)
dendrogram(Z)
#dn1 = hierarchy.dendrogram(Z, ax=axes[0], above_threshold_color='y',orientation='top')
plt.show()
metrics.silhouette_score(reduced_data, clustering.labels_, metric='euclidean')
Explanation: Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred, external validity indices measure the agreement between the two partitions.
Drawbacks
Contrary to inertia, MI-based measures require knowledge of the ground truth classes, which is almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).
Hierarchical clustering
Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample.
The AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach: each observation starts in its own cluster, and clusters are successively merged together. The linkage criteria determines the metric used for the merge strategy:
Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach.
Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
Average linkage minimizes the average of the distances between all observations of pairs of clusters.
Single linkage minimizes the distance between the closest observations of pairs of clusters.
End of explanation
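The four linkage criteria can be compared directly; on two well-separated groups every criterion recovers the same partition (a toy sketch):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[0.0], [0.1], [10.0], [10.1]])

for linkage in ('ward', 'complete', 'average', 'single'):
    labels = AgglomerativeClustering(n_clusters=2, linkage=linkage).fit_predict(X)
    # points 0,1 end up together and points 2,3 together, regardless of criterion
    print(linkage, labels)
```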
db = DBSCAN().fit(reduced_data)
db
db.labels_
clusdf.shape
reduced_data.shape
reduced_data[:10,:2]
for i in range(0, reduced_data.shape[0]):
    if db.labels_[i] == 0:
        c1 = plt.scatter(reduced_data[i, 0], reduced_data[i, 1], c='r', marker='+')
    elif db.labels_[i] == 1:
        c2 = plt.scatter(reduced_data[i, 0], reduced_data[i, 1], c='g', marker='o')
    elif db.labels_[i] == -1:
        c3 = plt.scatter(reduced_data[i, 0], reduced_data[i, 1], c='b', marker='*')
plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2','Noise'])
plt.title('DBSCAN finds 2 clusters and noise')
plt.show()
Explanation: DBSCAN
The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to the DBSCAN is the concept of core samples, which are samples that are in areas of high density. A cluster is therefore a set of core samples, each close to each other (measured by some distance measure) and a set of non-core samples that are close to a core sample (but are not themselves core samples). There are two parameters to the algorithm, min_samples and eps, which define formally what we mean when we say dense. Higher min_samples or lower eps indicate higher density necessary to form a cluster.
More formally, we define a core sample as being a sample in the dataset such that there exist min_samples other samples within a distance of eps, which are defined as neighbors of the core sample. This tells us that the core sample is in a dense area of the vector space. A cluster is a set of core samples that can be built by recursively taking a core sample, finding all of its neighbors that are core samples, finding all of their neighbors that are core samples, and so on. A cluster also has a set of non-core samples, which are samples that are neighbors of a core sample in the cluster but are not themselves core samples. Intuitively, these samples are on the fringes of a cluster.
End of explanation
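The eps/min_samples mechanics are easiest to see on a toy line of points, where the isolated point ends up labeled -1 (noise); a sketch:

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[0.0], [0.5], [1.0], [10.0]])
db = DBSCAN(eps=1.0, min_samples=2).fit(X)
# the first three points are dense enough to form cluster 0;
# the isolated point at 10.0 has no neighbors within eps, so it is noise (-1)
print(db.labels_)
```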
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
clusdf.head()
reduced_data
# Plot the data with K Means Labels
from sklearn.cluster import KMeans
kmeans = KMeans(4, random_state=0)
labels = kmeans.fit(reduced_data).predict(reduced_data)
plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis');
X=reduced_data
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
def plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None):
    labels = kmeans.fit_predict(X)

    # plot the input data
    ax = ax or plt.gca()
    ax.axis('equal')
    ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)

    # plot the representation of the KMeans model
    centers = kmeans.cluster_centers_
    radii = [cdist(X[labels == i], [center]).max()
             for i, center in enumerate(centers)]
    for c, r in zip(centers, radii):
        ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, alpha=0.5, zorder=1))
kmeans = KMeans(n_clusters=4, random_state=0)
plot_kmeans(kmeans, X)
rng = np.random.RandomState(13)
X_stretched = np.dot(X, rng.randn(2, 2))
kmeans = KMeans(n_clusters=4, random_state=0)
plot_kmeans(kmeans, X_stretched)
# sklearn.mixture.GMM was removed in scikit-learn 0.20; GaussianMixture is its replacement
from sklearn.mixture import GaussianMixture as GMM
gmm = GMM(n_components=4).fit(X)
labels = gmm.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');
probs = gmm.predict_proba(X)
print(probs[:5].round(3))
size = 50 * probs.max(1) ** 2 # square emphasizes differences
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size);
from matplotlib.patches import Ellipse
def draw_ellipse(position, covariance, ax=None, **kwargs):
    """Draw an ellipse with a given position and covariance"""
    ax = ax or plt.gca()

    # Convert covariance to principal axes
    if covariance.shape == (2, 2):
        U, s, Vt = np.linalg.svd(covariance)
        angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
        width, height = 2 * np.sqrt(s)
    else:
        angle = 0
        width, height = 2 * np.sqrt(covariance)

    # Draw the Ellipse
    for nsig in range(1, 4):
        ax.add_patch(Ellipse(position, nsig * width, nsig * height,
                             angle=angle, **kwargs))
def plot_gmm(gmm, X, label=True, ax=None):
    ax = ax or plt.gca()
    labels = gmm.fit(X).predict(X)
    if label:
        ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)
    else:
        ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2)
    ax.axis('equal')

    w_factor = 0.2 / gmm.weights_.max()
    # covars_ was renamed to covariances_ in modern scikit-learn
    for pos, covar, w in zip(gmm.means_, gmm.covariances_, gmm.weights_):
        draw_ellipse(pos, covar, alpha=w * w_factor)
gmm = GMM(n_components=4, random_state=42)
plot_gmm(gmm, X)
gmm = GMM(n_components=4, covariance_type='full', random_state=42)
plot_gmm(gmm, X_stretched)
from sklearn.datasets import make_moons
Xmoon, ymoon = make_moons(200, noise=.05, random_state=0)
plt.scatter(Xmoon[:, 0], Xmoon[:, 1]);
gmm2 = GMM(n_components=2, covariance_type='full', random_state=0)
plot_gmm(gmm2, Xmoon)
gmm16 = GMM(n_components=16, covariance_type='full', random_state=0)
plot_gmm(gmm16, Xmoon, label=False)
Explanation: Gaussian mixture models
a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information.
sklearn.mixture is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided.
A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.
Scikit-learn implements different classes to estimate Gaussian mixture models, that correspond to different estimation strategies.
cite- https://jakevdp.github.io/PythonDataScienceHandbook/05.12-gaussian-mixtures.html
End of explanation
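A minimal fit with GaussianMixture (the current name for GMM in scikit-learn) on two well-separated 1-D blobs; the per-sample posterior probabilities sum to one. A sketch with simulated data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(0, 1, 300), rng.normal(10, 1, 300)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
means = sorted(m[0] for m in gm.means_)
probs = gm.predict_proba(X)
print(means)  # roughly [0, 10]: EM recovers the two component means
```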
%matplotlib inline
n_components = np.arange(1, 21)
models = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon)
for n in n_components]
plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC')
plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC')
plt.legend(loc='best')
plt.xlabel('n_components')
plt.show()
Explanation: mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data
End of explanation
from sklearn.cluster import Birch
X = reduced_data
brc = Birch(branching_factor=50, n_clusters=None, threshold=0.5,
            compute_labels=True)
brc.fit(X)
labels = brc.predict(X)
plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis');
plt.show()
Explanation: The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. Here it is 8.
BIRCH
The Birch (Balanced Iterative Reducing and Clustering using Hierarchies) algorithm builds a tree called the Characteristic Feature Tree (CFT) for the given data. The data is essentially lossy-compressed to a set of Characteristic Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Characteristic Feature subclusters (CF Subclusters), and these CF Subclusters located in the non-terminal CF Nodes can have CF Nodes as children.
The CF Subclusters hold the necessary information for clustering which prevents the need to hold the entire input data in memory. This information includes:
Number of samples in a subcluster.
Linear Sum - An n-dimensional vector holding the sum of all samples
Squared Sum - Sum of the squared L2 norm of all samples.
Centroids - linear sum / n_samples, stored to avoid recalculation.
Squared norm of the centroids.
It is a memory-efficient, online-learning algorithm provided as an alternative to MiniBatchKMeans. It constructs a tree data structure with the cluster centroids being read off the leaf. These can be either the final cluster centroids or can be provided as input to another clustering algorithm such as AgglomerativeClustering.
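A small sketch of that online behaviour (assuming scikit-learn; the blob data is illustrative): data can be fed to `Birch` incrementally with `partial_fit`, and the CF subcluster centroids are available afterwards.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

X_demo, _ = make_blobs(n_samples=200, centers=3, cluster_std=0.5,
                       random_state=0)

# n_clusters=3 runs a final global clustering over the CF subclusters.
birch = Birch(branching_factor=50, threshold=0.5, n_clusters=3)

# Feed the data in two batches instead of all at once.
birch.partial_fit(X_demo[:100])
birch.partial_fit(X_demo[100:])

labels = birch.predict(X_demo)
subcluster_centroids = birch.subcluster_centers_  # centroids read off the tree
```

The same `subcluster_centers_` could instead be handed to another clusterer, as the text describes.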
End of explanation
from sklearn.cluster import MiniBatchKMeans
import numpy as np
X = reduced_data
# manually fit on batches
kmeans = MiniBatchKMeans(n_clusters=2, random_state=0, batch_size=6)
kmeans = kmeans.partial_fit(X[0:6,:])
kmeans = kmeans.partial_fit(X[6:12,:])
kmeans.cluster_centers_
kmeans.predict(X)
# fit on the whole data
kmeans = MiniBatchKMeans(n_clusters=4, random_state=0, batch_size=6, max_iter=10).fit(X)
kmeans.cluster_centers_
kmeans.predict(X)
# Plot the decision boundary. For that, we will assign a color to each
# point in a mesh grid covering the data.
h = 0.02  # step size of the mesh
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Mini Batch K-Means
The MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically reduce the amount of computation required to converge to a local solution. In contrast to other algorithms that reduce the convergence time of k-means, mini-batch k-means produces results that are generally only slightly worse than the standard algorithm.
The algorithm iterates between two major steps, similar to vanilla k-means. In the first step, samples are drawn randomly from the dataset, to form a mini-batch. These are then assigned to the nearest centroid. In the second step, the centroids are updated. In contrast to k-means, this is done on a per-sample basis.
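As a rough sketch of the trade-off (assuming scikit-learn; data and sizes are arbitrary), we can fit both variants on the same data and compare their within-cluster sums of squares (`inertia_`):

```python
import numpy as np
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs

X_demo, _ = make_blobs(n_samples=2000, centers=4, random_state=0)

full = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_demo)
mini = MiniBatchKMeans(n_clusters=4, batch_size=256, n_init=10,
                       random_state=0).fit(X_demo)

# The mini-batch result is typically only slightly worse (higher inertia).
print(full.inertia_, mini.inertia_)
```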
End of explanation
print(__doc__)
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
# #############################################################################
# Use the PCA-reduced data from above as our sample data
X = reduced_data
# #############################################################################
# Compute clustering with MeanShift
# The following bandwidth can be automatically detected using estimate_bandwidth
bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
# #############################################################################
# Plot result
import matplotlib.pyplot as plt
from itertools import cycle
plt.figure(1)
plt.clf()
colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
my_members = labels == k
cluster_center = cluster_centers[k]
plt.plot(X[my_members, 0], X[my_members, 1], col + '.')
plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=14)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
Explanation: Mean Shift
MeanShift clustering aims to discover blobs in a smooth density of samples. It is a centroid based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.
Mean shift clustering using a flat kernel.
Seeding is performed using a binning technique for scalability.
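A condensed sketch of the same recipe (assuming scikit-learn; the explicit blob centers are illustrative so that the expected number of modes is known in advance):

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

# Three clearly separated blobs, so mean shift should find three modes.
X_demo, _ = make_blobs(n_samples=500,
                       centers=[[0, 0], [5, 5], [-5, 5]],
                       cluster_std=0.6, random_state=0)

bandwidth = estimate_bandwidth(X_demo, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X_demo)

n_clusters_found = len(np.unique(ms.labels_))
```

Note that, unlike k-means, the number of clusters is discovered rather than specified.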
End of explanation
from sklearn import metrics
from sklearn.metrics import pairwise_distances
from sklearn import datasets
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
import numpy as np
from sklearn.cluster import KMeans
kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
labels = kmeans_model.labels_
labels_true=y
labels_pred=labels
metrics.adjusted_rand_score(labels_true, labels_pred)
metrics.adjusted_mutual_info_score(labels_true, labels_pred)
metrics.homogeneity_score(labels_true, labels_pred)
metrics.completeness_score(labels_true, labels_pred)
metrics.v_measure_score(labels_true, labels_pred)
metrics.silhouette_score(X, labels, metric='euclidean')
Explanation: Given knowledge of the ground-truth class assignments labels_true and our clustering algorithm's assignments of the same samples labels_pred, we can evaluate the clustering quality (see https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation):
- The adjusted Rand index is a function that measures the similarity of the two assignments.
- The adjusted Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations.
There are two desirable objectives for any cluster assignment:
- homogeneity: each cluster contains only members of a single class.
- completeness: all members of a given class are assigned to the same cluster.
We can turn those concepts into scores, homogeneity_score and completeness_score. Both are bounded below by 0.0 and above by 1.0 (higher is better). Their harmonic mean, called the V-measure, is computed by v_measure_score.
The Silhouette Coefficient is defined for each sample and is composed of two scores:
a: The mean distance between a sample and all other points in the same class.
b: The mean distance between a sample and all other points in the next nearest cluster.
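To make the V-measure relationship concrete, here is a small hedged sketch (using scikit-learn; the toy label vectors are illustrative) checking that v_measure_score is exactly the harmonic mean of the homogeneity and completeness scores:

```python
from sklearn import metrics

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

h = metrics.homogeneity_score(labels_true, labels_pred)
c = metrics.completeness_score(labels_true, labels_pred)
v = metrics.v_measure_score(labels_true, labels_pred)

harmonic_mean = 2 * h * c / (h + c)  # should equal the V-measure
```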
End of explanation |
8,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TensorFlow Probability Case Study
Step2: Step 1
Step3: Generate some synthetic observations
Note that TensorFlow Probability uses the convention that the initial dimension(s) of your data represent sample indices, and the final dimension(s) of your data represent the dimensionality of your samples.
Here we want 100 samples, each of which is a vector of length 2. We'll generate an array my_data with shape (100, 2). my_data[i,
Step4: Sanity check the observations
One potential source of bugs is messing up your synthetic data! Let's do some simple checks.
Step5: Ok, our samples look reasonable. Next step.
Step 2
Step6: We're going to use a Wishart prior for the precision matrix since there's an analytical solution for the posterior (see Wikipedia's handy table of conjugate priors).
The Wishart distribution has 2 parameters
Step7: The Wishart distribution is the conjugate prior for estimating the precision matrix of a multivariate normal with known mean $\mu$.
Suppose the prior Wishart parameters are $\nu, V$ and that we have $n$ observations of our multivariate normal, $x_1, \ldots, x_n$. The posterior parameters are $n + \nu, \left(V^{-1} + \sum_{i=1}^n (x_i-\mu)(x_i-\mu)^T \right)^{-1}$.
Step8: A quick plot of the posteriors and the true values. Note that the posteriors are close to the sample posteriors but are shrunk a bit toward the identity. Note also that the true values are pretty far from the mode of the posterior - presumably this is because prior isn't a very good match for our data. In a real problem we'd likely do better with something like a scaled inverse Wishart prior for the covariance (see, for example, Andrew Gelman's commentary on the subject), but then we wouldn't have a nice analytic posterior.
Step9: Step 3
Step10: Data log likelihood
First we'll implement the data log likelihood function.
Note
Step11: One key difference from the NumPy case is that our TensorFlow likelihood function will need to handle vectors of precision matrices rather than just single matrices. Vectors of parameters will be used when we sample from multiple chains.
We'll create a distribution object that works with a batch of precision matrices (i.e. one matrix per chain).
When computing log probabilities of our data, we'll need our data to be replicated in the same manner as our parameters so that there is one copy per batch variable. The shape of our replicated data will need to be as follows
Step12: Tip
Step13: Prior log likelihood
The prior is easier since we don't have to worry about data replication.
Step14: Build the joint log likelihood function
The data log likelihood function above depends on our observations, but the sampler won't have those. We can get rid of the dependency without using a global variable by using a [closure](https
Step15: Step 4
Step16: Identifying the problem
InvalidArgumentError (see above for traceback)
Step17: Why this fails
The very first new parameter value the sampler tries is an asymmetrical matrix. That causes the Cholesky decomposition to fail, since it's only defined for symmetrical (and positive definite) matrices.
The problem here is that our parameter of interest is a precision matrix, and precision matrices must be real, symmetric, and positive definite. The sampler doesn't know anything about this constraint (except possibly through gradients), so it is entirely possible that the sampler will propose an invalid value, leading to an exception, particularly if the step size is large.
With the Hamiltonian Monte Carlo sampler, we may be able to work around the problem by using a very small step size, since the gradient should keep the parameters away from invalid regions, but small step sizes mean slow convergence. With a Metropolis-Hastings sampler, which doesn't know anything about gradients, we're doomed.
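The failure mode is easy to reproduce in plain NumPy (a sketch; the matrices are illustrative): the Cholesky decomposition simply raises once a proposed matrix is not positive definite.

```python
import numpy as np

spd = np.array([[2.0, 0.5], [0.5, 1.0]])         # symmetric positive definite
indefinite = np.array([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues 3 and -1

L = np.linalg.cholesky(spd)                      # works: spd == L @ L.T

try:
    np.linalg.cholesky(indefinite)
    cholesky_failed = False
except np.linalg.LinAlgError:
    # This is exactly the error a sampler triggers with an invalid proposal.
    cholesky_failed = True
```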
Version 2
Step18: The TransformedDistribution class automates the process of applying a bijector to a distribution and making the necessary Jacobian correction to log_prob(). Our new prior becomes
Step19: We just need to invert the transform for our data log likelihood
Step20: Again we wrap our new functions in a closure.
Step21: Sampling
Now that we don't have to worry about our sampler blowing up because of invalid parameter values, let's generate some real samples.
The sampler works with the unconstrained version of our parameters, so we need to transform our initial value to its unconstrained version. The samples that we generate will also all be in their unconstrained form, so we need to transform them back. Bijectors are vectorized, so it's easy to do so.
Step22: Let's compare the mean of our sampler's output to the analytic posterior mean!
Step23: We're way off! Let's figure out why. First let's look at our samples.
Step24: Uh oh - it looks like they all have the same value. Let's figure out why.
The kernel_results_ variable is a named tuple that gives information about the sampler at each state. The is_accepted field is the key here.
Step25: All our samples were rejected! Presumably our step size was too big. I chose stepsize=0.1 purely arbitrarily.
Version 3
Step26: A quick check
Step27: Even better, our sample mean and standard deviation are close to what we expect from the analytic solution.
Step28: Checking for convergence
In general we won't have an analytic solution to check against, so we'll need to make sure the sampler has converged. One standard check is the Gelman-Rubin $\hat{R}$ statistic, which requires multiple sampling chains. $\hat{R}$ measures the degree to which variance (of the means) between chains exceeds what one would expect if the chains were identically distributed. Values of $\hat{R}$ close to 1 are used to indicate approximate convergence. See the source for details.
Step29: Model criticism
If we didn't have an analytic solution, this would be the time to do some real model criticism.
Here are a few quick histograms of the sample components relative to our ground truth (in red). Note that the samples have been shrunk from the sample precision matrix values toward the identity matrix prior.
Step30: Some scatterplots of pairs of precision components show that because of the correlation structure of the posterior, the true posterior values are not as unlikely as they appear from the marginals above.
Step31: Version 4
Step32: Sampling with the TransformedTransitionKernel
With the TransformedTransitionKernel, we no longer have to do manual transformations of our parameters. Our initial values and our samples are all precision matrices; we just have to pass in our unconstraining bijector(s) to the kernel and it takes care of all the transformations.
Step33: Checking convergence
The $\hat{R}$ convergence check looks good!
Step34: Comparison against the analytic posterior
Again let's check against the analytic posterior.
Step36: Optimizations
Now that we've got things running end-to-end, let's do a more optimized version. Speed doesn't matter too much for this example, but once matrices get larger, a few optimizations will make a big difference.
One big speed improvement we can make is to reparameterize in terms of the Cholesky decomposition. The reason is our data likelihood function requires both the covariance and the precision matrices. Matrix inversion is expensive ($O(n^3)$ for an $n \times n$ matrix), and if we parameterize in terms of either the covariance or the precision matrix, we need to do an inversion to get the other.
As a reminder, a real, positive-definite, symmetric matrix $M$ can be decomposed into a product of the form $M = L L^T$ where the matrix $L$ is lower triangular and has positive diagonals. Given the Cholesky decomposition of $M$, we can more efficiently obtain both $M$ (the product of a lower and an upper triangular matrix) and $M^{-1}$ (via back-substitution). The Cholesky factorization itself is not cheap to compute, but if we parameterize in terms of Cholesky factors, we only need to compute the Cholesky factorization of the initial parameter values.
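A NumPy/SciPy sketch of that back-substitution trick (the matrix is illustrative): given the Cholesky factor $L$ of $M$, $M^{-1}$ can be recovered from a triangular solve instead of a general matrix inverse.

```python
import numpy as np
from scipy.linalg import solve_triangular

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
L = np.linalg.cholesky(M)                # lower triangular: M == L @ L.T

# Back-substitution: solve L X = I for X = L^{-1} (cheap for triangular L),
# then M^{-1} = (L L^T)^{-1} = L^{-T} L^{-1}.
L_inv = solve_triangular(L, np.eye(2), lower=True)
M_inv = L_inv.T @ L_inv
```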
Using the Cholesky decomposition of the covariance matrix
TFP has a version of the multivariate normal distribution, MultivariateNormalTriL, that is parameterized in terms of the Cholesky factor of the covariance matrix. So if we were to parameterize in terms of the Cholesky factor of the covariance matrix, we could compute the data log likelihood efficiently. The challenge is in computing the prior log likelihood with similar efficiency.
If we had a version of the inverse Wishart distribution that worked with Cholesky factors of samples, we'd be all set. Alas, we don't. (The team would welcome code submissions, though!) As an alternative, we can use a version of the Wishart distribution that works with Cholesky factors of samples together with a chain of bijectors.
At the moment, we're missing a few stock bijectors to make things really efficient, but I want to show the process as an exercise and a useful illustration of the power of TFP's bijectors.
A Wishart distribution that operates on Cholesky factors
The Wishart distribution has a useful flag, input_output_cholesky, that specifies that the input and output matrices should be Cholesky factors. It's more efficient and numerically advantageous to work with the Cholesky factors than full matrices, which is why this is desirable. An important point about the semantics of the flag
Step37: Building an inverse Wishart distribution
We have our covariance matrix $C$ decomposed into $C = L L^T$ where $L$ is lower triangular and has a positive diagonal. We want to know the probability of $L$ given that $C \sim W^{-1}(\nu, V)$ where $W^{-1}$ is the inverse Wishart distribution.
The inverse Wishart distribution has the property that if $C \sim W^{-1}(\nu, V)$, then the precision matrix $C^{-1} \sim W(\nu, V^{-1})$. So we can get the probability of $L$ via a TransformedDistribution that takes as parameters the Wishart distribution and a bijector that maps the Cholesky factor of precision matrix to a Cholesky factor of its inverse.
A straightforward (but not super efficient) way to get from the Cholesky factor of $C^{-1}$ to $L$ is to invert the Cholesky factor by back-solving, then forming the covariance matrix from these inverted factors, and then doing a Cholesky factorization.
Let the Cholesky decomposition of $C^{-1} = M M^T$. $M$ is lower triangular, so we can invert it using the MatrixInverseTriL bijector.
Forming $C$ from $M^{-1}$ is a little tricky
Step38: Our final distribution
Our inverse Wishart operating on Cholesky factors is as follows
Step39: We've got our inverse Wishart, but it's kind of slow because we have to do a Cholesky decomposition in the bijector. Let's return to the precision matrix parameterization and see what we can do there for optimization.
Final(!) Version
Step41: Optimized data log likelihood
We can use TFP's bijectors to build our own version of the multivariate normal. Here is the key idea
Step42: Combined log likelihood function
Now we combine our prior and data log likelihood functions in a closure.
Step43: Constraining bijector
Our samples are constrained to be valid Cholesky factors, which means they must be lower triangular matrices with positive diagonals. The TransformedTransitionKernel needs a bijector that maps unconstrained tensors to/from tensors with our desired constraints. We've removed the Cholesky decomposition from the bijector's inverse, which speeds things up.
Step44: Initial values
We generate a tensor of initial values. We're working with Cholesky factors, so we generate some Cholesky factor initial values.
Step45: Sampling
We sample N_CHAINS chains using the TransformedTransitionKernel.
Step46: Convergence check
A quick convergence check looks good
Step47: Comparing results to the analytic posterior
Step48: And again, the sample means and standard deviations match those of the analytic posterior. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import collections
import math
import os
import time
import numpy as np
import pandas as pd
import scipy
import scipy.stats
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
Explanation: TensorFlow Probability Case Study: Covariance Estimation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/TensorFlow_Probability_Case_Study_Covariance_Estimation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
I wrote this notebook as a case study to learn TensorFlow Probability. The problem I chose to solve is estimating a covariance matrix for samples of a 2-D mean 0 Gaussian random variable. The problem has a couple of nice features:
If we use an inverse Wishart prior for the covariance (a common approach), the problem has an analytic solution, so we can check our results.
The problem involves sampling a constrained parameter, which adds some interesting complexity.
The most straightforward solution is not the fastest one, so there is some optimization work to do.
I decided to write my experiences up as I went along. It took me a while to wrap my head around the finer points of TFP, so this notebook starts fairly simply and then gradually works up to more complicated TFP features. I ran into lots of problems along the way, and I've tried to capture both the processes that helped me identify them and the workarounds I eventually found. I've tried to include lots of detail (including lots of tests to make sure individual steps are correct).
Why learn TensorFlow Probability?
I found TensorFlow Probability appealing for my project for a few reasons:
TensorFlow Probability lets you prototype and develop complex models interactively in a notebook. You can break your code up into small pieces that you can test interactively and with unit tests.
Once you're ready to scale up, you can take advantage of all of the infrastructure we have in place for making TensorFlow run on multiple, optimized processors on multiple machines.
Finally, while I really like Stan, I find it quite difficult to debug. You have to write all your modeling code in a standalone language that has very few tools for letting you poke at your code, inspect intermediate states, and so on.
The downside is that TensorFlow Probability is much newer than Stan and PyMC3, so the documentation is a work in progress, and there's lots of functionality that's yet to be built. Happily, I found TFP's foundation to be solid, and it's designed in a modular way that allows one to extend its functionality fairly straightforwardly. In this notebook, in addition to solving the case study, I'll show some ways to go about extending TFP.
Who this is for
I'm assuming that readers are coming to this notebook with some important prerequisites. You should:
Know the basics of Bayesian inference. (If you don't, a really nice first book is Statistical Rethinking)
Have some familiarity with an MCMC sampling library, e.g. Stan / PyMC3 / BUGS
Have a solid grasp of NumPy (One good intro is Python for Data Analysis)
Have at least passing familiarity with TensorFlow, but not necessarily expertise. (Learning TensorFlow is good, but TensorFlow's rapid evolution means that most books will be a bit dated. Stanford's CS20 course is also good.)
First attempt
Here's my first attempt at the problem. Spoiler: my solution doesn't work, and it's going to take several attempts to get things right! Although the process takes awhile, each attempt below has been useful for learning a new part of TFP.
One note: TFP doesn't currently implement the inverse Wishart distribution (we'll see at the end how to roll our own inverse Wishart), so instead I'll change the problem to that of estimating a precision matrix using a Wishart prior.
End of explanation
# We're assuming 2-D data with a known true mean of (0, 0)
true_mean = np.zeros([2], dtype=np.float32)
# We'll make the 2 coordinates correlated
true_cor = np.array([[1.0, 0.9], [0.9, 1.0]], dtype=np.float32)
# And we'll give the 2 coordinates different variances
true_var = np.array([4.0, 1.0], dtype=np.float32)
# Combine the variances and correlations into a covariance matrix
true_cov = np.expand_dims(np.sqrt(true_var), axis=1).dot(
np.expand_dims(np.sqrt(true_var), axis=1).T) * true_cor
# We'll be working with precision matrices, so we'll go ahead and compute the
# true precision matrix here
true_precision = np.linalg.inv(true_cov)
# Here's our resulting covariance matrix
print(true_cov)
# Verify that it's positive definite, since np.random.multivariate_normal
# complains about it not being positive definite for some reason.
# (Note that I'll be including a lot of sanity checking code in this notebook -
# it's a *huge* help for debugging)
print('eigenvalues: ', np.linalg.eigvals(true_cov))
Explanation: Step 1: get the observations together
My data here are all synthetic, so this is going to seem a bit tidier than a real-world example. However, there's no reason you can't generate some synthetic data of your own.
Tip: Once you've decided on the form of your model, you can pick some parameter values and use your chosen model to generate some synthetic data. As a sanity check of your implementation, you can then verify that your estimates include the true values of the parameters you chose. To make your debugging / testing cycle faster, you might consider a simplified version of your model (e.g. use fewer dimensions or fewer samples).
Tip: It's easiest to work with your observations as NumPy arrays. One important thing to note is that NumPy by default uses float64's, while TensorFlow by default uses float32's.
In general, TensorFlow operations want all arguments to have the same type, and you have to do explicit data casting to change types. If you use float64 observations, you'll need to add in a lot of cast operations. NumPy, in contrast, will take care of casting automatically. Hence, it is much easier to convert your Numpy data into float32 than it is to force TensorFlow to use float64.
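A two-line illustration of the dtype point (names are arbitrary):

```python
import numpy as np

obs = np.random.randn(5, 2)        # NumPy defaults to float64
obs32 = obs.astype(np.float32)     # the dtype TensorFlow ops default to
```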
Choose some parameter values
End of explanation
# Set the seed so the results are reproducible.
np.random.seed(123)
# Now generate some observations of our random variable.
# (Note that I'm suppressing a bunch of spurious warnings about the covariance
# matrix not being positive semidefinite via check_valid='ignore' because it
# really is positive definite!)
my_data = np.random.multivariate_normal(
mean=true_mean, cov=true_cov, size=100,
check_valid='ignore').astype(np.float32)
my_data.shape
Explanation: Generate some synthetic observations
Note that TensorFlow Probability uses the convention that the initial dimension(s) of your data represent sample indices, and the final dimension(s) of your data represent the dimensionality of your samples.
Here we want 100 samples, each of which is a vector of length 2. We'll generate an array my_data with shape (100, 2). my_data[i, :] is the $i$th sample, and it is a vector of length 2.
(Remember to make my_data have type float32!)
End of explanation
# Do a scatter plot of the observations to make sure they look like what we
# expect (higher variance on the x-axis, y values strongly correlated with x)
plt.scatter(my_data[:, 0], my_data[:, 1], alpha=0.75)
plt.show()
print('mean of observations:', np.mean(my_data, axis=0))
print('true mean:', true_mean)
print('covariance of observations:\n', np.cov(my_data, rowvar=False))
print('true covariance:\n', true_cov)
Explanation: Sanity check the observations
One potential source of bugs is messing up your synthetic data! Let's do some simple checks.
End of explanation
def log_lik_data_numpy(precision, data):
# np.linalg.inv is a really inefficient way to get the covariance matrix, but
# remember we don't care about speed here
cov = np.linalg.inv(precision)
rv = scipy.stats.multivariate_normal(true_mean, cov)
return np.sum(rv.logpdf(data))
# test case: compute the log likelihood of the data given the true parameters
log_lik_data_numpy(true_precision, my_data)
Explanation: Ok, our samples look reasonable. Next step.
Step 2: Implement the likelihood function in NumPy
The main thing we'll need to write to perform our MCMC sampling in TF Probability is a log likelihood function. In general it's a bit trickier to write TF than NumPy, so I find it helpful to do an initial implementation in NumPy. I'm going to split the likelihood function into 2 pieces, a data likelihood function that corresponds to $P(data | parameters)$ and a prior likelihood function that corresponds to $P(parameters)$.
Note that these NumPy functions don't have to be super optimized / vectorized since the goal is just to generate some values for testing. Correctness is the key consideration!
First we'll implement the data log likelihood piece. That's pretty straightforward. The one thing to remember is that we're going to be working with precision matrices, so we'll parameterize accordingly.
End of explanation
PRIOR_DF = 3
PRIOR_SCALE = np.eye(2, dtype=np.float32) / PRIOR_DF
def log_lik_prior_numpy(precision):
rv = scipy.stats.wishart(df=PRIOR_DF, scale=PRIOR_SCALE)
return rv.logpdf(precision)
# test case: compute the prior for the true parameters
log_lik_prior_numpy(true_precision)
Explanation: We're going to use a Wishart prior for the precision matrix since there's an analytical solution for the posterior (see Wikipedia's handy table of conjugate priors).
The Wishart distribution has 2 parameters:
the number of degrees of freedom (labeled $\nu$ in Wikipedia)
a scale matrix (labeled $V$ in Wikipedia)
The mean for a Wishart distribution with parameters $\nu, V$ is $E[W] = \nu V$, and the variance is $\text{Var}(W_{ij}) = \nu(v_{ij}^2+v_{ii}v_{jj})$
Some useful intuition: You can generate a Wishart sample by generating $\nu$ independent draws $x_1 \ldots x_{\nu}$ from a multivariate normal random variable with mean 0 and covariance $V$ and then forming the sum $W = \sum_{i=1}^{\nu} x_i x_i^T$.
If you rescale Wishart samples by dividing them by $\nu$, you get the sample covariance matrix of the $x_i$. This sample covariance matrix should tend toward $V$ as $\nu$ increases. When $\nu$ is small, there is lots of variation in the sample covariance matrix, so small values of $\nu$ correspond to weaker priors and large values of $\nu$ correspond to stronger priors. Note that $\nu$ must be at least as large as the dimension of the space you're sampling or you'll generate singular matrices.
We'll use $\nu = 3$ so we have a weak prior, and we'll take $V = \frac{1}{\nu} I$ which will pull our covariance estimate toward the identity (recall that the mean is $\nu V$).
End of explanation
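The constructive intuition above is easy to check numerically. This sketch uses an arbitrary $\nu$ and $V$ (demo values, not the prior parameters below), builds Wishart draws as sums of outer products, and compares the empirical mean against $\nu V$:

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 50
V = np.array([[1.0, 0.3], [0.3, 2.0]])  # arbitrary demo values, not the prior

# Build a Wishart draw constructively: the sum of nu outer products
# of independent N(0, V) draws.
def wishart_draw(rng, nu, V):
    x = rng.multivariate_normal(np.zeros(2), V, size=nu)
    return x.T @ x  # equals sum_i x_i x_i^T

draws = np.stack([wishart_draw(rng, nu, V) for _ in range(2000)])
print('empirical mean:\n', draws.mean(axis=0))
print('theoretical mean (nu * V):\n', nu * V)
```

Rescaling the draws by $\nu$ gives sample covariance matrices that concentrate around $V$ as $\nu$ grows, matching the intuition in the text.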
n = my_data.shape[0]
nu_prior = PRIOR_DF
v_prior = PRIOR_SCALE
nu_posterior = nu_prior + n
v_posterior = np.linalg.inv(np.linalg.inv(v_prior) + my_data.T.dot(my_data))
posterior_mean = nu_posterior * v_posterior
v_post_diag = np.expand_dims(np.diag(v_posterior), axis=1)
posterior_sd = np.sqrt(nu_posterior *
(v_posterior ** 2.0 + v_post_diag.dot(v_post_diag.T)))
Explanation: The Wishart distribution is the conjugate prior for estimating the precision matrix of a multivariate normal with known mean $\mu$.
Suppose the prior Wishart parameters are $\nu, V$ and that we have $n$ observations of our multivariate normal, $x_1, \ldots, x_n$. The posterior parameters are $n + \nu, \left(V^{-1} + \sum_{i=1}^n (x_i-\mu)(x_i-\mu)^T \right)^{-1}$.
End of explanation
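As a hedged Monte Carlo sanity check of this update (using synthetic standard-normal data rather than the notebook's my_data, and assuming $\mu = 0$): sampling from the resulting posterior Wishart and averaging should recover nu_posterior * v_posterior.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data so this check is self-contained.
rng = np.random.default_rng(0)
x = rng.multivariate_normal(np.zeros(2), np.eye(2), size=200)

nu_post = 3 + x.shape[0]  # prior df = 3, as in the notebook
v_post = np.linalg.inv(np.linalg.inv(np.eye(2) / 3) + x.T @ x)

# Draws from the analytic posterior should average to its mean, nu_post * v_post.
draws = stats.wishart(df=nu_post, scale=v_post).rvs(size=10000, random_state=1)
print('analytic posterior mean:\n', nu_post * v_post)
print('Monte Carlo mean:\n', draws.mean(axis=0))
```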
sample_precision = np.linalg.inv(np.cov(my_data, rowvar=False, bias=False))
fig, axes = plt.subplots(2, 2)
fig.set_size_inches(10, 10)
for i in range(2):
for j in range(2):
ax = axes[i, j]
loc = posterior_mean[i, j]
scale = posterior_sd[i, j]
xmin = loc - 3.0 * scale
xmax = loc + 3.0 * scale
x = np.linspace(xmin, xmax, 1000)
y = scipy.stats.norm.pdf(x, loc=loc, scale=scale)
ax.plot(x, y)
ax.axvline(true_precision[i, j], color='red', label='True precision')
ax.axvline(sample_precision[i, j], color='red', linestyle=':', label='Sample precision')
ax.set_title('precision[%d, %d]' % (i, j))
plt.legend()
plt.show()
Explanation: A quick plot of the posteriors and the true values. Note that the posteriors are close to the sample precision values but are shrunk a bit toward the identity. Note also that the true values are pretty far from the mode of the posterior - presumably this is because the prior isn't a very good match for our data. In a real problem we'd likely do better with something like a scaled inverse Wishart prior for the covariance (see, for example, Andrew Gelman's commentary on the subject), but then we wouldn't have a nice analytic posterior.
End of explanation
# case 1: get log probabilities for a vector of iid draws from a single
# normal distribution
norm1 = tfd.Normal(loc=0., scale=1.)
probs1 = norm1.log_prob(tf.constant([1., 0.5, 0.]))
# case 2: get log probabilities for a vector of independent draws from
# multiple normal distributions with different parameters. Note the vector
# values for loc and scale in the Normal constructor.
norm2 = tfd.Normal(loc=[0., 2., 4.], scale=[1., 1., 1.])
probs2 = norm2.log_prob(tf.constant([1., 0.5, 0.]))
print('iid draws from a single normal:', probs1.numpy())
print('draws from a batch of normals:', probs2.numpy())
Explanation: Step 3: Implement the likelihood function in TensorFlow
Spoiler: Our first attempt isn't going to work; we'll talk about why below.
Tip: use TensorFlow eager mode when developing your likelihood functions. Eager mode makes TF behave more like NumPy - everything executes immediately, so you can debug interactively instead of having to use Session.run(). See the notes here.
Preliminary: Distribution classes
TFP has a collection of distribution classes that we'll use to generate our log probabilities. One thing to note is that these classes work with tensors of samples rather than just single samples - this allows for vectorization and related speedups.
A distribution can work with a tensor of samples in 2 different ways. It's simplest to illustrate these 2 ways with a concrete example involving a distribution with a single scalar parameter. I'll use the Poisson distribution, which has a rate parameter.
* If we create a Poisson with a single value for the rate parameter, a call to its sample() method returns a single value. This value is called an event, and in this case the events are all scalars.
* If we create a Poisson with a tensor of values for the rate parameter, a call to its sample() method now returns multiple values, one for each value in the rate tensor. The object acts as a collection of independent Poissons, each with its own rate, and each of the values returned by a call to sample() corresponds to one of these Poissons. This collection of independent but not identically distributed events is called a batch.
* The sample() method takes a sample_shape parameter which defaults to an empty tuple. Passing a non-empty value for sample_shape results in sample() returning multiple batches. This collection of batches is called a sample.
A distribution's log_prob() method consumes data in a manner that parallels how sample() generates it. log_prob() returns probabilities for samples, i.e. for multiple, independent batches of events.
* If we have our Poisson object that was created with a scalar rate, each batch is a scalar, and if we pass in a tensor of samples, we'll get back a tensor of log probabilities of the same size.
* If we have our Poisson object that was created with a tensor of shape T of rate values, each batch is a tensor of shape T. If we pass in a tensor of samples of shape D, T, we'll get out a tensor of log probabilities of shape D, T.
Below are some examples that illustrate these cases. See this notebook for a more detailed tutorial on events, batches, and shapes.
End of explanation
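The Poisson description above can also be made concrete with SciPy, whose broadcasting of distribution parameters is a rough analogue of TFP batches. Treat this as intuition only - the APIs differ, and this is not a TFP equivalent:

```python
import numpy as np
from scipy import stats

# One Poisson (scalar rate): three iid events.
iid_logpmf = stats.poisson.logpmf([1, 2, 3], mu=4.0)
print(iid_logpmf.shape)   # (3,)

# A "batch" of three Poissons (vector of rates): one event per batch member.
batch_logpmf = stats.poisson.logpmf([1, 2, 3], mu=[2.0, 4.0, 6.0])
print(batch_logpmf.shape)  # (3,)

# Sample shape x batch shape: 5 draws from each of the 3 Poissons.
k = stats.poisson.rvs(mu=[2.0, 4.0, 6.0], size=(5, 3), random_state=0)
print(k.shape)             # (5, 3) = [sample shape, batch shape]
```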
VALIDATE_ARGS = True
ALLOW_NAN_STATS = False
Explanation: Data log likelihood
First we'll implement the data log likelihood function.
Note: distributions can validate their input, but they don't do so by default. We'll definitely want to turn on validation while we're debugging! Once everything is working, we can turn validation off if speed is really critical.
End of explanation
def log_lik_data(precisions, replicated_data):
n = tf.shape(precisions)[0] # number of precision matrices
# We're estimating a precision matrix; we have to invert to get log
# probabilities. Cholesky inversion should be relatively efficient,
# but as we'll see later, it's even better if we can avoid doing the Cholesky
# decomposition altogether.
precisions_cholesky = tf.linalg.cholesky(precisions)
covariances = tf.linalg.cholesky_solve(
precisions_cholesky, tf.linalg.eye(2, batch_shape=[n]))
rv_data = tfd.MultivariateNormalFullCovariance(
loc=tf.zeros([n, 2]),
covariance_matrix=covariances,
validate_args=VALIDATE_ARGS,
allow_nan_stats=ALLOW_NAN_STATS)
return tf.reduce_sum(rv_data.log_prob(replicated_data), axis=0)
# For our test, we'll use a tensor of 2 precision matrices.
# We'll need to replicate our data for the likelihood function.
# Remember, TFP wants the data to be structured so that the sample dimensions
# are first (100 here), then the batch dimensions (2 here because we have 2
# precision matrices), then the event dimensions (2 because we have 2-D
# Gaussian data). We'll need to add a middle dimension for the batch using
# expand_dims, and then we'll need to create 2 replicates in this new dimension
# using tile.
n = 2
replicated_data = np.tile(np.expand_dims(my_data, axis=1), reps=[1, 2, 1])
print(replicated_data.shape)
Explanation: One key difference from the NumPy case is that our TensorFlow likelihood function will need to handle vectors of precision matrices rather than just single matrices. Vectors of parameters will be used when we sample from multiple chains.
We'll create a distribution object that works with a batch of precision matrices (i.e. one matrix per chain).
When computing log probabilities of our data, we'll need our data to be replicated in the same manner as our parameters so that there is one copy per batch variable. The shape of our replicated data will need to be as follows:
[sample shape, batch shape, event shape]
In our case, the event shape is 2 (since we are working with 2-D Gaussians). The sample shape is 100, since we have 100 samples. The batch shape will just be the number of precision matrices we're working with. It's wasteful to replicate the data each time we call the likelihood function, so we'll replicate the data in advance and pass in the replicated version.
Note that this is an inefficient implementation: MultivariateNormalFullCovariance is expensive relative to some alternatives that we'll talk about in the optimization section at the end.
End of explanation
# check against the numpy implementation
precisions = np.stack([np.eye(2, dtype=np.float32), true_precision])
n = precisions.shape[0]
lik_tf = log_lik_data(precisions, replicated_data=replicated_data).numpy()
for i in range(n):
print(i)
print('numpy:', log_lik_data_numpy(precisions[i], my_data))
print('tensorflow:', lik_tf[i])
Explanation: Tip: One thing I've found to be extremely helpful is writing little sanity checks of my TensorFlow functions. It's really easy to mess up the vectorization in TF, so having the simpler NumPy functions around is a great way to verify the TF output. Think of these as little unit tests.
End of explanation
@tf.function(autograph=False)
def log_lik_prior(precisions):
rv_precision = tfd.WishartTriL(
df=PRIOR_DF,
scale_tril=tf.linalg.cholesky(PRIOR_SCALE),
validate_args=VALIDATE_ARGS,
allow_nan_stats=ALLOW_NAN_STATS)
return rv_precision.log_prob(precisions)
# check against the numpy implementation
precisions = np.stack([np.eye(2, dtype=np.float32), true_precision])
n = precisions.shape[0]
lik_tf = log_lik_prior(precisions).numpy()
for i in range(n):
print(i)
print('numpy:', log_lik_prior_numpy(precisions[i]))
print('tensorflow:', lik_tf[i])
Explanation: Prior log likelihood
The prior is easier since we don't have to worry about data replication.
End of explanation
def get_log_lik(data, n_chains=1):
# The data argument that is passed in will be available to the inner function
# below so it doesn't have to be passed in as a parameter.
replicated_data = np.tile(np.expand_dims(data, axis=1), reps=[1, n_chains, 1])
@tf.function(autograph=False)
def _log_lik(precision):
return log_lik_data(precision, replicated_data) + log_lik_prior(precision)
return _log_lik
Explanation: Build the joint log likelihood function
The data log likelihood function above depends on our observations, but the sampler won't have those. We can get rid of the dependency without using a global variable by using a [closure](https://en.wikipedia.org/wiki/Closure_(computer_programming)). Closures involve an outer function that builds an environment containing variables needed by an inner function.
End of explanation
@tf.function(autograph=False)
def sample():
tf.random.set_seed(123)
init_precision = tf.expand_dims(tf.eye(2), axis=0)
# Use expand_dims because we want to pass in a tensor of starting values
log_lik_fn = get_log_lik(my_data, n_chains=1)
# we'll just do a few steps here
num_results = 10
num_burnin_steps = 10
states = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
init_precision,
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=log_lik_fn,
step_size=0.1,
num_leapfrog_steps=3),
trace_fn=None,
seed=123)
return states
try:
states = sample()
except Exception as e:
# shorten the giant stack trace
lines = str(e).split('\n')
print('\n'.join(lines[:5]+['...']+lines[-3:]))
Explanation: Step 4: Sample
Ok, time to sample! To keep things simple, we'll just use 1 chain and we'll use the identity matrix as the starting point. We'll do things more carefully later.
Again, this isn't going to work - we'll get an exception.
End of explanation
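As background for what the HamiltonianMonteCarlo kernel is doing under the hood, here is a toy NumPy sketch of a single HMC transition for a 1-D standard normal target. This is purely illustrative - it is not TFP's implementation, and the target, step size, and leapfrog count are arbitrary choices:

```python
import numpy as np

# Toy HMC transition: leapfrog integration of Hamiltonian dynamics followed by
# a Metropolis accept/reject step. The 1-D standard normal target is arbitrary.
def hmc_step(q, rng, step_size=0.5, num_leapfrog_steps=5):
    grad = lambda x: -x                  # gradient of log N(0, 1) density
    p = rng.normal()                     # resample momentum
    q_new = q
    p_new = p + 0.5 * step_size * grad(q_new)      # half step for momentum
    for i in range(num_leapfrog_steps):
        q_new = q_new + step_size * p_new          # full step for position
        if i < num_leapfrog_steps - 1:
            p_new = p_new + step_size * grad(q_new)
    p_new = p_new + 0.5 * step_size * grad(q_new)  # final half step
    # Metropolis correction on the total energy (potential + kinetic)
    h_old = 0.5 * q ** 2 + 0.5 * p ** 2
    h_new = 0.5 * q_new ** 2 + 0.5 * p_new ** 2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

rng = np.random.default_rng(0)
q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q, rng)
    samples.append(q)
print(np.mean(samples), np.var(samples))  # roughly 0 and 1
```

tfp.mcmc.HamiltonianMonteCarlo does the same thing, but batched over chains, with gradients supplied by autodiff.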
def get_log_lik_verbose(data, n_chains=1):
# The data argument that is passed in will be available to the inner function
# below so it doesn't have to be passed in as a parameter.
replicated_data = np.tile(np.expand_dims(data, axis=1), reps=[1, n_chains, 1])
def _log_lik(precisions):
# An internal method we'll make into a TensorFlow operation via tf.py_func
def _print_precisions(precisions):
print('precisions:\n', precisions)
return False # operations must return something!
# Turn our method into a TensorFlow operation
print_op = tf.compat.v1.py_func(_print_precisions, [precisions], tf.bool)
# Assertions are also operations, and some care needs to be taken to ensure
# that they're executed
assert_op = tf.assert_equal(
precisions, tf.linalg.matrix_transpose(precisions),
message='not symmetrical', summarize=4, name='symmetry_check')
# The control_dependencies statement forces its arguments to be executed
# before subsequent operations
with tf.control_dependencies([print_op, assert_op]):
return (log_lik_data(precisions, replicated_data) +
log_lik_prior(precisions))
return _log_lik
@tf.function(autograph=False)
def sample():
tf.random.set_seed(123)
init_precision = tf.eye(2)[tf.newaxis, ...]
log_lik_fn = get_log_lik_verbose(my_data)
# we'll just do a few steps here
num_results = 10
num_burnin_steps = 10
states = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
init_precision,
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=log_lik_fn,
step_size=0.1,
num_leapfrog_steps=3),
trace_fn=None,
seed=123)
try:
states = sample()
except Exception as e:
# shorten the giant stack trace
lines = str(e).split('\n')
print('\n'.join(lines[:5]+['...']+lines[-3:]))
Explanation: Identifying the problem
InvalidArgumentError (see above for traceback): Cholesky decomposition was not successful. The input might not be valid. That's not super helpful. Let's see if we can find out more about what happened.
We'll print out the parameters for each step so we can see the value for which things fail
We'll add some assertions to guard against specific problems.
Assertions are tricky because they're TensorFlow operations, and we have to take care that they get executed and don't get optimized out of the graph. It's worth reading this overview of TensorFlow debugging if you aren't familiar with TF assertions. You can explicitly force assertions to execute using tf.control_dependencies (see the comments in the code below).
TensorFlow's native Print function has the same behavior as assertions - it's an operation, and you need to take some care to ensure that it executes. Print causes additional headaches when we're working in a notebook: its output is sent to stderr, and stderr isn't displayed in the cell. We'll use a trick here: instead of using tf.Print, we'll create our own TensorFlow print operation via tf.py_func. As with assertions, we have to make sure our method executes.
End of explanation
# Our transform has 3 stages that we chain together via composition:
precision_to_unconstrained = tfb.Chain([
# step 3: flatten the lower triangular portion of the matrix
tfb.Invert(tfb.FillTriangular(validate_args=VALIDATE_ARGS)),
# step 2: take the log of the diagonals
tfb.TransformDiagonal(tfb.Invert(tfb.Exp(validate_args=VALIDATE_ARGS))),
# step 1: decompose the precision matrix into its Cholesky factors
tfb.Invert(tfb.CholeskyOuterProduct(validate_args=VALIDATE_ARGS)),
])
# sanity checks
m = tf.constant([[1., 2.], [2., 8.]])
m_fwd = precision_to_unconstrained.forward(m)
m_inv = precision_to_unconstrained.inverse(m_fwd)
# bijectors handle tensors of values, too!
m2 = tf.stack([m, tf.eye(2)])
m2_fwd = precision_to_unconstrained.forward(m2)
m2_inv = precision_to_unconstrained.inverse(m2_fwd)
print('single input:')
print('m:\n', m.numpy())
print('precision_to_unconstrained(m):\n', m_fwd.numpy())
print('inverse(precision_to_unconstrained(m)):\n', m_inv.numpy())
print()
print('tensor of inputs:')
print('m2:\n', m2.numpy())
print('precision_to_unconstrained(m2):\n', m2_fwd.numpy())
print('inverse(precision_to_unconstrained(m2)):\n', m2_inv.numpy())
Explanation: Why this fails
The very first new parameter value the sampler tries is an asymmetrical matrix. That causes the Cholesky decomposition to fail, since it's only defined for symmetrical (and positive definite) matrices.
The problem here is that our parameter of interest is a precision matrix, and precision matrices must be real, symmetric, and positive definite. The sampler doesn't know anything about this constraint (except possibly through gradients), so it is entirely possible that the sampler will propose an invalid value, leading to an exception, particularly if the step size is large.
With the Hamiltonian Monte Carlo sampler, we may be able to work around the problem by using a very small step size, since the gradient should keep the parameters away from invalid regions, but small step sizes mean slow convergence. With a Metropolis-Hastings sampler, which doesn't know anything about gradients, we're doomed.
Version 2: reparametrizing to unconstrained parameters
There is a straightforward solution to the problem above: we can reparameterize our model such that the new parameters no longer have these constraints. TFP provides a useful set of tools - bijectors - for doing just that.
Reparameterization with bijectors
Our precision matrix must be real and symmetric; we want an alternative parameterization that doesn't have these constraints. A starting point is a Cholesky factorization of the precision matrix. The Cholesky factors are still constrained - they are lower triangular, and their diagonal elements must be positive. However, if we take the log of the diagonals of the Cholesky factor, the logs are no longer constrained to be positive, and then if we flatten the lower triangular portion into a 1-D vector, we no longer have the lower triangular constraint. The result in our case will be a length 3 vector with no constraints.
(The Stan manual has a great chapter on using transformations to remove various types of constraints on parameters.)
This reparameterization has little effect on our data log likelihood function - we just have to invert our transformation so we get back the precision matrix - but the effect on the prior is more complicated. We've specified that the probability of a given precision matrix is given by the Wishart distribution; what is the probability of our transformed matrix?
Recall that if we apply a monotonic function $g$ to a 1-D random variable $X$, $Y = g(X)$, the density for $Y$ is given by
$$
f_Y(y) = | \frac{d}{dy}(g^{-1}(y)) | f_X(g^{-1}(y))
$$
The derivative of $g^{-1}$ term accounts for the way that $g$ changes local volumes. For higher dimensional random variables, the corrective factor is the absolute value of the determinant of the Jacobian of $g^{-1}$ (see here).
We'll have to add a Jacobian of the inverse transform into our log prior likelihood function. Happily, TFP's Bijector class can take care of this for us.
The Bijector class represents invertible, smooth functions used for changing variables in probability density functions. Bijectors all have a forward() method that performs a transform, an inverse() method that inverts it, and forward_log_det_jacobian() and inverse_log_det_jacobian() methods that provide the Jacobian corrections we need when we reparameterize a pdf.
TFP provides a collection of useful bijectors that we can combine through composition via the Chain operator to form quite complicated transforms. In our case, we'll compose the following 3 bijectors (the operations in the chain are performed from right to left):
The first step of our transform is to perform a Cholesky factorization on the precision matrix. There isn't a Bijector class for that; however, the CholeskyOuterProduct bijector takes the product of 2 Cholesky factors. We can use the inverse of that operation using the Invert operator.
The next step is to take the log of the diagonal elements of the Cholesky factor. We accomplish this via the TransformDiagonal bijector and the inverse of the Exp bijector.
Finally we flatten the lower triangular portion of the matrix to a vector using the inverse of the FillTriangular bijector.
End of explanation
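The 1-D change-of-variables formula above is easy to verify numerically. Taking $Y = e^X$ with $X$ standard normal, $g^{-1}(y) = \log y$ and the Jacobian correction is $1/y$, so the transformed density should match SciPy's standard lognormal pdf:

```python
import numpy as np
from scipy import stats

# f_Y(y) = f_X(log y) * |d/dy log y| = f_X(log y) / y for Y = exp(X), X ~ N(0, 1)
y = np.linspace(0.1, 5.0, 50)
manual = stats.norm.pdf(np.log(y)) / y
reference = stats.lognorm.pdf(y, s=1.0)    # standard lognormal
print(np.max(np.abs(manual - reference)))  # ~0 up to float error
```

This is exactly the bookkeeping that the bijector's log-det-Jacobian methods automate in higher dimensions.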
def log_lik_prior_transformed(transformed_precisions):
rv_precision = tfd.TransformedDistribution(
tfd.WishartTriL(
df=PRIOR_DF,
scale_tril=tf.linalg.cholesky(PRIOR_SCALE),
validate_args=VALIDATE_ARGS,
allow_nan_stats=ALLOW_NAN_STATS),
bijector=precision_to_unconstrained,
validate_args=VALIDATE_ARGS)
return rv_precision.log_prob(transformed_precisions)
# Check against the numpy implementation. Note that when comparing, we need
# to add in the Jacobian correction.
precisions = np.stack([np.eye(2, dtype=np.float32), true_precision])
transformed_precisions = precision_to_unconstrained.forward(precisions)
lik_tf = log_lik_prior_transformed(transformed_precisions).numpy()
corrections = precision_to_unconstrained.inverse_log_det_jacobian(
transformed_precisions, event_ndims=1).numpy()
n = precisions.shape[0]
for i in range(n):
print(i)
print('numpy:', log_lik_prior_numpy(precisions[i]) + corrections[i])
print('tensorflow:', lik_tf[i])
Explanation: The TransformedDistribution class automates the process of applying a bijector to a distribution and making the necessary Jacobian correction to log_prob(). Our new prior becomes:
End of explanation
def log_lik_data_transformed(transformed_precisions, replicated_data):
# We recover the precision matrix by inverting our bijector. This is
# inefficient since we really want the Cholesky decomposition of the
# precision matrix, and the bijector has that in hand during the inversion,
# but we'll worry about efficiency later.
n = tf.shape(transformed_precisions)[0]
precisions = precision_to_unconstrained.inverse(transformed_precisions)
precisions_cholesky = tf.linalg.cholesky(precisions)
covariances = tf.linalg.cholesky_solve(
precisions_cholesky, tf.linalg.eye(2, batch_shape=[n]))
rv_data = tfd.MultivariateNormalFullCovariance(
loc=tf.zeros([n, 2]),
covariance_matrix=covariances,
validate_args=VALIDATE_ARGS,
allow_nan_stats=ALLOW_NAN_STATS)
return tf.reduce_sum(rv_data.log_prob(replicated_data), axis=0)
# sanity check
precisions = np.stack([np.eye(2, dtype=np.float32), true_precision])
transformed_precisions = precision_to_unconstrained.forward(precisions)
lik_tf = log_lik_data_transformed(
transformed_precisions, replicated_data).numpy()
for i in range(precisions.shape[0]):
print(i)
print('numpy:', log_lik_data_numpy(precisions[i], my_data))
print('tensorflow:', lik_tf[i])
Explanation: We just need to invert the transform for our data log likelihood:
precision = precision_to_unconstrained.inverse(transformed_precision)
Since we actually want the Cholesky factorization of the precision matrix, it would be more efficient to do just a partial inverse here. However, we'll leave optimization for later and will leave the partial inverse as an exercise for the reader.
End of explanation
def get_log_lik_transformed(data, n_chains=1):
# The data argument that is passed in will be available to the inner function
# below so it doesn't have to be passed in as a parameter.
replicated_data = np.tile(np.expand_dims(data, axis=1), reps=[1, n_chains, 1])
@tf.function(autograph=False)
def _log_lik_transformed(transformed_precisions):
return (log_lik_data_transformed(transformed_precisions, replicated_data) +
log_lik_prior_transformed(transformed_precisions))
return _log_lik_transformed
# make sure everything runs
log_lik_fn = get_log_lik_transformed(my_data)
m = tf.eye(2)[tf.newaxis, ...]
lik = log_lik_fn(precision_to_unconstrained.forward(m)).numpy()
print(lik)
Explanation: Again we wrap our new functions in a closure.
End of explanation
# We'll choose a proper random initial value this time
np.random.seed(123)
initial_value_cholesky = np.array(
[[0.5 + np.random.uniform(), 0.0],
[-0.5 + np.random.uniform(), 0.5 + np.random.uniform()]],
dtype=np.float32)
initial_value = initial_value_cholesky.dot(
initial_value_cholesky.T)[np.newaxis, ...]
# The sampler works with unconstrained values, so we'll transform our initial
# value
initial_value_transformed = precision_to_unconstrained.forward(
initial_value).numpy()
# Sample!
@tf.function(autograph=False)
def sample():
tf.random.set_seed(123)
log_lik_fn = get_log_lik_transformed(my_data, n_chains=1)
num_results = 1000
num_burnin_steps = 1000
states, is_accepted = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
initial_value_transformed,
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=log_lik_fn,
step_size=0.1,
num_leapfrog_steps=3),
trace_fn=lambda _, pkr: pkr.is_accepted,
seed=123)
# transform samples back to their constrained form
precision_samples = [precision_to_unconstrained.inverse(s) for s in states]
return states, precision_samples, is_accepted
states, precision_samples, is_accepted = sample()
Explanation: Sampling
Now that we don't have to worry about our sampler blowing up because of invalid parameter values, let's generate some real samples.
The sampler works with the unconstrained version of our parameters, so we need to transform our initial value to its unconstrained version. The samples that we generate will also all be in their unconstrained form, so we need to transform them back. Bijectors are vectorized, so it's easy to do so.
End of explanation
print('True posterior mean:\n', posterior_mean)
print('Sample mean:\n', np.mean(np.reshape(precision_samples, [-1, 2, 2]), axis=0))
Explanation: Let's compare the mean of our sampler's output to the analytic posterior mean!
End of explanation
np.reshape(precision_samples, [-1, 2, 2])
Explanation: We're way off! Let's figure out why. First let's look at our samples.
End of explanation
# Look at the acceptance for the last 100 samples
print(np.squeeze(is_accepted)[-100:])
print('Fraction of samples accepted:', np.mean(np.squeeze(is_accepted)))
Explanation: Uh oh - it looks like they all have the same value. Let's figure out why.
The sampler's kernel results are a named tuple that gives information about the sampler at each step. The is_accepted field is the key here.
End of explanation
# The number of chains is determined by the shape of the initial values.
# Here we'll generate 3 chains, so we'll need a tensor of 3 initial values.
N_CHAINS = 3
np.random.seed(123)
initial_values = []
for i in range(N_CHAINS):
initial_value_cholesky = np.array(
[[0.5 + np.random.uniform(), 0.0],
[-0.5 + np.random.uniform(), 0.5 + np.random.uniform()]],
dtype=np.float32)
initial_values.append(initial_value_cholesky.dot(initial_value_cholesky.T))
initial_values = np.stack(initial_values)
initial_values_transformed = precision_to_unconstrained.forward(
initial_values).numpy()
@tf.function(autograph=False)
def sample():
tf.random.set_seed(123)
log_lik_fn = get_log_lik_transformed(my_data)
# Tuning acceptance rates:
dtype = np.float32
num_burnin_iter = 3000
num_warmup_iter = int(0.8 * num_burnin_iter)
num_chain_iter = 2500
# Set the target average acceptance ratio for the HMC as suggested by
# Beskos et al. (2013):
# https://projecteuclid.org/download/pdfview_1/euclid.bj/1383661192
target_accept_rate = 0.651
# Initialize the HMC sampler.
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=log_lik_fn,
step_size=0.01,
num_leapfrog_steps=3)
# Adapt the step size using standard adaptive MCMC procedure. See Section 4.2
# of Andrieu and Thoms (2008):
# http://www4.ncsu.edu/~rsmith/MA797V_S12/Andrieu08_AdaptiveMCMC_Tutorial.pdf
adapted_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=hmc,
num_adaptation_steps=num_warmup_iter,
target_accept_prob=target_accept_rate)
states, is_accepted = tfp.mcmc.sample_chain(
num_results=num_chain_iter,
num_burnin_steps=num_burnin_iter,
current_state=initial_values_transformed,
kernel=adapted_kernel,
trace_fn=lambda _, pkr: pkr.inner_results.is_accepted,
parallel_iterations=1)
# transform samples back to their constrained form
precision_samples = precision_to_unconstrained.inverse(states)
return states, precision_samples, is_accepted
states, precision_samples, is_accepted = sample()
Explanation: All our samples were rejected! Presumably our step size was too big. I chose step_size=0.1 purely arbitrarily.
Version 3: sampling with an adaptive step size
Since sampling with my arbitrary choice of step size failed, we have a few agenda items:
1. implement an adaptive step size, and
2. perform some convergence checks.
There is some nice sample code in tensorflow_probability/python/mcmc/hmc.py for implementing adaptive step sizes. I've adapted it below.
Note that in the graph-mode (TF1) version of this code, there's a separate sess.run() statement for each step. This is really helpful for debugging, since it allows us to easily add some per-step diagnostics if need be. For example, we can show incremental progress, time each step, etc.
Tip: One apparently common way to mess up your sampling in graph mode is to have your graph grow in the loop. (The reason for finalizing the graph before the session is run is to prevent just such problems.) If you haven't been using finalize(), though, a useful debugging check if your code slows to a crawl is to print out the graph size at each step via len(mygraph.get_operations()) - if the length increases, you're probably doing something bad.
We're going to run 3 independent chains here. Doing some comparisons between the chains will help us check for convergence.
End of explanation
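The core idea of the step-size adaptation is simple enough to sketch outside TFP: nudge log(step_size) toward the target acceptance rate with a decaying gain, Robbins-Monro style. In this toy sketch the acceptance curve is a made-up stand-in, not real HMC acceptance probabilities:

```python
import numpy as np

# Robbins-Monro style step-size adaptation sketch. The acceptance "model"
# exp(-5 * step_size) is a made-up stand-in: bigger steps accept less often.
target_accept = 0.651
step_size = 0.1
for t in range(1, 501):
    accept_prob = min(1.0, np.exp(-5.0 * step_size))
    # A decaying gain (t ** -0.75) ensures the adaptation settles down.
    step_size = np.exp(
        np.log(step_size) + (accept_prob - target_accept) / t ** 0.75)
print(step_size, np.exp(-5.0 * step_size))  # acceptance ends near the target
```

SimpleStepSizeAdaptation uses a more careful scheme and only adapts during warmup, but the feedback loop is the same.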
print(np.mean(is_accepted))
Explanation: A quick check: our acceptance rate during our sampling is close to our target of 0.651.
End of explanation
precision_samples_reshaped = np.reshape(precision_samples, [-1, 2, 2])
print('True posterior mean:\n', posterior_mean)
print('Mean of samples:\n', np.mean(precision_samples_reshaped, axis=0))
print('True posterior standard deviation:\n', posterior_sd)
print('Standard deviation of samples:\n', np.std(precision_samples_reshaped, axis=0))
Explanation: Even better, our sample mean and standard deviation are close to what we expect from the analytic solution.
End of explanation
r_hat = tfp.mcmc.potential_scale_reduction(precision_samples).numpy()
print(r_hat)
Explanation: Checking for convergence
In general we won't have an analytic solution to check against, so we'll need to make sure the sampler has converged. One standard check is the Gelman-Rubin $\hat{R}$ statistic, which requires multiple sampling chains. $\hat{R}$ measures the degree to which variance (of the means) between chains exceeds what one would expect if the chains were identically distributed. Values of $\hat{R}$ close to 1 are used to indicate approximate convergence. See the source for details.
End of explanation
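For intuition about what $\hat{R}$ measures, here is a minimal NumPy sketch of the classic (non-rank-normalized) statistic on synthetic scalar chains; tfp.mcmc.potential_scale_reduction implements a more careful version:

```python
import numpy as np

# Minimal Gelman-Rubin R-hat for scalar chains of shape
# [num_samples, num_chains]; a simplified sketch, not TFP's implementation.
def r_hat(chains):
    n, m = chains.shape
    chain_means = chains.mean(axis=0)
    W = chains.var(axis=0, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_est = (n - 1) / n * W + B / n
    return np.sqrt(var_est / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(1000, 4))              # chains that agree
stuck = mixed + np.array([0., 0., 0., 3.])      # one chain far from the others
print(r_hat(mixed))   # close to 1
print(r_hat(stuck))   # well above 1
```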
fig, axes = plt.subplots(2, 2, sharey=True)
fig.set_size_inches(8, 8)
for i in range(2):
for j in range(2):
ax = axes[i, j]
ax.hist(precision_samples_reshaped[:, i, j])
ax.axvline(true_precision[i, j], color='red',
label='True precision')
ax.axvline(sample_precision[i, j], color='red', linestyle=':',
label='Sample precision')
ax.set_title('precision[%d, %d]' % (i, j))
plt.tight_layout()
plt.legend()
plt.show()
Explanation: Model criticism
If we didn't have an analytic solution, this would be the time to do some real model criticism.
Here are a few quick histograms of the sample components relative to our ground truth (in red). Note that the samples have been shrunk from the sample precision matrix values toward the identity matrix prior.
End of explanation
fig, axes = plt.subplots(4, 4)
fig.set_size_inches(12, 12)
for i1 in range(2):
for j1 in range(2):
index1 = 2 * i1 + j1
for i2 in range(2):
for j2 in range(2):
index2 = 2 * i2 + j2
ax = axes[index1, index2]
ax.scatter(precision_samples_reshaped[:, i1, j1],
precision_samples_reshaped[:, i2, j2], alpha=0.1)
ax.axvline(true_precision[i1, j1], color='red')
ax.axhline(true_precision[i2, j2], color='red')
ax.axvline(sample_precision[i1, j1], color='red', linestyle=':')
ax.axhline(sample_precision[i2, j2], color='red', linestyle=':')
ax.set_title('(%d, %d) vs (%d, %d)' % (i1, j1, i2, j2))
plt.tight_layout()
plt.show()
Explanation: Some scatterplots of pairs of precision components show that because of the correlation structure of the posterior, the true posterior values are not as unlikely as they appear from the marginals above.
End of explanation
# The bijector we need for the TransformedTransitionKernel is the inverse of
# the one we used above
unconstrained_to_precision = tfb.Chain([
# step 3: take the product of Cholesky factors
tfb.CholeskyOuterProduct(validate_args=VALIDATE_ARGS),
# step 2: exponentiate the diagonals
tfb.TransformDiagonal(tfb.Exp(validate_args=VALIDATE_ARGS)),
# step 1: map a vector to a lower triangular matrix
tfb.FillTriangular(validate_args=VALIDATE_ARGS),
])
# quick sanity check
m = [[1., 2.], [2., 8.]]
m_inv = unconstrained_to_precision.inverse(m).numpy()
m_fwd = unconstrained_to_precision.forward(m_inv).numpy()
print('m:\n', m)
print('unconstrained_to_precision.inverse(m):\n', m_inv)
print('forward(unconstrained_to_precision.inverse(m)):\n', m_fwd)
Explanation: Version 4: simpler sampling of constrained parameters
Bijectors made sampling the precision matrix straightforward, but there was a fair amount of manual converting to and from the unconstrained representation. There is an easier way!
The TransformedTransitionKernel
The TransformedTransitionKernel simplifies this process. It wraps your sampler and handles all the conversions. It takes as an argument a list of bijectors that map unconstrained parameter values to constrained ones. So here we need the inverse of the precision_to_unconstrained bijector we used above. We could just use tfb.Invert(precision_to_unconstrained), but that would involve taking inverses of inverses (TensorFlow isn't smart enough to simplify tfb.Invert(tfb.Invert()) to tfb.Identity()), so instead we'll just write a new bijector.
Constraining bijector
End of explanation
@tf.function(autograph=False)
def sample():
tf.random.set_seed(123)
log_lik_fn = get_log_lik(my_data)
# Tuning acceptance rates:
dtype = np.float32
num_burnin_iter = 3000
num_warmup_iter = int(0.8 * num_burnin_iter)
num_chain_iter = 2500
# Set the target average acceptance ratio for the HMC as suggested by
# Beskos et al. (2013):
# https://projecteuclid.org/download/pdfview_1/euclid.bj/1383661192
target_accept_rate = 0.651
# Initialize the HMC sampler.
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=log_lik_fn,
step_size=0.01,
num_leapfrog_steps=3)
ttk = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=hmc, bijector=unconstrained_to_precision)
# Adapt the step size using standard adaptive MCMC procedure. See Section 4.2
# of Andrieu and Thoms (2008):
# http://www4.ncsu.edu/~rsmith/MA797V_S12/Andrieu08_AdaptiveMCMC_Tutorial.pdf
adapted_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=ttk,
num_adaptation_steps=num_warmup_iter,
target_accept_prob=target_accept_rate)
states = tfp.mcmc.sample_chain(
num_results=num_chain_iter,
num_burnin_steps=num_burnin_iter,
current_state=initial_values,
kernel=adapted_kernel,
trace_fn=None,
parallel_iterations=1)
# the TransformedTransitionKernel already returns samples in their constrained
# (precision matrix) form, so no extra transformation is needed here
return states
precision_samples = sample()
Explanation: Sampling with the TransformedTransitionKernel
With the TransformedTransitionKernel, we no longer have to do manual transformations of our parameters. Our initial values and our samples are all precision matrices; we just have to pass in our unconstraining bijector(s) to the kernel and it takes care of all the transformations.
End of explanation
r_hat = tfp.mcmc.potential_scale_reduction(precision_samples).numpy()
print(r_hat)
Explanation: Checking convergence
The $\hat{R}$ convergence check looks good!
End of explanation
# The output samples have shape [n_steps, n_chains, 2, 2]
# Flatten them to [n_steps * n_chains, 2, 2] via reshape:
precision_samples_reshaped = np.reshape(precision_samples, [-1, 2, 2])
print('True posterior mean:\n', posterior_mean)
print('Mean of samples:\n', np.mean(precision_samples_reshaped, axis=0))
print('True posterior standard deviation:\n', posterior_sd)
print('Standard deviation of samples:\n', np.std(precision_samples_reshaped, axis=0))
Explanation: Comparison against the analytic posterior
Again let's check against the analytic posterior.
End of explanation
# An optimized Wishart distribution that has been transformed to operate on
# Cholesky factors instead of full matrices. Note that we gain a modest
# additional speedup by specifying the Cholesky factor of the scale matrix
# (i.e. by passing in the scale_tril parameter instead of scale).
class CholeskyWishart(tfd.TransformedDistribution):
"""Wishart distribution reparameterized to use Cholesky factors."""
def __init__(self,
df,
scale_tril,
validate_args=False,
allow_nan_stats=True,
name='CholeskyWishart'):
# Wishart has a bunch of methods that we want to support but not
# implement. We'll subclass TransformedDistribution here to take care of
# those. We'll override the few for which speed is critical and implement
# them with a separate Wishart for which input_output_cholesky=True
super(CholeskyWishart, self).__init__(
distribution=tfd.WishartTriL(
df=df,
scale_tril=scale_tril,
input_output_cholesky=False,
validate_args=validate_args,
allow_nan_stats=allow_nan_stats),
bijector=tfb.Invert(tfb.CholeskyOuterProduct()),
validate_args=validate_args,
name=name
)
# Here's the Cholesky distribution we'll use for log_prob() and sample()
self.cholesky = tfd.WishartTriL(
df=df,
scale_tril=scale_tril,
input_output_cholesky=True,
validate_args=validate_args,
allow_nan_stats=allow_nan_stats)
def _log_prob(self, x):
return (self.cholesky.log_prob(x) +
self.bijector.inverse_log_det_jacobian(x, event_ndims=2))
def _sample_n(self, n, seed=None):
return self.cholesky._sample_n(n, seed)
# some checks
PRIOR_SCALE_CHOLESKY = np.linalg.cholesky(PRIOR_SCALE)
@tf.function(autograph=False)
def compute_log_prob(m):
w_transformed = tfd.TransformedDistribution(
tfd.WishartTriL(df=PRIOR_DF, scale_tril=PRIOR_SCALE_CHOLESKY),
bijector=tfb.Invert(tfb.CholeskyOuterProduct()))
w_optimized = CholeskyWishart(
df=PRIOR_DF, scale_tril=PRIOR_SCALE_CHOLESKY)
log_prob_transformed = w_transformed.log_prob(m)
log_prob_optimized = w_optimized.log_prob(m)
return log_prob_transformed, log_prob_optimized
for matrix in [np.eye(2, dtype=np.float32),
np.array([[1., 0.], [2., 8.]], dtype=np.float32)]:
log_prob_transformed, log_prob_optimized = [
t.numpy() for t in compute_log_prob(matrix)]
print('Transformed Wishart:', log_prob_transformed)
print('Optimized Wishart', log_prob_optimized)
Explanation: Optimizations
Now that we've got things running end-to-end, let's do a more optimized version. Speed doesn't matter too much for this example, but once matrices get larger, a few optimizations will make a big difference.
One big speed improvement we can make is to reparameterize in terms of the Cholesky decomposition. The reason is our data likelihood function requires both the covariance and the precision matrices. Matrix inversion is expensive ($O(n^3)$ for an $n \times n$ matrix), and if we parameterize in terms of either the covariance or the precision matrix, we need to do an inversion to get the other.
As a reminder, a real, positive-definite, symmetric matrix $M$ can be decomposed into a product of the form $M = L L^T$ where the matrix $L$ is lower triangular and has positive diagonals. Given the Cholesky decomposition of $M$, we can more efficiently obtain both $M$ (the product of a lower and an upper triangular matrix) and $M^{-1}$ (via back-substitution). The Cholesky factorization itself is not cheap to compute, but if we parameterize in terms of Cholesky factors, we only need to compute the Cholesky factorization of the initial parameter values.
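A quick NumPy illustration (a sketch with arbitrary numbers, independent of TFP) of why holding the factor is enough: it cheaply reproduces the matrix, and solving against the inverse reduces to two triangular solves.

```python
import numpy as np

C = np.array([[2.0, 0.3],
              [0.3, 1.0]])             # an SPD matrix
L = np.linalg.cholesky(C)              # C = L L^T, lower triangular, positive diagonal
assert np.allclose(L @ L.T, C)         # recover C with one triangular product

# Solve C x = b with two triangular solves instead of ever forming C^{-1}
b = np.array([1.0, -2.0])
y = np.linalg.solve(L, b)              # forward substitution
x = np.linalg.solve(L.T, y)            # back substitution
assert np.allclose(C @ x, b)
```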
Using the Cholesky decomposition of the covariance matrix
TFP has a version of the multivariate normal distribution, MultivariateNormalTriL, that is parameterized in terms of the Cholesky factor of the covariance matrix. So if we were to parameterize in terms of the Cholesky factor of the covariance matrix, we could compute the data log likelihood efficiently. The challenge is in computing the prior log likelihood with similar efficiency.
If we had a version of the inverse Wishart distribution that worked with Cholesky factors of samples, we'd be all set. Alas, we don't. (The team would welcome code submissions, though!) As an alternative, we can use a version of the Wishart distribution that works with Cholesky factors of samples together with a chain of bijectors.
At the moment, we're missing a few stock bijectors to make things really efficient, but I want to show the process as an exercise and a useful illustration of the power of TFP's bijectors.
A Wishart distribution that operates on Cholesky factors
The Wishart distribution has a useful flag, input_output_cholesky, that specifies that the input and output matrices should be Cholesky factors. It's more efficient and numerically advantageous to work with the Cholesky factors than full matrices, which is why this is desirable. An important point about the semantics of the flag: it's only an indication that the representation of the input and output to the distribution should change - it does not indicate a full reparameterization of the distribution, which would involve a Jacobian correction to the log_prob() function. We actually want to do this full reparameterization, so we'll build our own distribution.
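The Jacobian correction mentioned here is just the change-of-variables rule for densities. A hedged scalar NumPy check (unrelated to Wishart, but the same mechanics): for Y = exp(X) with X standard normal, adding the log-det-Jacobian term to the base log density reproduces the closed-form log-normal density.

```python
import numpy as np

def normal_log_pdf(x):
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

y = 1.7
# base log density at the pulled-back point, plus log|d/dy log(y)| = -log(y)
log_p_via_jacobian = normal_log_pdf(np.log(y)) - np.log(y)
# closed-form log-normal(0, 1) log density
log_p_closed_form = -np.log(y) - 0.5 * np.log(2 * np.pi) - 0.5 * np.log(y) ** 2
assert np.isclose(log_p_via_jacobian, log_p_closed_form)
```

The input_output_cholesky flag changes only the representation, not the density; the custom _log_prob below supplies exactly this kind of correction term.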
End of explanation
# verify that the bijector works
m = np.array([[1., 0.], [2., 8.]], dtype=np.float32)
c_inv = m.dot(m.T)
c = np.linalg.inv(c_inv)
c_chol = np.linalg.cholesky(c)
wishart_cholesky_to_iw_cholesky = tfb.CholeskyToInvCholesky()
w_fwd = wishart_cholesky_to_iw_cholesky.forward(m).numpy()
print('numpy =\n', c_chol)
print('bijector =\n', w_fwd)
Explanation: Building an inverse Wishart distribution
We have our covariance matrix $C$ decomposed into $C = L L^T$ where $L$ is lower triangular and has a positive diagonal. We want to know the probability of $L$ given that $C \sim W^{-1}(\nu, V)$ where $W^{-1}$ is the inverse Wishart distribution.
The inverse Wishart distribution has the property that if $C \sim W^{-1}(\nu, V)$, then the precision matrix $C^{-1} \sim W(\nu, V^{-1})$. So we can get the probability of $L$ via a TransformedDistribution that takes as parameters the Wishart distribution and a bijector that maps the Cholesky factor of precision matrix to a Cholesky factor of its inverse.
A straightforward (but not super efficient) way to get from the Cholesky factor of $C^{-1}$ to $L$ is to invert the Cholesky factor by back-solving, then forming the covariance matrix from these inverted factors, and then doing a Cholesky factorization.
Let the Cholesky decomposition of $C^{-1} = M M^T$. $M$ is lower triangular, so we can invert it using the MatrixInverseTriL bijector.
Forming $C$ from $M^{-1}$ is a little tricky: $C = (M M^T)^{-1} = M^{-T}M^{-1} = M^{-T} (M^{-T})^T$. $M$ is lower triangular, so $M^{-1}$ will also be lower triangular, and $M^{-T}$ will be upper triangular. The CholeskyOuterProduct() bijector only works with lower triangular matrices, so we can't use it to form $C$ from $M^{-T}$. Our workaround is a chain of bijectors that permute the rows and columns of a matrix.
Luckily this logic is encapsulated in the CholeskyToInvCholesky bijector!
Combining all the pieces
End of explanation
inverse_wishart_cholesky = tfd.TransformedDistribution(
distribution=CholeskyWishart(
df=PRIOR_DF,
scale_tril=np.linalg.cholesky(np.linalg.inv(PRIOR_SCALE))),
bijector=tfb.CholeskyToInvCholesky())
Explanation: Our final distribution
Our inverse Wishart operating on Cholesky factors is as follows:
End of explanation
# Our new prior.
PRIOR_SCALE_CHOLESKY = np.linalg.cholesky(PRIOR_SCALE)
def log_lik_prior_cholesky(precisions_cholesky):
rv_precision = CholeskyWishart(
df=PRIOR_DF,
scale_tril=PRIOR_SCALE_CHOLESKY,
validate_args=VALIDATE_ARGS,
allow_nan_stats=ALLOW_NAN_STATS)
return rv_precision.log_prob(precisions_cholesky)
# Check against the slower TF implementation and the NumPy implementation.
# Note that when comparing to NumPy, we need to add in the Jacobian correction.
precisions = [np.eye(2, dtype=np.float32),
true_precision]
precisions_cholesky = np.stack([np.linalg.cholesky(m) for m in precisions])
precisions = np.stack(precisions)
lik_tf = log_lik_prior_cholesky(precisions_cholesky).numpy()
lik_tf_slow = tfd.TransformedDistribution(
distribution=tfd.WishartTriL(
df=PRIOR_DF, scale_tril=tf.linalg.cholesky(PRIOR_SCALE)),
bijector=tfb.Invert(tfb.CholeskyOuterProduct())).log_prob(
precisions_cholesky).numpy()
corrections = tfb.Invert(tfb.CholeskyOuterProduct()).inverse_log_det_jacobian(
precisions_cholesky, event_ndims=2).numpy()
n = precisions.shape[0]
for i in range(n):
print(i)
print('numpy:', log_lik_prior_numpy(precisions[i]) + corrections[i])
print('tensorflow slow:', lik_tf_slow[i])
print('tensorflow fast:', lik_tf[i])
Explanation: We've got our inverse Wishart, but it's kind of slow because we have to do a Cholesky decomposition in the bijector. Let's return to the precision matrix parameterization and see what we can do there for optimization.
Final(!) Version: using the Cholesky decomposition of the precision matrix
An alternative approach is to work with Cholesky factors of the precision matrix. Here the prior likelihood function is easy to compute, but the data log likelihood function takes more work since TFP doesn't have a version of the multivariate normal that is parameterized by precision.
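As a hedged plain-NumPy sanity check of why the precision-Cholesky parameterization is convenient (arbitrary numbers, not TFP code): the MVN log density can be evaluated straight from the factor, with no matrix inversion and no determinant call.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(2, 2))
P = A @ A.T + np.eye(2)                    # a precision matrix
B = np.linalg.cholesky(P)                  # P = B B^T
C = np.linalg.inv(P)                       # the corresponding covariance
mu = np.array([0.5, -1.0])
x = np.array([1.0, 2.0])
d = x - mu

# Standard MVN log density written with the covariance
log_p_cov = -0.5 * d @ np.linalg.inv(C) @ d - 0.5 * np.log(np.linalg.det(2 * np.pi * C))

# Same value straight from the precision Cholesky factor:
# log det P = 2 * sum(log diag(B)), and the quadratic form is ||B^T d||^2
z = B.T @ d
log_p_prec = -0.5 * z @ z + np.sum(np.log(np.diag(B))) - 0.5 * len(x) * np.log(2 * np.pi)
assert np.isclose(log_p_cov, log_p_prec)
```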
Optimized prior log likelihood
We use the CholeskyWishart distribution we built above to construct the prior.
End of explanation
class MVNPrecisionCholesky(tfd.TransformedDistribution):
"""Multivariate normal parameterized by loc and Cholesky precision matrix."""
def __init__(self, loc, precision_cholesky, name=None):
super(MVNPrecisionCholesky, self).__init__(
distribution=tfd.Independent(
tfd.Normal(loc=tf.zeros_like(loc),
scale=tf.ones_like(loc)),
reinterpreted_batch_ndims=1),
bijector=tfb.Chain([
tfb.Shift(shift=loc),
tfb.Invert(tfb.ScaleMatvecTriL(scale_tril=precision_cholesky,
adjoint=True)),
]),
name=name)
@tf.function(autograph=False)
def log_lik_data_cholesky(precisions_cholesky, replicated_data):
n = tf.shape(precisions_cholesky)[0] # number of precision matrices
rv_data = MVNPrecisionCholesky(
loc=tf.zeros([n, 2]),
precision_cholesky=precisions_cholesky)
return tf.reduce_sum(rv_data.log_prob(replicated_data), axis=0)
# check against the numpy implementation
true_precision_cholesky = np.linalg.cholesky(true_precision)
precisions = [np.eye(2, dtype=np.float32), true_precision]
precisions_cholesky = np.stack([np.linalg.cholesky(m) for m in precisions])
precisions = np.stack(precisions)
n = precisions_cholesky.shape[0]
replicated_data = np.tile(np.expand_dims(my_data, axis=1), reps=[1, 2, 1])
lik_tf = log_lik_data_cholesky(precisions_cholesky, replicated_data).numpy()
for i in range(n):
print(i)
print('numpy:', log_lik_data_numpy(precisions[i], my_data))
print('tensorflow:', lik_tf[i])
Explanation: Optimized data log likelihood
We can use TFP's bijectors to build our own version of the multivariate normal. Here is the key idea:
Suppose I have a column vector $X$ whose elements are iid samples of $N(0, 1)$. We have $\text{mean}(X) = 0$ and $\text{cov}(X) = I$
Now let $Y = A X + b$. We have $\text{mean}(Y) = b$ and $\text{cov}(Y) = A A^T$
Hence we can make vectors with mean $b$ and covariance $C$ using the affine transform $Ax+b$ to vectors of iid standard Normal samples provided $A A^T = C$. The Cholesky decomposition of $C$ has the desired property. However, there are other solutions.
Let $P = C^{-1}$ and let the Cholesky decomposition of $P$ be $B$, i.e. $B B^T = P$. Now
$P^{-1} = (B B^T)^{-1} = B^{-T} B^{-1} = B^{-T} (B^{-T})^T$
So another way to get our desired mean and covariance is to use the affine transform $Y=B^{-T}X + b$.
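A quick NumPy check of that identity with arbitrary numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(2, 2))
C = A @ A.T + 2 * np.eye(2)       # an SPD covariance
P = np.linalg.inv(C)              # precision
B = np.linalg.cholesky(P)         # P = B B^T
B_inv_T = np.linalg.inv(B).T      # the scale B^{-T} used in the transform

# B^{-T} (B^{-T})^T reproduces the covariance P^{-1} = C
assert np.allclose(B_inv_T @ B_inv_T.T, C)
```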
Our approach (courtesy of this notebook):
1. Use tfd.Independent() to combine a batch of 1-D Normal random variables into a single multi-dimensional random variable. The reinterpreted_batch_ndims parameter for Independent() specifies the number of batch dimensions that should be reinterpreted as event dimensions. In our case we create a 1-D batch of length 2 that we transform into a 1-D event of length 2, so reinterpreted_batch_ndims=1.
2. Apply a bijector to add the desired covariance: tfb.Invert(tfb.ScaleMatvecTriL(scale_tril=precision_cholesky, adjoint=True)). Note that above we're multiplying our iid normal random variables by the transpose of the inverse of the Cholesky factor of the precision matrix $(B^{-T}X)$. The tfb.Invert takes care of inverting $B$, and the adjoint=True flag performs the transpose.
3. Apply a bijector to add the desired offset: tfb.Shift(shift=loc). Note that we have to do the shift as a separate step from the initial inverted affine transform because otherwise the inverted scale is applied to the shift (since the inverse of $y=Ax+b$ is $x=A^{-1}y - A^{-1}b$).
End of explanation
def get_log_lik_cholesky(data, n_chains=1):
# The data argument that is passed in will be available to the inner function
# below so it doesn't have to be passed in as a parameter.
replicated_data = np.tile(np.expand_dims(data, axis=1), reps=[1, n_chains, 1])
@tf.function(autograph=False)
def _log_lik_cholesky(precisions_cholesky):
return (log_lik_data_cholesky(precisions_cholesky, replicated_data) +
log_lik_prior_cholesky(precisions_cholesky))
return _log_lik_cholesky
Explanation: Combined log likelihood function
Now we combine our prior and data log likelihood functions in a closure.
End of explanation
unconstrained_to_precision_cholesky = tfb.Chain([
# step 2: exponentiate the diagonals
tfb.TransformDiagonal(tfb.Exp(validate_args=VALIDATE_ARGS)),
# step 1: expand the vector to a lower triangular matrix
tfb.FillTriangular(validate_args=VALIDATE_ARGS),
])
# some checks
inv = unconstrained_to_precision_cholesky.inverse(precisions_cholesky).numpy()
fwd = unconstrained_to_precision_cholesky.forward(inv).numpy()
print('precisions_cholesky:\n', precisions_cholesky)
print('\ninv:\n', inv)
print('\nfwd(inv):\n', fwd)
Explanation: Constraining bijector
Our samples are constrained to be valid Cholesky factors, which means they must be lower triangular matrices with positive diagonals. The TransformedTransitionKernel needs a bijector that maps unconstrained tensors to/from tensors with our desired constraints. We've removed the Cholesky decomposition from the bijector's inverse, which speeds things up.
End of explanation
# The number of chains is determined by the shape of the initial values.
# Here we'll generate 3 chains, so we'll need a tensor of 3 initial values.
N_CHAINS = 3
np.random.seed(123)
initial_values_cholesky = []
for i in range(N_CHAINS):
initial_values_cholesky.append(np.array(
[[0.5 + np.random.uniform(), 0.0],
[-0.5 + np.random.uniform(), 0.5 + np.random.uniform()]],
dtype=np.float32))
initial_values_cholesky = np.stack(initial_values_cholesky)
Explanation: Initial values
We generate a tensor of initial values. We're working with Cholesky factors, so we generate some Cholesky factor initial values.
End of explanation
@tf.function(autograph=False)
def sample():
tf.random.set_seed(123)
log_lik_fn = get_log_lik_cholesky(my_data)
# Tuning acceptance rates:
dtype = np.float32
num_burnin_iter = 3000
num_warmup_iter = int(0.8 * num_burnin_iter)
num_chain_iter = 2500
# Set the target average acceptance ratio for the HMC as suggested by
# Beskos et al. (2013):
# https://projecteuclid.org/download/pdfview_1/euclid.bj/1383661192
target_accept_rate = 0.651
# Initialize the HMC sampler.
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=log_lik_fn,
step_size=0.01,
num_leapfrog_steps=3)
ttk = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=hmc, bijector=unconstrained_to_precision_cholesky)
# Adapt the step size using standard adaptive MCMC procedure. See Section 4.2
# of Andrieu and Thoms (2008):
# http://www4.ncsu.edu/~rsmith/MA797V_S12/Andrieu08_AdaptiveMCMC_Tutorial.pdf
adapted_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=ttk,
num_adaptation_steps=num_warmup_iter,
target_accept_prob=target_accept_rate)
states = tfp.mcmc.sample_chain(
num_results=num_chain_iter,
num_burnin_steps=num_burnin_iter,
current_state=initial_values_cholesky,
kernel=adapted_kernel,
trace_fn=None,
parallel_iterations=1)
# transform samples back to their constrained form
samples = tf.linalg.matmul(states, states, transpose_b=True)
return samples
precision_samples = sample()
Explanation: Sampling
We sample N_CHAINS chains using the TransformedTransitionKernel.
End of explanation
r_hat = tfp.mcmc.potential_scale_reduction(precision_samples).numpy()
print(r_hat)
Explanation: Convergence check
A quick convergence check looks good:
End of explanation
# The output samples have shape [n_steps, n_chains, 2, 2]
# Flatten them to [n_steps * n_chains, 2, 2] via reshape:
precision_samples_reshaped = np.reshape(precision_samples, newshape=[-1, 2, 2])
Explanation: Comparing results to the analytic posterior
End of explanation
print('True posterior mean:\n', posterior_mean)
print('Mean of samples:\n', np.mean(precision_samples_reshaped, axis=0))
print('True posterior standard deviation:\n', posterior_sd)
print('Standard deviation of samples:\n', np.std(precision_samples_reshaped, axis=0))
Explanation: And again, the sample means and standard deviations match those of the analytic posterior.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ke Contrast
The contrast is based on the calculation of the aggregated RGB histogram of the image.
Step1: Then the width of 98% mass is calculated
Step2: Ke brightness
The mean brightness may be calculated as the mean of the value channel of the HSV representation of the image.
Step3: The value channel ranges from 0 to 255. So, to get the brightness average percentage of the image we can use.
Step4: Hue Count
Step5: Spatial distribution of edges
Step6: Edges bounding box area
Step7: Blur
Python Code:
channels = cv2.split(image)
colors = ('r', 'g', 'b')
histogram = np.zeros((256, 1), dtype=np.float32)  # accumulator for the summed per-channel histograms
for (channel, color) in zip(channels, colors):
histogram += cv2.calcHist([channel], [0], None, [256], [0, 256])
normalized_histogram = normalize(histogram, norm='l1', axis=0, copy=True, return_norm=False)
Explanation: Ke Contrast
The contrast is based on the calculation of the aggregated RGB histogram of the image.
End of explanation
def middleMassWidth(percentage, histogram):
bias = (1 - percentage) / 2  # fractional mass to skip in each tail of the histogram
accumulator = 0.0
start = 0
for index, bin_value in enumerate(histogram):
accumulator = accumulator + bin_value
if(accumulator < bias):
start = index
if(accumulator > bias + percentage):
return index - start - 1
middleMassWidth(0.98, normalized_histogram)
middleMassWidth(0.95, normalized_histogram)
Explanation: Then the width of 98% mass is calculated
End of explanation
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv_image)
print(v)
Explanation: Ke brightness
The mean brightness may be calculated as the mean of the value channel of the HSV representation of the image.
End of explanation
np.mean(v) / 256 * 100
Explanation: The value channel ranges from 0 to 255. So, to get the brightness average percentage of the image we can use.
End of explanation
hue_countable = []
for rows in hsv_image:
for cols in rows:
if cols[2]/256 > 0.15 and cols[2]/256 < 0.95 and cols[1]/256 > 0.2:
hue_countable.append(cols[0])
print(len(hue_countable))
hue_count_histogram = np.histogram(hue_countable, bins=20)
print(hue_count_histogram)
m = np.max(hue_count_histogram[0])
alpha = 0.05
N = 0
for bin in hue_count_histogram[0]:
if bin > alpha * m:
N = N + 1
print(N)
qh = 20 - N
print(qh)
Explanation: Hue Count
End of explanation
laplacian = np.array([ [0.2/1.2, 0.8/1.2, 0.2/1.2],
[0.8/1.2, -4/1.2, 0.8/1.2],
[0.2/1.2, 0.8/1.2, 0.2/1.2]])
print(laplacian)
r, g, b = cv2.split(image)
print(r)
r_laplacian = np.absolute(signal.convolve2d(r, laplacian, mode='same', boundary='fill', fillvalue=0))
g_laplacian = np.absolute(signal.convolve2d(g, laplacian, mode='same', boundary='fill', fillvalue=0))
b_laplacian = np.absolute(signal.convolve2d(b, laplacian, mode='same', boundary='fill', fillvalue=0))
mean_laplacian = np.mean([r_laplacian, g_laplacian, b_laplacian], axis=0)
mean_laplacian = mean_laplacian / mean_laplacian.max() * 255
print(np.max(mean_laplacian))
plt.imshow(mean_laplacian, cmap='gray_r')
plt.show()
resized_laplacian_image = sp.misc.imresize(mean_laplacian, [100, 100], interp='lanczos')
resized_laplacian_image = skimage.img_as_float(resized_laplacian_image)
normalize(resized_laplacian_image, norm='l1', axis=1, copy=False, return_norm=False)
resized_laplacian_image = resized_laplacian_image/100
plt.imshow(resized_laplacian_image, cmap='gray_r')
plt.show()
Explanation: Spatial distribution of edges
End of explanation
Px = np.sum(resized_laplacian_image, axis=0)  # projections of the normalized edge image computed above
Py = np.sum(resized_laplacian_image, axis=1)
print(Px)
quality = 1 - middleMassWidth(0.98, Px) * middleMassWidth(0.98, Py)/10000
print("quality: ", quality)
Explanation: Edges bounding box area
End of explanation
fft_r = np.fft.fft2(r)
fft_g = np.fft.fft2(g)
fft_b = np.fft.fft2(b)
fft = np.mean([fft_r, fft_g, fft_b], axis=0)
fshift = np.fft.fftshift(fft)
magnitude_spectrum = 20*np.log(np.abs(fshift))
plt.subplot(122), plt.imshow(magnitude_spectrum, cmap='gray')
plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
plt.show()
np.abs(magnitude_spectrum).max()
(np.sum(np.abs(fft) > 5, axis=None)) / (len(fft) * len(fft[0]))
Explanation: Blur
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
'lp' (Line Profile) Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Dataset Parameters
Line profiles have one more dimension than LC and RV datasets, which have times as their independent variable. For that reason, the parameters in the LP dataset are tagged with individual times instead of having a separate times array. This allows the flux_densities and sigmas to be per-time. Because of this, times is not a parameter, but instead must be passed when you call b.add_dataset if you want to attach actual line-profile data (flux_densities) to your dataset. At that point, in order to change the times you would need to remove and re-add the dataset. If you only want to compute synthetic line profiles, use compute_times or compute_phases instead.
Let's add a line profile dataset to the Bundle (see also the lp API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
Step3: For information on the included passband-dependent parameters (not mentioned below), see the section on the lc dataset.
times
Step4: compute_times / compute_phases
See the Compute Times & Phases tutorial.
Step5: wavelengths
Step6: components
Line profiles will be computed for each component in which the wavelengths are provided. If we wanted to expose the line profile for the binary as a whole, we'd set the wavelengths for wavelengths@binary. If instead we wanted to expose per-star line profiles, we could set the wavelengths for both wavelengths@primary and wavelengths@secondary.
If you're passing wavelengths to the b.add_dataset call, it will default to filling the wavelengths at the system-level. To override this, pass components=['primary', 'secondary'], as well. For example
Step7: sigmas
Note that, like flux_densities, sigmas parameters are exposed per-time, according to the value of times passed to add_dataset.
Step8: profile_func
Step9: profile_rest
Step10: profile_sv
Step11: Synthetics
Step12: The model for a line profile dataset will expose flux-densities at each time and for each component where the corresponding wavelengths Parameter was not empty. Here since we used the default and exposed line-profiles for the entire system, we have a single entry per-time.
Step13: Plotting
By default, LP datasets plot as 'flux_densities' vs 'wavelengths' for a single time.
Step14: Mesh Fields
Let's add a single mesh and see which columns from the line profile dataset are available to expose as a column in the mesh.
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: 'lp' (Line Profile) Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.add_dataset('lp', times=[0,1,2], wavelengths=phoebe.linspace(549, 551, 101))
print(b.get_dataset(kind='lp', check_visible=False))
Explanation: Dataset Parameters
Line profiles have one more dimension than LC and RV datasets, which have times as their independent variable. For that reason, the parameters in the LP dataset are tagged with individual times instead of having a separate times array. This allows the flux_densities and sigmas to be per-time. Because of this, times is not a parameter, but instead must be passed when you call b.add_dataset if you want to attach actual line-profile data (flux_densities) to your dataset. At that point, in order to change the times you would need to remove and re-add the dataset. If you only want to compute synthetic line profiles, use compute_times or compute_phases instead.
Let's add a line profile dataset to the Bundle (see also the lp API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
End of explanation
print(b.get_dataset(kind='lp').times)
Explanation: For information on the included passband-dependent parameters (not mentioned below), see the section on the lc dataset.
times
End of explanation
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='phases_t0'))
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
print(b.filter(qualifier='wavelengths'))
print(b.get_parameter(qualifier='wavelengths', component='binary'))
Explanation: wavelengths
End of explanation
print(b.filter(qualifier='flux_densities'))
print(b.get_parameter(qualifier='flux_densities',
component='binary',
time=0.0))
Explanation: components
Line profiles will be computed for each component in which the wavelengths are provided. If we wanted to expose the line profile for the binary as a whole, we'd set the wavelengths for wavelengths@binary. If instead we wanted to expose per-star line profiles, we could set the wavelengths for both wavelengths@primary and wavelengths@secondary.
If you're passing wavelengths to the b.add_dataset call, it will default to filling the wavelengths at the system-level. To override this, pass components=['primary', 'secondary'], as well. For example: b.add_dataset('lp', wavelengths=np.linspace(549,551,101), components=['primary', 'secondary']).
flux_densities
Note that observation flux_densities parameters are exposed per-time, according to the value of times passed to add_dataset.
flux_densities parameters will be exposed in the model based on compute_times/compute_phases if not empty, otherwise according to times. For more information, see the Compute Times & Phases tutorial.
End of explanation
print(b.filter(qualifier='sigmas'))
print(b.get_parameter(qualifier='sigmas',
component='binary',
time=0))
Explanation: sigmas
Note that, like flux_densities, sigmas parameters are exposed per-time, according to the value of times passed to add_dataset.
End of explanation
print(b.get_parameter(qualifier='profile_func'))
Explanation: profile_func
End of explanation
print(b.get_parameter(qualifier='profile_rest'))
Explanation: profile_rest
End of explanation
print(b.get_parameter(qualifier='profile_sv'))
Explanation: profile_sv
End of explanation
b.run_compute(irrad_method='none')
print(b.filter(context='model').twigs)
Explanation: Synthetics
End of explanation
print(b.filter(qualifier='flux_densities', context='model'))
print(b.get_parameter(qualifier='flux_densities', context='model', time=0))
Explanation: The model for a line profile dataset will expose flux-densities at each time and for each component where the corresponding wavelengths Parameter was not empty. Here since we used the default and exposed line-profiles for the entire system, we have a single entry per-time.
End of explanation
afig, mplfig = b.filter(dataset='lp01', context='model', time=0).plot(show=True)
Explanation: Plotting
By default, LP datasets plot as 'flux_densities' vs 'wavelengths' for a single time.
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
Explanation: Mesh Fields
Let's add a single mesh and see which columns from the line profile dataset are available to expose as a column in the mesh.
End of explanation |
8,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SYDE 556/750
Step1: So everything works fine if we drive each population with the same $x$, let's switch to $\hat{x}$ in the middle
Step2: Looks pretty much the same! (just delayed, maybe)
So now we've passed one value to the other, but it's implausible
The brain doesn't decode and then re-encode
Can we skip those steps? Or combine them?
A shortcut
Let's write down what we've done
Step3: In fact, instead of computing $\omega_{ij}$ at all, it is (usually) more efficient to just do the encoding/decoding
Saves a lot of memory space, since you don't have to store a giant weight matrix
Also, you have NxM multiplies for weights, but only do ~N+M multiplies for encode/decode
Step4: This means we get the exact same effect as having a weight matrix $\omega_{ij}$ if we just take the decoded value from one population and feed that into the next population using the normal encoding method
These are numerically identical processes, since $\omega_{ij} = \alpha_j e_j \cdot d_i$
Spiking neurons
The same approach works for spiking neurons
Do exactly the same as before
The $a_i(t)$ values are spikes, and we convolve with $h(t)$
Other transformations
So this lets us take an $x$ value and feed it into another population
Passing information from one group of neurons to the next
We call this a 'Communication Channel' as you're just sending the information
What about transforming that information in some way?
Instead of $y=x$, can we do $y=f(x)$?
Let's try $y=2x$ to start
We already have a decoder for $\hat{x}$, so how do we get a decoder for $\hat{2x}$?
Two ways
Either use $2x$ when computing $\Upsilon$
Or just multiply your 'representational' decoder by $2$
Step5: What about a nonlinear function?
$y = x^2$
Step6: When you set the connection 'function' in Nengo, it solves the same decoding equation as before, but for a function.
In equations
Step7: We call standard $d_i$ "representational decoders"
We call $d_i^{f(x)}$ "transformational decoders" (or "decoders for $f(x)$")
Adding
What if we want to combine the inputs from two different populations?
Linear case
Step8: Vectors
Almost nothing changes
Step9: Summary
We can use the decoders to find connection weights between groups of neurons
$\omega_{ij}=\alpha_j e_j \cdot d_i$
Using connection weights is numerically identical to decoding and then encoding again
Which can be much more efficient to implement
Feeding two inputs into the same population results in addition
These shortcuts rely on two assumptions
Step10: Multiplication is quite powerful, and has lots of uses
Gating of signals
Attention effects
Binding
Statistical inference
Here's a simple gating example using the same network | Python Code:
%pylab inline
import numpy as np
import nengo
from nengo.dists import Uniform
from nengo.processes import WhiteSignal
from nengo.solvers import LstsqL2
T = 1.0
max_freq = 10
model = nengo.Network('Communication Channel', seed=3)
with model:
stim = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.5))
ensA = nengo.Ensemble(20, dimensions=1, neuron_type=nengo.LIFRate())
ensB = nengo.Ensemble(19, dimensions=1, neuron_type=nengo.LIFRate())
temp = nengo.Ensemble(1, dimensions=1, neuron_type=nengo.LIFRate())
nengo.Connection(stim, ensA)
stim_B = nengo.Connection(stim, ensB)
connectionA = nengo.Connection(ensA, temp) #This is just to generate the decoders
connectionB = nengo.Connection(ensB, temp) #This is just to generate the decoders
stim_p = nengo.Probe(stim)
a_rates = nengo.Probe(ensA.neurons, 'rates')
b_rates = nengo.Probe(ensB.neurons, 'rates')
sim = nengo.Simulator(model, seed=3)
sim.run(T)
x = sim.data[stim_p]
d_i = sim.data[connectionA].weights.T
A_i = sim.data[a_rates]
d_j = sim.data[connectionB].weights.T
A_j = sim.data[b_rates]
#Add noise
A_i = A_i + np.random.normal(scale=0.2*np.max(A_i), size=A_i.shape)
A_j = A_j + np.random.normal(scale=0.2*np.max(A_j), size=A_j.shape)
xhat_i = np.dot(A_i, d_i)
yhat_j = np.dot(A_j, d_j)
t = sim.trange()
figure(figsize=(8,4))
subplot(1,2,1)
plot(t, xhat_i, 'g', label='$\hat{x}$')
plot(t, x, 'b', label='$x$')
legend()
xlabel('time (seconds)')
ylabel('value')
title('first population')
subplot(1,2,2)
plot(t, yhat_j, 'g', label='$\hat{y}$')
plot(t, x, 'b', label='$y$')
legend()
xlabel('time (seconds)')
ylabel('value')
title('second population');
Explanation: SYDE 556/750: Simulating Neurobiological Systems
Accompanying Readings: Chapter 6
Transformation
The story so far:
The activity of groups of neurons can represent variables $x$
$x$ can be an arbitrary-dimensional vector
Each neuron has a preferred vector $e$
Current going into each neuron is $J = \alpha e \cdot x + J^{bias}$
We can interpret neural activity via $\hat{x}=\sum a_i d_i$
For spiking neurons, we filter the spikes first: $\hat{x}=\sum a_i(t)*h(t) d_i$
To compute $d$, generate some $x$ values and find the optimal $d$ (assuming some amount of noise)
So far we've just talked about neural activity in a single population
What about connections between neurons?
<img src="files/lecture4/communication1.png">
Connecting neurons
Up till now, we've always had the current going into a neuron be something we computed from $x$
$J = \alpha e \cdot x + J^{bias}$
This will continue to be how we handle inputs
Sensory neurons, for example
Or whatever's coming from the rest of the brain that we're not modelling (yet)
But what about other groups of neurons?
How do they end up getting the amount of input current that we're injecting with $J = \alpha e \cdot x + J^{bias}$ ?
Where does that current come from?
Inputs from neurons connected to this one
Through weighted synaptic connections
Let's think about neurons in a simple case
A communication channel
Let's say we have two groups of neurons
One group represents $x$
One group represents $y$
Can we pass the value from one group of neurons to the other?
Without worrying about biological plausibility to start, we can formulate this in two steps
Drive the first population $a$ with the input, $x$, then decoded it to give $\hat{x}$
Now use $y=\hat{x}$ to drive the 2nd population $b$, and then decode that
Let's start by first constructing the two populations
Stimulate them both directly and decode to compare
End of explanation
#Have to run previous cells first
model.connections.remove(stim_B)
del stim_B
def xhat_fcn(t):
    idx = min(int(t/sim.dt), len(xhat_i)-1)  # clamp to the last recorded sample
    return xhat_i[idx]
with model:
xhat = nengo.Node(xhat_fcn)
nengo.Connection(xhat, ensB)
xhat_p = nengo.Probe(xhat)
sim = nengo.Simulator(model, seed=3)
sim.run(T)
d_j = sim.data[connectionB].weights.T
A_j = sim.data[b_rates]
A_j = A_j + numpy.random.normal(scale=0.2*numpy.max(A_j), size=A_j.shape)
yhat_j = numpy.dot(A_j, d_j)
t = sim.trange()
figure(figsize=(8,4))
subplot(1,2,1)
plot(t, xhat_i, 'g', label='$\hat{x}$')
plot(t, x, 'b', label='$x$')
legend()
xlabel('time (seconds)')
ylabel('value')
title('$\hat{x}$')
ylim(-1,1)
subplot(1,2,2)
plot(t, yhat_j, 'g', label='$\hat{y}$')
plot(t, x, 'b', label='$x$')
legend()
xlabel('time (seconds)')
ylabel('value')
title('$\hat{y}$ (second population)')
ylim(-1,1);
Explanation: So everything works fine if we drive each population with the same $x$, let's switch to $\hat{x}$ in the middle
End of explanation
#Have to run previous cells first
n = nengo.neurons.LIFRate()
alpha_j = sim.data[ensB].gain
bias_j = sim.data[ensB].bias
encoders_j = sim.data[ensB].encoders.T
connection_weights = np.outer(alpha_j*encoders_j, d_i)
J_j = np.dot(connection_weights, sim.data[a_rates].T).T + bias_j
A_j = n.rates(J_j, gain=1, bias=0) #Gain and bias already in the previous line
A_j = A_j + numpy.random.normal(scale=0.2*numpy.max(A_j), size=A_j.shape)
xhat_j = numpy.dot(A_j, d_j)
figure(figsize=(8,4))
subplot(1,2,1)
plot(t, xhat_i, 'g', label='$\hat{x}$')
plot(t, x, 'b', label='$x$')
legend()
xlabel('Time (s)')
ylabel('Value')
title('Decode from A')
ylim(-1,1)
subplot(1,2,2)
plot(t, xhat_j, 'g', label='$\hat{y}$')
plot(t, x, 'b', label='$y$')
legend()
xlabel('Time (s)')
title('Decode from B');
ylim(-1,1);
Explanation: Looks pretty much the same! (just delayed, maybe)
So now we've passed one value to the other, but it's implausible
The brain doesn't decode and then re-encode
Can we skip those steps? Or combine them?
A shortcut
Let's write down what we've done:
Encode into $a$: $a_i = G_i[\alpha_i e_i x + J^{bias}_i]$
Decode from $a$: $\hat{x} = \sum_i a_i d_i$
Set $y = \hat{x}$
Encode into $b$: $b_j = G_j[\alpha_j e_j y + J^{bias}_j]$
Decode from $b$: $\hat{y} = \sum_j b_j d_j$
Now let's just do the substitutions:
I.e. substitute $y = \hat{x} = \sum_i a_i d_i$ into $b$
$b_j = G_j[\alpha_j e_j \sum_i a_i d_i + J^{bias}_j]$
$b_j = G_j[\sum_i \alpha_j e_j d_i a_i + J^{bias}_j]$
$b_j = G_j[\sum_i \omega_{ij}a_i + J^{bias}_j]$
where $\omega_{ij} = \alpha_j e_j \cdot d_i$ (an outer product)
In other words, we can get the entire weight matrix just by multiplying the decoders from the first population with the encoders from the second population
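As a sanity check on this identity, here is a small self-contained NumPy sketch (toy random values, not the actual ensembles from this notebook) showing that applying the full weight matrix gives exactly the same input currents as decoding and then re-encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 19                              # neurons in populations a and b
a = rng.random(N)                          # firing rates of population a
d = rng.standard_normal((N, 1))            # decoders for x (scalar x)
e = rng.choice([-1.0, 1.0], size=(M, 1))   # encoders of population b
alpha = rng.uniform(0.5, 2.0, M)           # gains of population b

# Full weight matrix: omega_{ij} = alpha_j e_j . d_i  (shape M x N)
omega = (alpha[:, None] * e) @ d.T

J_weights = omega @ a                      # currents via the weight matrix
xhat = d.T @ a                             # decode x from population a
J_encode = alpha * (e @ xhat).ravel()      # re-encode into population b
print(np.allclose(J_weights, J_encode))    # True
```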
End of explanation
J_j = numpy.outer(numpy.dot(A_i, d_i), alpha_j*encoders_j)+bias_j
Explanation: In fact, instead of computing $\omega_{ij}$ at all, it is (usually) more efficient to just do the encoding/decoding
Saves a lot of memory space, since you don't have to store a giant weight matrix
Also, you have NxM multiplies for weights, but only do ~N+M multiplies for encode/decode
End of explanation
import nengo
from nengo.processes import WhiteNoise
from nengo.utils.matplotlib import rasterplot
T = 1.0
max_freq = 5
model = nengo.Network()
with model:
stim = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3))
ensA = nengo.Ensemble(25, dimensions=1)
ensB = nengo.Ensemble(23, dimensions=1)
nengo.Connection(stim, ensA)
nengo.Connection(ensA, ensB, transform=2) #function=lambda x: 2*x)
stim_p = nengo.Probe(stim)
ensA_p = nengo.Probe(ensA, synapse=.01)
ensB_p = nengo.Probe(ensB, synapse=.01)
ensA_spikes_p = nengo.Probe(ensA.neurons, 'spikes')
ensB_spikes_p = nengo.Probe(ensB.neurons, 'spikes')
sim = nengo.Simulator(model, seed=4)
sim.run(T)
t = sim.trange()
figure(figsize=(8, 6))
subplot(2,1,1)
ax = gca()
plot(t, sim.data[stim_p],'b')
plot(t, sim.data[ensA_p],'g')
ylabel("Output")
xlabel("Time");
rasterplot(t, sim.data[ensA_spikes_p], ax=ax.twinx(), color='k', use_eventplot=True)
#axis('tight')
ylabel("Neuron")
subplot(2,1,2)
ax = gca()
plot(t, sim.data[stim_p],'b')
plot(t, sim.data[ensB_p],'g')
ylabel("Output")
xlabel("Time");
rasterplot(t, sim.data[ensB_spikes_p], ax=ax.twinx(), color='k', use_eventplot=True)
#axis('tight')
ylabel("Neuron");
Explanation: This means we get the exact same effect as having a weight matrix $\omega_{ij}$ if we just take the decoded value from one population and feed that into the next population using the normal encoding method
These are numerically identical processes, since $\omega_{ij} = \alpha_j e_j \cdot d_i$
Spiking neurons
The same approach works for spiking neurons
Do exactly the same as before
The $a_i(t)$ values are spikes, and we convolve with $h(t)$
Other transformations
So this lets us take an $x$ value and feed it into another population
Passing information from one group of neurons to the next
We call this a 'Communication Channel' as you're just sending the information
What about transforming that information in some way?
Instead of $y=x$, can we do $y=f(x)$?
Let's try $y=2x$ to start
We already have a decoder for $\hat{x}$, so how do we get a decoder for $\hat{2x}$?
Two ways
Either use $2x$ when computing $\Upsilon$
Or just multiply your 'representational' decoder by $2$
End of explanation
import nengo
from nengo.processes import WhiteNoise
from nengo.utils.matplotlib import rasterplot
T = 1.0
max_freq = 5
model = nengo.Network()
with model:
stim = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3))
ensA = nengo.Ensemble(25, dimensions=1)
ensB = nengo.Ensemble(23, dimensions=1)
nengo.Connection(stim, ensA)
nengo.Connection(ensA, ensB, function=lambda x: x**2)
stim_p = nengo.Probe(stim)
ensA_p = nengo.Probe(ensA, synapse=.01)
ensB_p = nengo.Probe(ensB, synapse=.01)
ensA_spikes_p = nengo.Probe(ensA.neurons, 'spikes')
ensB_spikes_p = nengo.Probe(ensB.neurons, 'spikes')
sim = nengo.Simulator(model, seed=4)
sim.run(T)
t = sim.trange()
figure(figsize=(8, 6))
subplot(2,1,1)
ax = gca()
plot(t, sim.data[stim_p],'b')
plot(t, sim.data[ensA_p],'g')
ylabel("Output")
xlabel("Time");
rasterplot(t, sim.data[ensA_spikes_p], ax=ax.twinx(), color='k', use_eventplot=True)
#axis('tight')
ylabel("Neuron")
subplot(2,1,2)
ax = gca()
plot(t, sim.data[stim_p],'b')
plot(t, sim.data[ensB_p],'g')
ylabel("Output")
xlabel("Time");
rasterplot(t, sim.data[ensB_spikes_p], ax=ax.twinx(), color='k', use_eventplot=True)
#axis('tight')
ylabel("Neuron");
Explanation: What about a nonlinear function?
$y = x^2$
End of explanation
f_x = my_function(x)  # evaluate the target function f(x) at the sample points
gamma = np.dot(A.T, A)
upsilon_f = np.dot(A.T, f_x)
d_f = np.dot(np.linalg.pinv(gamma), upsilon_f)
fhat = np.dot(A, d_f)  # decoded estimate of f(x)
Explanation: When you set the connection 'function' in Nengo, it solves the same decoding equation as before, but for a function.
In equations:
$ d^{f(x)} = \Gamma^{-1} \Upsilon^{f(x)} $
$ \Upsilon_i^{f(x)} = \sum_x a_i f(x) \;dx$
$ \Gamma_{ij} = \sum_x a_i a_j \;dx $
$ \hat{f}(x) =\sum_i a_i d_i^{f(x)}$
In code:
End of explanation
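The same recipe in a self-contained, runnable form, using rectified-linear "tuning curves" as a stand-in for the neuron model and a small regularizer added to the Gram matrix (both are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
x = np.linspace(-1, 1, 200)
e = rng.choice([-1.0, 1.0], N)
gain = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(-0.5, 1.0, N)
A = np.maximum(0, np.outer(x, e) * gain + bias)   # toy tuning curves a_i(x)

f_x = x**2                                        # target function f(x)
gamma = A.T @ A + 1e-3 * np.eye(N)                # Gram matrix (regularized)
upsilon_f = A.T @ f_x
d_f = np.linalg.solve(gamma, upsilon_f)           # transformational decoders

fhat = A @ d_f
print(np.max(np.abs(fhat - f_x)))                 # small approximation error
```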
import nengo
from nengo.processes import WhiteNoise
from nengo.utils.matplotlib import rasterplot
T = 1.0
max_freq = 5
model = nengo.Network()
with model:
stimA = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=3))
stimB = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=5))
ensA = nengo.Ensemble(25, dimensions=1)
ensB = nengo.Ensemble(23, dimensions=1)
ensC = nengo.Ensemble(24, dimensions=1)
nengo.Connection(stimA, ensA)
nengo.Connection(stimB, ensB)
nengo.Connection(ensA, ensC)
nengo.Connection(ensB, ensC)
stimA_p = nengo.Probe(stimA)
stimB_p = nengo.Probe(stimB)
ensA_p = nengo.Probe(ensA, synapse=.01)
ensB_p = nengo.Probe(ensB, synapse=.01)
ensC_p = nengo.Probe(ensC, synapse=.01)
sim = nengo.Simulator(model)
sim.run(T)
figure(figsize=(8,6))
plot(t, sim.data[stimA_p],'g', label="$x$")
plot(t, sim.data[ensA_p],'b', label="$\hat{x}$")
plot(t, sim.data[stimB_p],'c', label="$y$")
plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$")
plot(t, sim.data[stimB_p]+sim.data[stimA_p],'r', label="$x+y$")
plot(t, sim.data[ensC_p],'k--', label="$\hat{z}$")
legend(loc='best')
ylabel("Output")
xlabel("Time");
Explanation: We call standard $d_i$ "representational decoders"
We call $d_i^{f(x)}$ "transformational decoders" (or "decoders for $f(x)$")
Adding
What if we want to combine the inputs from two different populations?
Linear case: $z=x+y$
<img src="files/lecture4/adding1.png">
We want the total current going into a $z$ neuron to be $J=\alpha e \cdot (x+y) + J^{bias}$
How can we achieve this?
Again, substitute into the equation, where $z = x+y \approx \hat{x}+\hat{y}$
$J_k=\alpha_k e \cdot (\hat{x}+\hat{y}) + J_k^{bias}$
$\hat{x} = \sum_i a_i d_i$
$\hat{y} = \sum_j a_j d_j$
$J_k=\alpha_k e_k \cdot (\sum_i a_i d_i+\sum_j a_j d_j) + J_k^{bias}$
$J_k=\sum_i(\alpha_k e_k \cdot d_i a_i) + \sum_j(\alpha_k e_k \cdot d_j a_j) + J_k^{bias}$
$J_k=\sum_i(\omega_{ik} a_i) + \sum_j(\omega_{jk} a_j) + J_k^{bias}$
$\omega_{ik}=\alpha_k e_k \cdot d_i$ and $\omega_{jk}=\alpha_k e_k \cdot d_j$
Putting multiple inputs into a neuron automatically gives us addition!
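A quick numerical check of this derivation, with toy random populations rather than the ensembles above:

```python
import numpy as np

rng = np.random.default_rng(2)
Ni, Nj, Nk = 25, 23, 24
a_i = rng.random(Ni); d_i = rng.standard_normal((Ni, 1))  # population a: rates, decoders
a_j = rng.random(Nj); d_j = rng.standard_normal((Nj, 1))  # population b: rates, decoders
e_k = rng.choice([-1.0, 1.0], size=(Nk, 1))               # encoders of the output population
alpha_k = rng.uniform(0.5, 2.0, Nk)
J_bias = rng.uniform(0.0, 1.0, Nk)

w_ik = (alpha_k[:, None] * e_k) @ d_i.T
w_jk = (alpha_k[:, None] * e_k) @ d_j.T

# Summing the two weighted inputs ...
J_direct = w_ik @ a_i + w_jk @ a_j + J_bias
# ... equals encoding xhat + yhat
xhat = (d_i.T @ a_i).item()
yhat = (d_j.T @ a_j).item()
J_sum = alpha_k * e_k.ravel() * (xhat + yhat) + J_bias
print(np.allclose(J_direct, J_sum))  # True
```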
End of explanation
import nengo
from nengo.processes import WhiteNoise
from nengo.utils.matplotlib import rasterplot
T = 1.0
max_freq = 5
model = nengo.Network()
with model:
stimA = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=3), size_out=2)
stimB = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=5), size_out=2)
#stimA = nengo.Node([.3,.5])
#stimB = nengo.Node([.3,-.5])
ensA = nengo.Ensemble(55, dimensions=2)
ensB = nengo.Ensemble(53, dimensions=2)
ensC = nengo.Ensemble(54, dimensions=2)
nengo.Connection(stimA, ensA)
nengo.Connection(stimB, ensB)
nengo.Connection(ensA, ensC)
nengo.Connection(ensB, ensC)
stimA_p = nengo.Probe(stimA)
stimB_p = nengo.Probe(stimB)
ensA_p = nengo.Probe(ensA, synapse=.02)
ensB_p = nengo.Probe(ensB, synapse=.02)
ensC_p = nengo.Probe(ensC, synapse=.02)
sim = nengo.Simulator(model)
sim.run(T)
figure()
plot(sim.data[ensA_p][:,0], sim.data[ensA_p][:,1], 'g', label="$\hat{x}$")
plot(sim.data[ensB_p][:,0], sim.data[ensB_p][:,1], 'm', label="$\hat{y}$")
plot(sim.data[ensC_p][:,0], sim.data[ensC_p][:,1], 'k', label="$\hat{z}$")
legend(loc='best')
figure()
plot(t, sim.data[stimA_p],'g', label="$x$")
plot(t, sim.data[ensA_p],'b', label="$\hat{x}$")
legend(loc='best')
ylabel("Output")
xlabel("Time")
figure()
plot(t, sim.data[stimB_p],'c', label="$y$")
plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$")
legend(loc='best')
ylabel("Output")
xlabel("Time")
figure()
plot(t, sim.data[stimB_p]+sim.data[stimA_p],'r', label="$x+y$")
plot(t, sim.data[ensC_p],'k--', label="$\hat{z}$")
legend(loc='best')
ylabel("Output")
xlabel("Time");
Explanation: Vectors
Almost nothing changes
End of explanation
import nengo
from nengo.processes import WhiteNoise
from nengo.utils.matplotlib import rasterplot
T = 1.0
max_freq = 5
model = nengo.Network()
with model:
stimA = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.5, seed=3))
stimB = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.5, seed=5))
ensA = nengo.Ensemble(55, dimensions=1)
ensB = nengo.Ensemble(53, dimensions=1)
ensC = nengo.Ensemble(200, dimensions=2)
ensD = nengo.Ensemble(54, dimensions=1)
nengo.Connection(stimA, ensA)
nengo.Connection(stimB, ensB)
nengo.Connection(ensA, ensC, transform=[[1],[0]])
nengo.Connection(ensB, ensC, transform=[[0],[1]])
nengo.Connection(ensC, ensD, function=lambda x: x[0]*x[1])
stimA_p = nengo.Probe(stimA)
stimB_p = nengo.Probe(stimB)
ensA_p = nengo.Probe(ensA, synapse=.01)
ensB_p = nengo.Probe(ensB, synapse=.01)
ensC_p = nengo.Probe(ensC, synapse=.01)
ensD_p = nengo.Probe(ensD, synapse=.01)
sim = nengo.Simulator(model)
sim.run(T)
figure()
plot(t, sim.data[stimA_p],'g', label="$x$")
plot(t, sim.data[ensA_p],'b', label="$\hat{x}$")
legend(loc='best')
ylabel("Output")
xlabel("Time")
figure()
plot(t, sim.data[stimB_p],'c', label="$y$")
plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$")
legend(loc='best')
ylabel("Output")
xlabel("Time")
figure()
plot(t, sim.data[stimB_p]*sim.data[stimA_p],'r', label="$x * y$")
plot(t, sim.data[ensD_p],'k--', label="$\hat{z}$")
legend(loc='best')
ylabel("Output")
xlabel("Time");
Explanation: Summary
We can use the decoders to find connection weights between groups of neurons
$\omega_{ij}=\alpha_j e_j \cdot d_i$
Using connection weights is numerically identical to decoding and then encoding again
Which can be much more efficient to implement
Feeding two inputs into the same population results in addition
These shortcuts rely on two assumptions:
The input to a neuron is a weighted sum of its synaptic inputs
$J_j = \sum_i a_i \omega_{ij}$
The mapping from $x$ to $J$ is of the form $J_j=\alpha_j e_j \cdot x + J_j^{bias}$
If these assumptions don't hold, you have to do some other form of optimization
If you already have a decoder for $x$, you can quickly find a decoder for any linear function of $x$
If the decoder for $x$ is $d$, the decoder for $Mx$ is $Md$
For some other function of $x$, substitute in that function $f(x)$ when finding $\Upsilon$
Taking all of this into account, the most general form of the weights is:
$\omega_{ij} = e_j M d_i^{f(x)}$
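The linear shortcut in this summary (if $d$ decodes $x$, then $Md$ decodes $Mx$) is easy to verify numerically; here is a toy check using a 2D rotation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 30, 2
a = rng.random(N)                      # firing rates
d = rng.standard_normal((N, D))        # decoders for a 2D x
theta = 0.3
M = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # 2D rotation matrix

xhat = a @ d           # decode x
d_Mx = d @ M.T         # decoder for the linear function M x
print(np.allclose(a @ d_Mx, M @ xhat))  # True
```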
A recipe
To find weights for any linear transformation
Define the representation (encoders/decoders) for all variables involved in the operation.
Write the transformation in terms of these variables.
Write the transformation using the decoding expressions for all variables except the output variable.
Substitute this expression into the encoding expression of the output variable.
Volunteer for:
$z = x+y$
$z = Rx$
R is a 2D rotation matrix: $$\left[ \begin{array}{cc}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array} \right]$$
$z = x \times y$
General nonlinear functions
What if we want to combine to compute a nonlinear function of two inputs?
E.g., $z=x \times y$
We know how to compute nonlinear functions of a vector space
E.g., $x^2$
If $x$ is a vector, you get a bunch of cross terms
E.g. if $x$ is 2D this gives $x_1^2 + 2 x_1 x_2 + x_2^2$
This means that if you combine two inputs into a 2D space, you can get out their product
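This can be illustrated without spiking neurons at all: random rectified-linear responses over the combined 2D space are enough to decode the product with regularized least squares (a toy sketch with made-up gains and encoders):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
E = rng.standard_normal((N, 2))
E /= np.linalg.norm(E, axis=1, keepdims=True)      # unit-length 2D encoders
gain = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(-1.0, 1.0, N)

X = rng.uniform(-1, 1, size=(1000, 2))             # samples of the 2D state (x, y)
A = np.maximum(0, (X @ E.T) * gain + bias)         # toy tuning curves

target = X[:, 0] * X[:, 1]                         # f(x, y) = x * y
d_f = np.linalg.solve(A.T @ A + 1e-3 * np.eye(N), A.T @ target)

rmse = np.sqrt(np.mean((A @ d_f - target) ** 2))
print(rmse)                                        # small decoding error
```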
End of explanation
with model:
stimB.output = lambda t: 0 if (t<.5) else .5
sim = nengo.Simulator(model)
sim.run(T)
figure()
plot(t, sim.data[stimA_p],'g', label="$x$")
plot(t, sim.data[ensA_p],'b', label="$\hat{x}$")
legend(loc='best')
ylabel("Output")
xlabel("Time")
figure()
plot(t, sim.data[stimB_p],'c', label="$y$")
plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$")
legend(loc='best')
ylabel("Output")
xlabel("Time")
figure()
plot(t, sim.data[stimB_p]*sim.data[stimA_p],'r', label="$x * y$")
plot(t, sim.data[ensD_p],'k--', label="$\hat{z}$")
legend(loc='best')
ylabel("Output")
xlabel("Time");
Explanation: Multiplication is quite powerful, and has lots of uses
Gating of signals
Attention effects
Binding
Statistical inference
Here's a simple gating example using the same network
End of explanation |
8,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using machine learning techniques
Copy of <a href="https
Step1: Load data
Let us load training data and store features, labels and other data into numpy arrays.
Step2: Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that
Step3: Feature imputation
Let us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future.
Step4: Feature augmentation
Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somehow correlated. To possibly exploit this fact, let us perform feature augmentation by
Step5: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that
Step6: Classification parameters optimization
Let us perform the following steps for each set of parameters
Step7: Predict labels on test data
Let us now apply the selected classification technique to test data. | Python Code:
# Import
from __future__ import division
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
inline_rc = dict(mpl.rcParams)
from classification_utilities import make_facies_log_plot
import pandas as pd
import numpy as np
#import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from scipy.signal import medfilt
import sys, scipy, sklearn
print('Python: ' + sys.version.split('\n')[0])
print(' ' + sys.version.split('\n')[1])
print('Pandas: ' + pd.__version__)
print('Numpy: ' + np.__version__)
print('Scipy: ' + scipy.__version__)
print('Sklearn: ' + sklearn.__version__)
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
Explanation: Facies classification using machine learning techniques
Copy of <a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", augmented, by Alan Richardson (Ausar Geophysical), with an ML estimator for PE in the wells where it is missing (rather than just using the mean).
In the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.
The proposed algorithm is based on the use of random forests combined in one-vs-one multiclass strategy. In particular, we would like to study the effect of:
- Robust feature normalization.
- Feature imputation for missing feature values.
- Well-based cross-validation routines.
- Feature augmentation strategies.
Script initialization
Let us import the used packages and define some parameters (e.g., colors, labels, etc.).
End of explanation
# Load data from file
data = pd.read_csv('../facies_vectors.csv')
# Store features and labels
X = data[feature_names].values # features
y = data['Facies'].values # labels
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
Explanation: Load data
Let us load training data and store features, labels and other data into numpy arrays.
End of explanation
# Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
# Remove NaN
nan_idx = np.any(np.isnan(X), axis=1)
X = X[np.logical_not(nan_idx), :]
y = y[np.logical_not(nan_idx)]
# Merge features and labels into a single DataFrame
features = pd.DataFrame(X, columns=feature_names)
labels = pd.DataFrame(y, columns=['Facies'])
for f_idx, facies in enumerate(facies_names):
labels[labels[:] == f_idx] = facies
data = pd.concat((labels, features), axis=1)
# Plot features statistics
facies_color_map = {}
for ind, label in enumerate(facies_names):
facies_color_map[label] = facies_colors[ind]
sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))
# Feature distribution
# plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
# mpl.rcParams.update(inline_rc)
# Facies per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_names)
ax.set_title(w)
# Features per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))
plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist)))
ax.set_xticklabels(feature_names)
ax.set_yticks([0, 1])
ax.set_yticklabels(['miss', 'hit'])
ax.set_title(w)
Explanation: Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samples from all classes.
- PE measurements are available only for some wells.
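On the outlier point, this is one reason a robust scaler (median/IQR) is preferable to plain standardization for these logs; a small illustration with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, RobustScaler

x = np.append(np.arange(1.0, 10.0), 100.0).reshape(-1, 1)  # bulk 1..9 plus one outlier
std = StandardScaler().fit_transform(x).ravel()
rob = RobustScaler(quantile_range=(25.0, 75.0)).fit_transform(x).ravel()
print(std[:3])  # the outlier squashes the bulk toward the mean
print(rob[:3])  # the bulk keeps a sensible O(1) spread
```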
End of explanation
def make_pe(X, seed):
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)
DataImpAll = data[feature_names].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))
return X
Explanation: Feature imputation
Let us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future.
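The pattern used here, training a regressor on rows where PE is present and predicting it where missing, in a minimal self-contained form (toy values; the column names are just illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.DataFrame({'GR':    [60.0, 70.0, 80.0, 90.0, 100.0],
                   'PHIND': [10.0, 12.0, 14.0, 16.0, 18.0],
                   'PE':    [3.0, np.nan, 4.0, np.nan, 5.0]})
known = df.dropna()                                  # rows where PE is observed
reg = RandomForestRegressor(n_estimators=20, random_state=0)
reg.fit(known[['GR', 'PHIND']], known['PE'])
mask = df['PE'].isnull()
df.loc[mask, 'PE'] = reg.predict(df.loc[mask, ['GR', 'PHIND']])
print(df['PE'].values)                               # no NaNs remain
```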
End of explanation
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, seed=None, pe=True, N_neig=1):
seed = seed or None
if pe:
X = make_pe(X, seed)
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
# Augment features
X_aug, padded_rows = augment_features(X, well, depth)
Explanation: Feature augmentation
Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somehow correlated. To possibly exploit this fact, let us perform feature augmentation by:
- Aggregating features at neighboring depths.
- Computing feature spatial gradient.
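The windowing half of this idea in isolation, as a compact toy reimplementation (the functions in the code cell above additionally handle per-well boundaries):

```python
import numpy as np

def window_neighbors(X, n=1):
    """Concatenate each row with its n neighbors above and below (zero-padded)."""
    pad = np.zeros((n, X.shape[1]))
    Xp = np.vstack([pad, X, pad])
    return np.hstack([Xp[i:i + len(X)] for i in range(2 * n + 1)])

X = np.array([[1.0], [2.0], [3.0]])
print(window_neighbors(X, n=1))
# each row becomes [previous, current, next]:
# [[0. 1. 2.]
#  [1. 2. 3.]
#  [2. 3. 0.]]
```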
End of explanation
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
# Print splits
for s, split in enumerate(split_list):
print('Split %d' % s)
print(' training: %s' % (data['Well Name'][split['train']].unique()))
print(' validation: %s' % (data['Well Name'][split['val']].unique()))
Explanation: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:
- Features from each well belong to either the training or the validation set.
- Training and validation sets contain at least one sample for each class.
End of explanation
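The hold-out logic relies on scikit-learn's LeavePGroupsOut, which enumerates every possible pair of groups (here, wells) as a candidate validation set. A toy illustration with four hypothetical wells:

```python
import numpy as np
from sklearn.model_selection import LeavePGroupsOut

# Toy illustration of LeavePGroupsOut(2): every pair of "wells" (groups)
# is used as the validation set exactly once; training uses the rest.
X_toy = np.arange(8).reshape(-1, 1)
groups = np.array(['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'])
lpgo = LeavePGroupsOut(2)
n_splits = lpgo.get_n_splits(X_toy, groups=groups)
print(n_splits)  # C(4, 2) = 6 candidate splits
val_wells = [sorted(set(groups[val])) for _, val in lpgo.split(X_toy, groups=groups)]
print(val_wells)  # each validation fold holds out a distinct pair of wells
```

In the real notebook the class-balance check then discards any candidate split whose training or validation part is missing a facies class.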
# Parameters search grid (uncomment parameters for full grid search... may take a lot of time)
N_grid = [100] # [50, 100, 150]
M_grid = [10] # [5, 10, 15]
S_grid = [25] # [10, 25, 50, 75]
L_grid = [5] # [2, 3, 4, 5, 10, 25]
param_grid = []
for N in N_grid:
for M in M_grid:
for S in S_grid:
for L in L_grid:
param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v, clf):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
# For each set of parameters
# score_param = []
# for param in param_grid:
# # For each data split
# score_split = []
# for split in split_list:
# # Remove padded rows
# split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# # Select training and validation data from current split
# X_tr = X_aug[split_train_no_pad, :]
# X_v = X_aug[split['val'], :]
# y_tr = y[split_train_no_pad]
# y_v = y[split['val']]
# # Select well labels for validation data
# well_v = well[split['val']]
# # Train and test
# y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, param)
# # Score
# score = f1_score(y_v, y_v_hat, average='micro')
# score_split.append(score)
# # Average score for this param
# score_param.append(np.mean(score_split))
# print('F1 score = %.3f %s' % (score_param[-1], param))
# # Best set of parameters
# best_idx = np.argmax(score_param)
# param_best = param_grid[best_idx]
# score_best = score_param[best_idx]
# print('\nBest F1 score = %.3f %s' % (score_best, param_best))
Explanation: Classification parameters optimization
Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize features using a robust scaler.
- Train the classifier on training data.
- Test the trained classifier on validation data.
- Repeat for all splits and average the F1 scores.
At the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data.
End of explanation
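The normalization step in train_and_test uses RobustScaler, which centers each feature on its median and divides by the interquartile range, so a single extreme log reading barely shifts the statistics. A minimal sketch with toy values:

```python
import numpy as np
from sklearn import preprocessing

# RobustScaler: (x - median) / IQR, fitted column-wise on training data only.
X_tr = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # one extreme outlier
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
print(scaler.center_)  # median = 3.0 (the outlier barely matters)
print(scaler.scale_)   # IQR = 4.0 - 2.0 = 2.0
z = scaler.transform(np.array([[5.0]]))
print(z)               # (5 - 3) / 2 = 1.0
```

Note that the scaler is fitted on the training split only and then applied to validation data, which avoids leaking validation statistics into training.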
param_best = {'S': 25, 'M': 10, 'L': 5, 'N': 100}
# Load data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
y_pred = []
print('o' * 100)
for seed in range(100):
np.random.seed(seed)
# Make training data.
X_train, padded_rows = augment_features(X, well, depth, seed=seed)
y_train = y
X_train = np.delete(X_train, padded_rows, axis=0)
y_train = np.delete(y_train, padded_rows, axis=0)
param = param_best
clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'], criterion='entropy',
max_features=param['M'], min_samples_split=param['S'], min_samples_leaf=param['L'],
class_weight='balanced', random_state=seed), n_jobs=-1)
# Make blind data.
X_test, _ = augment_features(X_ts, well_ts, depth_ts, seed=seed, pe=False)
# Train and test.
y_ts_hat = train_and_test(X_train, y_train, X_test, well_ts, clf)
# Collect result.
y_pred.append(y_ts_hat)
print('.', end='')
np.save('100_realizations.npy', y_pred)
Explanation: Predict labels on test data
Let us now apply the selected classification technique to test data.
End of explanation |
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
Step1: Setup Environment
Install Dependencies
Step2: Import Dependencies
Step3: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
Step4: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph
Step5: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing) It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows
Step6: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
Step7: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete
Step8: 3. Plot Metrics
1. Loss (or Mean Squared Error)
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form
Step9: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs
Step10: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers
Step11: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier
Step12: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle
Step13: 2. Train the Model
We'll now train and save the new model.
Step14: 3. Plot Metrics
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ)
Step15: Great results! From these graphs, we can see several exciting things
Step16: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.
However, an important part of machine learning is knowing when to stop. This model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of a model is called quantization. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
In the following cell, we'll convert the model twice
Step17: 2. Compare Model Performance
To prove these models are accurate even after conversion and quantization, we'll compare their predictions and loss on our test dataset.
Helper functions
We define the predict (for predictions) and evaluate (for loss) functions for TFLite models. Note
Step18: 1. Predictions
Step19: 2. Loss (MSE/Mean Squared Error)
Step20: 3. Size
Step21: Summary
We can see from the predictions (graph) and loss (table) that the original TF model, the TFLite model, and the quantized TFLite model are all close enough to be indistinguishable - even though they differ in size (table). This implies that the quantized (smallest) model is ready to use!
Note
Step22: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model | Python Code:
# Define paths to model files
import os
MODELS_DIR = 'models/'
if not os.path.exists(MODELS_DIR):
os.mkdir(MODELS_DIR)
MODEL_TF = MODELS_DIR + 'model'
MODEL_NO_QUANT_TFLITE = MODELS_DIR + 'model_no_quant.tflite'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
Explanation: Train a Simple TensorFlow Lite for Microcontrollers model
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Configure Defaults
End of explanation
! pip install tensorflow==2.4.0
Explanation: Setup Environment
Install Dependencies
End of explanation
# TensorFlow is an open source machine learning library
import tensorflow as tf
# Keras is TensorFlow's high-level API for deep learning
from tensorflow import keras
# Numpy is a math library
import numpy as np
# Pandas is a data manipulation library
import pandas as pd
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# Math is Python's math library
import math
# Set seed for experiment reproducibility
seed = 1
np.random.seed(seed)
tf.random.set_seed(seed)
Explanation: Import Dependencies
End of explanation
# Number of sample datapoints
SAMPLES = 1000
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(
low=0, high=2*math.pi, size=SAMPLES).astype(np.float32)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values).astype(np.float32)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
Explanation: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
End of explanation
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
Explanation: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph:
End of explanation
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
Explanation: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows:
1. Training: 60%
2. Validation: 20%
3. Testing: 20%
The following code will split our data and then plots each set as a different color:
End of explanation
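The two indices handed to np.split mark the cut points, yielding three consecutive chunks. A tiny standalone example of the same pattern:

```python
import numpy as np

# np.split with two cut indices produces three consecutive chunks:
# [0:6], [6:8], [8:10], the same 60/20/20 pattern used above.
values = np.arange(10)
train, test, validate = np.split(values, [6, 8])
print(train)     # [0 1 2 3 4 5]
print(test)      # [6 7]
print(validate)  # [8 9]
```

Because the x values were shuffled before splitting, taking consecutive chunks like this still yields three random, non-overlapping partitions.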
# We'll use Keras to create a simple model architecture
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 8 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(keras.layers.Dense(8, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model_1.compile(optimizer='adam', loss='mse', metrics=['mae'])
Explanation: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained:
End of explanation
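This two-layer model is tiny; since a Dense layer stores one weight per input-neuron pair plus one bias per neuron, its parameter count can be verified by hand:

```python
# Hand-counting trainable parameters for the 1 -> 8 -> 1 model:
# each Dense layer has (inputs * neurons) weights plus one bias per neuron.
layer1 = 1 * 8 + 8   # 16 parameters
layer2 = 8 * 1 + 1   # 9 parameters
total = layer1 + layer2
print(total)  # 25 trainable parameters
```

This matches what model_1.summary() would report as total trainable parameters.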
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
Explanation: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete:
End of explanation
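As a quick sanity check on the batch arithmetic: with 600 training samples (60% of the 1,000 generated) and a batch size of 64, each epoch performs ceil(600 / 64) weight updates, which is why the Keras progress bar reports 10 steps per epoch:

```python
import math

# Steps (weight updates) per epoch = ceil(training samples / batch size).
num_train = 600   # 60% of the 1,000 generated samples
batch_size = 64
steps_per_epoch = math.ceil(num_train / batch_size)
print(steps_per_epoch)  # 10
```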
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: 3. Plot Metrics
1. Loss (or Mean Squared Error)
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form:
End of explanation
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs:
End of explanation
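As a concrete reminder of what this curve measures, mean squared error is simply the average of the squared prediction errors:

```python
import numpy as np

# Mean squared error: the average of the squared prediction errors.
y_true = np.array([0.0, 1.0, 2.0])
y_hat = np.array([0.0, 1.5, 1.0])
mse = np.mean((y_true - y_hat) ** 2)
print(mse)  # (0.0 + 0.25 + 1.0) / 3 = 0.41666...
```

Squaring penalizes large errors disproportionately, which is one reason MSE is a common choice of training loss for regression.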
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
Explanation: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:
End of explanation
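For reference, mean absolute error averages the raw error magnitudes, so it stays in the same units as the quantity being predicted:

```python
import numpy as np

# Mean absolute error: the average magnitude of the prediction errors,
# expressed in the same units as y (here, the sine value).
y_true = np.array([0.0, 1.0, 2.0])
y_hat = np.array([0.0, 1.5, 1.0])
mae = np.mean(np.abs(y_true - y_hat))
print(mae)  # (0.0 + 0.5 + 1.0) / 3 = 0.5
```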
# Calculate and print the loss on our test dataset
test_loss, test_mae = model_1.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model_1.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predictions')
plt.legend()
plt.show()
Explanation: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier:
End of explanation
model = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model.add(keras.layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second and third layer will help the network learn more complex representations
model.add(keras.layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model.compile(optimizer='adam', loss="mse", metrics=["mae"])
Explanation: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle:
End of explanation
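Even this larger model remains microcontroller-sized; counting parameters by hand (one weight per input-neuron pair plus one bias per neuron):

```python
# Hand-counting trainable parameters for the 1 -> 16 -> 16 -> 1 model:
# each Dense layer has (inputs * neurons) weights plus one bias per neuron.
layer1 = 1 * 16 + 16    # 32
layer2 = 16 * 16 + 16   # 272
layer3 = 16 * 1 + 1     # 17
total = layer1 + layer2 + layer3
print(total)  # 321 parameters, roughly 1.3 kB of weights as 32-bit floats
```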
# Train the model
history = model.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
# Save the model to disk
model.save(MODEL_TF)
Explanation: 2. Train the Model
We'll now train and save the new model.
End of explanation
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(train_loss) + 1)
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history.history['mae']
val_mae = history.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.tight_layout()
Explanation: 3. Plot Metrics
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ):
Epoch 500/500
10/10 [==============================] - 0s 10ms/step - loss: 0.0121 - mae: 0.0882 - val_loss: 0.0115 - val_mae: 0.0865
You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.01, and validation MAE has dropped from 0.33 to 0.08.
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
End of explanation
# Calculate and print the loss on our test dataset
test_loss, test_mae = model.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predicted')
plt.legend()
plt.show()
Explanation: Great results! From these graphs, we can see several exciting things:
The overall loss and MAE are much better than our previous network
Metrics are better for validation than training, which means the network is not overfitting
The reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
End of explanation
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_TF)
model_no_quant_tflite = converter.convert()
# Save the model to disk
open(MODEL_NO_QUANT_TFLITE, "wb").write(model_no_quant_tflite)
# Convert the model to the TensorFlow Lite format with quantization
def representative_dataset():
for i in range(500):
yield([x_train[i].reshape(1, 1)])
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
open(MODEL_TFLITE, "wb").write(model_tflite)
Explanation: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.
However, an important part of machine learning is knowing when to stop. This model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of a model is called quantization. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
In the following cell, we'll convert the model twice: once with quantization, once without.
End of explanation
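To make the quantization idea concrete, here is a toy NumPy illustration of the affine int8 scheme (real ≈ scale × (int − zero_point)). The weight values below are made up; the actual converter derives the scale and zero-point from the representative dataset above:

```python
import numpy as np

weights = np.array([-0.8, -0.1, 0.0, 0.4, 0.9], dtype=np.float32)

# Map the float range onto the 256 levels of int8
scale = float(weights.max() - weights.min()) / 255.0
zero_point = round(-128 - float(weights.min()) / scale)

q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
dequantized = (q.astype(np.float32) - zero_point) * scale

# Rounding error is bounded by roughly half a quantization step
print(np.abs(weights - dequantized).max())
```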
def predict_tflite(tflite_model, x_test):
# Prepare the test data
x_test_ = x_test.copy()
x_test_ = x_test_.reshape((x_test.size, 1))
x_test_ = x_test_.astype(np.float32)
# Initialize the TFLite interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# If required, quantize the input layer (from float to integer)
input_scale, input_zero_point = input_details["quantization"]
if (input_scale, input_zero_point) != (0.0, 0):
x_test_ = x_test_ / input_scale + input_zero_point
x_test_ = x_test_.astype(input_details["dtype"])
# Invoke the interpreter
y_pred = np.empty(x_test_.size, dtype=output_details["dtype"])
for i in range(len(x_test_)):
interpreter.set_tensor(input_details["index"], [x_test_[i]])
interpreter.invoke()
y_pred[i] = interpreter.get_tensor(output_details["index"])[0]
  # If required, dequantize the output layer (from integer to float)
output_scale, output_zero_point = output_details["quantization"]
if (output_scale, output_zero_point) != (0.0, 0):
y_pred = y_pred.astype(np.float32)
y_pred = (y_pred - output_zero_point) * output_scale
return y_pred
def evaluate_tflite(tflite_model, x_test, y_true):
global model
y_pred = predict_tflite(tflite_model, x_test)
loss_function = tf.keras.losses.get(model.loss)
loss = loss_function(y_true, y_pred).numpy()
return loss
Explanation: 2. Compare Model Performance
To prove these models are accurate even after conversion and quantization, we'll compare their predictions and loss on our test dataset.
Helper functions
We define the predict (for predictions) and evaluate (for loss) functions for TFLite models. Note: These are already included in a TF model, but not in a TFLite model.
End of explanation
# Calculate predictions
y_test_pred_tf = model.predict(x_test)
y_test_pred_no_quant_tflite = predict_tflite(model_no_quant_tflite, x_test)
y_test_pred_tflite = predict_tflite(model_tflite, x_test)
# Compare predictions
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual values')
plt.plot(x_test, y_test_pred_tf, 'ro', label='TF predictions')
plt.plot(x_test, y_test_pred_no_quant_tflite, 'bx', label='TFLite predictions')
plt.plot(x_test, y_test_pred_tflite, 'gx', label='TFLite quantized predictions')
plt.legend()
plt.show()
Explanation: 1. Predictions
End of explanation
# Calculate loss
loss_tf, _ = model.evaluate(x_test, y_test, verbose=0)
loss_no_quant_tflite = evaluate_tflite(model_no_quant_tflite, x_test, y_test)
loss_tflite = evaluate_tflite(model_tflite, x_test, y_test)
# Compare loss
df = pd.DataFrame.from_records(
[["TensorFlow", loss_tf],
["TensorFlow Lite", loss_no_quant_tflite],
["TensorFlow Lite Quantized", loss_tflite]],
columns = ["Model", "Loss/MSE"], index="Model").round(4)
df
Explanation: 2. Loss (MSE/Mean Squared Error)
End of explanation
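For reference, the "Loss/MSE" column in that table is nothing exotic — just the mean of the squared residuals, which you can compute by hand (toy numbers here, not the model's actual outputs):

```python
import numpy as np

y_true = np.array([0.0, 1.0, 2.0])
y_pred = np.array([0.1, 0.9, 2.2])

mse = np.mean((y_true - y_pred) ** 2)  # mean of squared residuals
print(f"{mse:.4f}")  # -> 0.0200
```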
# Calculate size
size_tf = os.path.getsize(MODEL_TF)
size_no_quant_tflite = os.path.getsize(MODEL_NO_QUANT_TFLITE)
size_tflite = os.path.getsize(MODEL_TFLITE)
# Compare size
pd.DataFrame.from_records(
[["TensorFlow", f"{size_tf} bytes", ""],
["TensorFlow Lite", f"{size_no_quant_tflite} bytes ", f"(reduced by {size_tf - size_no_quant_tflite} bytes)"],
["TensorFlow Lite Quantized", f"{size_tflite} bytes", f"(reduced by {size_no_quant_tflite - size_tflite} bytes)"]],
columns = ["Model", "Size", ""], index="Model")
Explanation: 3. Size
End of explanation
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file, i.e, a TensorFlow Lite for Microcontrollers model
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
Explanation: Summary
We can see from the predictions (graph) and loss (table) that the original TF model, the TFLite model, and the quantized TFLite model are all close enough to be indistinguishable - even though they differ in size (table). This implies that the quantized (smallest) model is ready to use!
Note: The quantized (integer) TFLite model is just 300 bytes smaller than the original (float) TFLite model - a tiny reduction in size! This is because the model is already so small that quantization has little effect. Complex models with more weights can have up to a 4x reduction in size!
Generate a TensorFlow Lite for Microcontrollers Model
Convert the TensorFlow Lite quantized model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.
End of explanation
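To see roughly what `xxd -i` produces, here is a pure-Python approximation (the real tool additionally wraps long lines and derives the variable name from the file path — which is exactly why the `sed` rename above is needed):

```python
def to_c_array(data: bytes, var_name: str = "g_model") -> str:
    # Emit a C byte array plus an unsigned int holding its length,
    # roughly mimicking the output format of `xxd -i`.
    hex_bytes = ", ".join(f"0x{b:02x}" for b in data)
    return (f"unsigned char {var_name}[] = {{{hex_bytes}}};\n"
            f"unsigned int {var_name}_len = {len(data)};")

print(to_c_array(b"\x1c\x00\x00\x00TFL3"))
```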
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
Explanation: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model: If you have not modified this notebook, you can follow the instructions as is, to deploy the model. Refer to the hello_world/train/models directory to access the models generated in this notebook.
New Model: If you have generated a new model, then update the values assigned to the variables defined in hello_world/model.cc with values displayed after running the following cell.
End of explanation |
8,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 9
Step1: Anytime you see a statement that starts with import, you'll recognize that the programmer is pulling in some sort of external functionality not previously available to Python by default. In this case, the numpy package provides a wide range of tools for numerical and scientific applications.
That's just one of countless examples...an infinite number that continues to nonetheless increase daily.
It's important to distinguish the hierarchy of packages that exist in the Python ecosystem.
At the first level, there are functions that are available to you by default.
Python has a bunch of functionality that comes by default--no import required. Remember writing functions to compute the maximum and minimum of a list? Turns out, those already exist by default (sorry everyone)
Step2: At the second level, there is functionality that comes with Python, but which must still be import-ed.
Quite a bit of other functionality--still built-in to the default Python environment!--requires explicit import statements to unlock. Here are just a couple of examples
Step3: Absolutely any Python installation will come with these. However, you still have to import them to access them in your program.
If you are so inclined, you can see the full Python default module index here
Step4: Dot-notation works by
specifying package_name (in this case, random)
followed by a dot
Step5: We can tweak it
Step6: You can put whatever you want after the as, and anytime you call methods from that module, you'll use the name you gave it.
Which brings us to our third and final level of the hierarchy
Step7: Now, imagine using this in a data science context. Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
We'll demonstrate this experimentally later, but suffice to say Python lists embody the drawbacks of using an interpreted language such as Python
Step8: Now just call the array method using our list from before!
Step9: The variable arr is a NumPy array version of the previous list-of-lists!
To reference an element in the array, just use the same notation we did for lists
Step10: You can also separate dimensions by commas, if you prefer
Step11: Either notation is fine. Just remember, with indexing matrices
Step12: 2
Step13: 5
Step14: This takes two values
Step15: Sure, it works. But you might have a nagging feeling in the back of your head that there has to be an easier way...
With lists, unfortunately, there isn't one. However, with NumPy arrays, there is! And it's exactly as intuitive as you'd imagine!
Step16: NumPy knows how to perform element-wise computations across an entire NumPy array. Whether you want to add a certain quantity to every element, subtract corresponding elements of two NumPy arrays, or square every element as we just did, it allows you to do these operations on all elements at once, without writing an explicit loop!
Here's another example
Step17: Yeah, it's pretty complicated. The sad part, though, is that MOST of that complication comes from the loops you have to write!
So... let's see if vectorized computation can help!
Step18: No loops needed, far fewer lines of code, and a simple intuitive operation.
Operations involving arrays on both sides of the sign will also work (though the two arrays need to be the same length).
For example, adding two vectors together
Step19: Works exactly as you'd expect, but no [explicit] loop needed.
This becomes particularly compelling with matrix multiplication. Say you have two matrices, $A$ and $B$
Step20: If you recall from algebra, matrix multiplication $A \times B$ involves multipliying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator
Step21: In almost every case, vectorized operations are far more efficient than loops written in Python to do the same thing.
But don't take my word for it--let's test it!
Step22: We've got our functions. Now, using a handle timer tool, we can run them both on some sample data and see how fast they go!
Step23: It took about 4.23 milliseconds (that's $4.23 * 10^{-3}$ seconds) to perform 1 matrix-matrix multiplication. Certainly not objectively slow! But let's see how the NumPy version does... | Python Code:
import numpy
Explanation: Lecture 9: Vectorized Programming
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
We've covered loops and lists, and how to use them to perform some basic arithmetic calculations. In this lecture, we'll see how we can use an external library to make these computations much easier and much faster.
Spoiler alert: if you've programmed in Matlab before, a lot of this will be familiar.
Understand how to use import to add functionality beyond base Python
Compare and contrast NumPy arrays to built-in Python lists
Use NumPy arrays in place of explicit loops for basic arithmetic operations
Part 1: Importing modules
With all the data structures we've discussed so far--lists, sets, tuples, dictionaries, comprehensions, generators--it's hard to believe there's anything else. But oh man, is there a big huge world of Python extensions out there.
These extensions are known as modules. You've seen at least one in play in your assignments so far:
End of explanation
x = [3, 7, 2, 9, 4]
print("Maximum: ", max(x))
print("Minimum: ", min(x))
Explanation: Anytime you see a statement that starts with import, you'll recognize that the programmer is pulling in some sort of external functionality not previously available to Python by default. In this case, the numpy package provides a wide range of tools for numerical and scientific applications.
That's just one of countless examples...an infinite number that continues to nonetheless increase daily.
It's important to distinguish the hierarchy of packages that exist in the Python ecosystem.
At the first level, there are functions that are available to you by default.
Python has a bunch of functionality that comes by default--no import required. Remember writing functions to compute the maximum and minimum of a list? Turns out, those already exist by default (sorry everyone):
End of explanation
import random # For generating random numbers.
import os # For interacting with the filesystem of your computer.
import re # For regular expressions. Unrelated: https://xkcd.com/1171/
import datetime # Helps immensely with determining the date and formatting it.
import math # Gives some basic math functions: trig, factorial, exponential, logarithms, etc.
import xml # Abandon all hope, ye who enter.
Explanation: At the second level, there is functionality that comes with Python, but which must still be import-ed.
Quite a bit of other functionality--still built-in to the default Python environment!--requires explicit import statements to unlock. Here are just a couple of examples:
End of explanation
import random
random.randint(0, 1)
Explanation: Absolutely any Python installation will come with these. However, you still have to import them to access them in your program.
If you are so inclined, you can see the full Python default module index here: https://docs.python.org/3/py-modindex.html.
It's quite a bit! These are all available to you when using Python (don't even bother trying to memorize these; in looking over this list just now, I'm amazed at how many I didn't even know existed).
Once you've imported the module, you can access all its functions via the "dot-notation":
End of explanation
import random
random.randint(0, 1)
Explanation: Dot-notation works by
specifying package_name (in this case, random)
followed by a dot: .
followed by function_name (in this case, randint, which returns a random integer between two numbers)
As a small tidbit--you can treat imported packages almost like variables, in that you can name them whatever you like, using the as keyword in the import statement.
Instead of
End of explanation
import random as r
r.randint(0, 1)
Explanation: We can tweak it
End of explanation
matrix = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9] ]
print(matrix)
Explanation: You can put whatever you want after the as, and anytime you call methods from that module, you'll use the name you gave it.
Which brings us to our third and final level of the hierarchy:
At the third level are packages you have to install manually, and then can be import-ed
There's an ever-expanding universe of 3rd-party modules you can install and use. Anaconda comes prepackaged with quite a few (see the column "In Installer"), and the option to manually install quite a few more.
Again, don't worry about trying to learn all these. There are simply too many. You'll come across packages as you need them. For now, we're going to focus on one specific package that is central to most modern data science:
NumPy, short for Numerical Python.
Part 2: Introduction to NumPy
NumPy, or Numerical Python, is an incredible library of basic functions and data structures that provide a robust foundation for computational scientists.
Put another way: if you're using Python and doing any kind of math, you'll probably use NumPy.
At this point, NumPy is so deeply embedded in so many other 3rd-party modules related to scientific computing that even if you're not making explicit use of it, at least one of the other modules you're using probably is.
NumPy's core: the ndarray
NumPy, or Numerical Python, is an incredible library of basic functions and data structures that provide a robust foundation for computational scientists.
For those of you who attempted the bonus question on A2 dealing with the list-of-lists matrix, here's a recap of what that would look like:
End of explanation
import numpy
Explanation: Now, imagine using this in a data science context. Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
We'll demonstrate this experimentally later, but suffice to say Python lists embody the drawbacks of using an interpreted language such as Python: they're easy to use, but oh so slow.
By contrast, in NumPy, we have the ndarray structure (short for "n-dimensional array") that is a highly optimized version of Python lists, perfect for fast and efficient computations. To make use of NumPy arrays, import NumPy (it's installed by default in Anaconda, and on JupyterHub):
End of explanation
arr = numpy.array(matrix)
print(arr)
Explanation: Now just call the array method using our list from before!
End of explanation
arr[0]
arr[2][2]
Explanation: The variable arr is a NumPy array version of the previous list-of-lists!
To reference an element in the array, just use the same notation we did for lists:
End of explanation
arr[2, 2]
Explanation: You can also separate dimensions by commas, if you prefer:
End of explanation
a = numpy.array([45, 2, 59, -2, 70, 3, 6, 790])
print("Minimum:", numpy.min(a))
print("Cosine of 0th element: {:.2f}".format(numpy.cos(a[0])))
Explanation: Either notation is fine. Just remember, with indexing matrices: the first index is the row, the second index is the column.
NumPy's submodules
NumPy has an impressive array of utility modules that come along with it, optimized to use its ndarray data structure. I highly encourage you to use them, even if you're not using NumPy arrays.
1: Basic mathematical routines
All the core functions you could want; for example, all the built-in Python math routines (trig, logs, exponents, etc) all have NumPy versions. (numpy.sin, numpy.cos, numpy.log, numpy.exp, numpy.max, numpy.min)
End of explanation
print(numpy.random.randint(10)) # Random integer between 0 and 10
print(numpy.random.randint(10)) # Another one!
print(numpy.random.randint(10)) # Yet another one!
Explanation: 2: Fourier transforms
If you do any signal processing using Fourier transforms (which we might, later!), NumPy has an entire sub-module full of tools for this type of analysis in numpy.fft
(these are beyond the scope of 1360, but if you do any kind of image analysis or analysis of time series data, you'll more than likely make use of FFTs)
3: Linear algebra
We'll definitely be using this submodule later in the course. This is most of your vector and matrix linear algebra operations, from vector norms (numpy.linalg.norm) to singular value decomposition (numpy.linalg.svd) to matrix determinants (numpy.linalg.det).
4: Random numbers
NumPy has a phenomenal random number library in numpy.random. In addition to generating uniform random numbers in a certain range, you can also sample from any known parametric distribution.
End of explanation
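The linear algebra sub-module gets used later in the course, so here's a quick taste of the three routines named above (norm, SVD, determinant):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.linalg.norm([3.0, 4.0]))   # vector magnitude -> 5.0
print(np.linalg.det(M))             # determinant (analytically -2)
U, s, Vt = np.linalg.svd(M)         # singular value decomposition
print(s)                            # singular values, largest first
```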
a = numpy.array([1, 1])
numpy.testing.assert_allclose(a, a)
Explanation: 5: Testing
You've probably noticed in your assignments a bizarre NumPy function in the autograder cells: testing.assert_allclose.
End of explanation
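As a quick illustrative sketch of that "close enough" behavior — the default relative tolerance is 1e-7, and you can tighten it:

```python
import numpy as np

expected = np.array([1.0, 2.0, 3.0])
computed = expected * (1 + 1e-9)   # off by one part in a billion

# Passes silently: well within the default relative tolerance (rtol=1e-7)
np.testing.assert_allclose(computed, expected)

# Tighten the tolerance and the same comparison now raises AssertionError:
try:
    np.testing.assert_allclose(computed, expected, rtol=1e-12)
except AssertionError:
    print("Not close enough at rtol=1e-12!")
```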
# Define our list:
our_list = [5, 10, 15, 20, 25]
# Write a loop to square each element:
for i in range(len(our_list)):
our_list[i] = our_list[i] ** 2 # Set each element to be its own square.
print(our_list)
Explanation: This takes two values: an expected answer, and a computed answer. Put another way, it takes what I thought the correct answer was, and compares it to the answer provided by whatever function you write.
Provided the two values are "close enough", it silently permits Python to keep running.
If, however, the two values differ considerably, it actually crashes the program with an AssertionError (which you've probably seen!).
You can actually tweak how sensitive this definition of "close enough" is. It's a wonderful tool for testing your code!
Part 3: Vectorized Arithmetic
"Vectorized arithmetic" refers to how NumPy allows you to efficiently perform arithmetic operations on entire NumPy arrays at once, as you would with "regular" Python variables.
For example: let's say I want to square every element in a list of numbers. We've actually done this before:
End of explanation
# Define our list:
our_list = [5, 10, 15, 20, 25]
# Convert it to a NumPy array (this is IMPORTANT)
our_list = numpy.array(our_list)
our_list = our_list ** 2 # Yep, we just squared the WHOLE ARRAY. And it works how you'd expect!
print(our_list)
Explanation: Sure, it works. But you might have a nagging feeling in the back of your head that there has to be an easier way...
With lists, unfortunately, there isn't one. However, with NumPy arrays, there is! And it's exactly as intuitive as you'd imagine!
End of explanation
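Squaring isn't special — any arithmetic operator broadcasts across the array the same way, with no explicit loop (a quick sketch):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])

print(a + 5)   # add a scalar to every element: [6 7 8 9]
print(b - a)   # element-wise subtraction:      [ 9 18 27 36]
print(a * b)   # element-wise product:          [ 10  40  90 160]
```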
vector = [4.0, 15.0, 6.0, 2.0]
# To normalize this to unit length, we need to divide each element by the vector's magnitude.
# To learn its magnitude, we need to loop through the whole vector.
# So. We need two loops!
magnitude = 0.0
for element in vector:
magnitude += element ** 2
magnitude = (magnitude ** 0.5) # square root
print("Original magnitude:", magnitude)
# Now that we have the magnitude, we need to loop through the list AGAIN,
# dividing each element of the list by the magnitude.
new_magnitude = 0.0
for index in range(len(vector)):
element = vector[index]
vector[index] = element / magnitude
new_magnitude += vector[index] ** 2
new_magnitude = (new_magnitude ** 0.5)
print("Normalized magnitude:", new_magnitude)
Explanation: NumPy knows how to perform element-wise computations across an entire NumPy array. Whether you want to add a certain quantity to every element, subtract corresponding elements of two NumPy arrays, or square every element as we just did, it allows you to do these operations on all elements at once, without writing an explicit loop!
Here's another example: let's say you have a vector and you want to normalize it to be unit length; that involves dividing every element in the vector by a constant (the magnitude of the vector). With lists, you'd have to loop through them manually.
End of explanation
import numpy as np # This tends to be the "standard" convention when importing NumPy.
import numpy.linalg as nla
vector = [4.0, 15.0, 6.0, 2.0]
np_vector = np.array(vector) # Convert to NumPy array.
magnitude = nla.norm(np_vector) # Computing the magnitude: a friggin' ONE-LINER!
print("Original magnitude:", magnitude)
np_vector /= magnitude # Vectorized division!!! No loop needed!
new_magnitude = nla.norm(np_vector)
print("Normalized magnitude:", new_magnitude)
Explanation: Yeah, it's pretty complicated. The sad part, though, is that MOST of that complication comes from the loops you have to write!
So... let's see if vectorized computation can help!
End of explanation
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = x + y
print(z)
Explanation: No loops needed, far fewer lines of code, and a simple intuitive operation.
Operations involving arrays on both sides of the sign will also work (though the two arrays need to be the same length).
For example, adding two vectors together:
End of explanation
A = np.array([ [1, 2], [3, 4] ])
B = np.array([ [5, 6], [7, 8] ])
Explanation: Works exactly as you'd expect, but no [explicit] loop needed.
This becomes particularly compelling with matrix multiplication. Say you have two matrices, $A$ and $B$:
End of explanation
A @ B
Explanation: If you recall from algebra, matrix multiplication $A \times B$ involves multiplying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator: the @ symbol!
End of explanation
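For the record, on 2-D arrays the @ operator computes the same product as numpy.matmul (and ndarray.dot). Spelled out on the two matrices above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)             # [[19 22]
                         #  [43 50]]
print(np.array_equal(A @ B, np.matmul(A, B)))  # True
print(np.array_equal(A @ B, A.dot(B)))         # True
```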
# This function does matrix-matrix multiplication using explicit loops.
# There's nothing wrong with this! It'll just be a lot slower than NumPy...
def multiply_loops(A, B):
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):        # each row of A...
        for j in range(B.shape[1]):    # ...times each column of B
            for k in range(A.shape[1]):
                C[i, j] += A[i, k] * B[k, j]
    return C
# And now a function that uses NumPy's matrix-matrix multiplication operator.
def multiply_vector(A, B):
return A @ B
Explanation: In almost every case, vectorized operations are far more efficient than loops written in Python to do the same thing.
But don't take my word for it--let's test it!
End of explanation
# Here's our sample data: two randomly-generated, 100x100 matrices.
X = np.random.random((100, 100))
Y = np.random.random((100, 100))
# First, using the explicit loops:
%timeit multiply_loops(X, Y)
Explanation: We've got our functions. Now, using a handy timer tool, we can run them both on some sample data and see how fast they go!
End of explanation
# Now, the NumPy multiplication:
%timeit multiply_vector(X, Y)
Explanation: It took about 4.23 milliseconds (that's $4.23 * 10^{-3}$ seconds) to perform 1 matrix-matrix multiplication. Certainly not objectively slow! But let's see how the NumPy version does...
End of explanation |
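Outside of a notebook (where the %timeit magic isn't available), the standard library's timeit module gives the same kind of measurement — a sketch comparing a plain list comprehension against the vectorized equivalent:

```python
import timeit

setup = "import numpy as np; xs = list(range(10_000)); arr = np.arange(10_000)"
t_list = timeit.timeit("[x * x for x in xs]", setup=setup, number=100)
t_numpy = timeit.timeit("arr * arr", setup=setup, number=100)

print(f"list comprehension: {t_list:.4f}s   numpy: {t_numpy:.4f}s")
```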
8,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integrating arbitrary ODEs
Although REBOUND is primarily an N-body integrator, it can also integrate arbitrary ordinary differential equations (ODEs). Even better
Step1: We first set up our N-body simulation. Note that we are using the Gragg-Bulirsch-Stoer integrator (BS).
Step2: We can create an ODE structure. Note that the ODE is linked to the simulation. If you run multiple simulations in parallel, you need to create an ode structure for each of them.
Step3: Next, we setup the ODE structure with the initial conditions and the right hand side (RHS) of the harmonic oscillator
Step4: To keep track of how accurate the integration of the harmonic oscillator is, we can calculate the energy which is conserved in the physical system.
Step5: Now we can run the simulation, keeping track of a few quantities along the way.
Step6: Let's plot the relative energy error over time for both the N-body and the harmonic oscillator integration.
Step7: Let us also plot the radius of the inner planet and the position coordinate of the harmonic oscillator.
Step8: The above example is using the BS integrator for both the N-body and the harmonic oscillator integration. The BS integrator has default tolerance parameters set to $10^{-5}$. You can change the relative or absolute tolerance with to get more accurate results
Step9: Note that in this example, the harmonic oscillator has a period that is shorter than any orbital timescale. Therefore the timestep is limited by the harmonic oscillator, not the N-body integration. As a result, the N-body integration has an error much smaller than the tolerance parameters.
Let us change the simple harmonic oscillator to a forced harmonic oscillator where the forcing depends on phase of a planet.
Step10: We explicitly set needs_nbody = False during initialization. We therefore need to tell REBOUND that our ODE now needs access to the particle state during the integrations
Step11: Running the integration a bit further, now with the forced harmonic oscillator
Step12: The harmonic oscillator is now getting forced by the planet. | Python Code:
import rebound
import numpy as np
import matplotlib.pyplot as plt
Explanation: Integrating arbitrary ODEs
Although REBOUND is primarily an N-body integrator, it can also integrate arbitrary ordinary differential equations (ODEs). Even better: it can integrate arbitrary ODEs in parallel with an N-body simulation. This allows you to couple various physical effects such as spin and tides to orbital dynamics.
In this example, we are integrating a two planet system and a decoupled harmonic oscillator which is governed by the following ODE:
$$ y_0(t)'' = -\frac{k}{m} y_0(t)$$
or equivalently as a set of 2 first order differential equations
$$ \begin{pmatrix} y_0(t)\\ y_1(t)\end{pmatrix}' = \begin{pmatrix} y_1(t)\\ -\frac{k}{m} y_0(t)\end{pmatrix}
$$
End of explanation
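Before handing this ODE to REBOUND, here is a standalone NumPy sketch of the same system integrated with a classic fixed-step RK4 scheme — independent of REBOUND (whose BS integrator is adaptive and higher-order), just to make the first-order form above concrete, using the same k = 10, m = 1 and initial conditions as the cells below:

```python
import numpy as np

k, m = 10.0, 1.0

def rhs(y):
    # y = [position, velocity]  ->  y' = [velocity, -(k/m) * position]
    return np.array([y[1], -k / m * y[0]])

y = np.array([1.0, 0.0])   # start at y0 = 1 with zero velocity
t, dt = 0.0, 1e-3
while t < 1.0:             # integrate to t = 1 with RK4 steps
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

# The analytic solution is y0(t) = cos(sqrt(k/m) * t)
print(y[0], np.cos(np.sqrt(k / m) * t))
```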
sim = rebound.Simulation()
sim.add(m=1)
sim.add(a=1.2,m=1e-3,e=0.1)
sim.add(a=2.3,m=1e-3,e=0.1)
sim.integrator = "BS"
Explanation: We first set up our N-body simulation. Note that we are using the Gragg-Bulirsch-Stoer integrator (BS).
End of explanation
ode_ho = sim.create_ode(length=2, needs_nbody=False)
Explanation: We can create an ODE structure. Note that the ODE is linked to the simulation. If you run multiple simulations in parallel, you need to create an ode structure for each of them.
End of explanation
# Mass and spring constants
m = 1.
k = 10.
# Initial conditions
ode_ho.y[0] = 1.
ode_ho.y[1] = 0. # zero velocity
# RHS
def derivatives_ho(ode, yDot, y, t):
yDot[0] = y[1]
yDot[1] = -k/m*y[0]
ode_ho.derivatives = derivatives_ho
Explanation: Next, we set up the ODE structure with the initial conditions and the right hand side (RHS) of the harmonic oscillator:
End of explanation
def energy_ho(ode):
return 0.5*k*ode.y[0]**2 + 0.5*m*ode.y[1]**2
Explanation: To keep track of how accurate the integration of the harmonic oscillator is, we can calculate the energy which is conserved in the physical system.
End of explanation
times = np.linspace(0.,60.,1000)
energies_nbody = np.zeros(len(times))
energies_ho = np.zeros(len(times))
r_nbody = np.zeros(len(times))
x_ho = np.zeros(len(times))
for i, t in enumerate(times):
sim.integrate(t)
r_nbody[i] = sim.particles[1].d
x_ho[i] = ode_ho.y[0]
energies_nbody[i] = sim.calculate_energy()
energies_ho[i] = energy_ho(ode_ho)
Explanation: Now we can run the simulation, keeping track of a few quantities along the way.
End of explanation
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time")
ax.set_ylabel("relative energy error")
ax.set_yscale("log")
ax.plot(times,np.abs((energies_nbody-energies_nbody[0])/energies_nbody[0]), label="N-body")
ax.plot(times,np.abs((energies_ho-energies_ho[0])/energies_ho[0]), label="harmonic oscillator")
ax.legend()
Explanation: Let's plot the relative energy error over time for both the N-body and the harmonic oscillator integration.
End of explanation
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time")
ax.plot(times,r_nbody, label="planet")
ax.plot(times,x_ho, label="harmonic oscillator")
ax.legend()
Explanation: Let us also plot the radius of the inner planet and the position coordinate of the harmonic oscillator.
End of explanation
sim.ri_bs.eps_rel = 1e-8
sim.ri_bs.eps_abs = 1e-8
Explanation: The above example is using the BS integrator for both the N-body and the harmonic oscillator integration. The BS integrator has default tolerance parameters set to $10^{-5}$. You can change the relative or absolute tolerance to get more accurate results:
End of explanation
def derivatives_ho_forced(ode, yDot, y, t):
# Now we can access particles and their orbital parameters during sub-steps
forcing = np.sin(sim.particles[1].f)
# Note that we are using the global sim variable.
# Alternatively, one can also access the simulation via
# sim = ode.contents.r.contents
yDot[0] = y[1]
yDot[1] = -k/m*y[0] + forcing
ode_ho.derivatives = derivatives_ho_forced
Explanation: Note that in this example, the harmonic oscillator has a period that is shorter than any orbital timescale. Therefore the timestep is limited by the harmonic oscillator, not the N-body integration. As a result, the N-body integration has an error much smaller than the tolerance parameters.
Let us change the simple harmonic oscillator to a forced harmonic oscillator where the forcing depends on the phase of a planet.
End of explanation
ode_ho.needs_nbody = True
Explanation: We explicitly set needs_nbody = False during initialization. We therefore need to tell REBOUND that our ODE now needs access to the particle state during the integrations:
End of explanation
times = np.linspace(65.,120.,1000)
for i, t in enumerate(times):
sim.integrate(t)
r_nbody[i] = sim.particles[1].d
x_ho[i] = ode_ho.y[0]
energies_nbody[i] = sim.calculate_energy()
energies_ho[i] = energy_ho(ode_ho)
Explanation: Running the integration a bit further, now with the forced harmonic oscillator:
End of explanation
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time")
ax.plot(times,r_nbody, label="planet")
ax.plot(times,x_ho, label="harmonic oscillator")
ax.legend()
Explanation: The harmonic oscillator is now getting forced by the planet.
End of explanation |
8,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
Step1: Env setup
Step2: Object detection imports
Here are the imports from the object detection module.
Step3: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
Step4: Download Model
Step5: Load a (frozen) Tensorflow model into memory.
Step6: Loading label map
Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
Step7: Helper code
Step8: Detection | Python Code:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
if tf.__version__ < '1.4.0':
raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
Explanation: Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
End of explanation
# This is needed to display the images.
%matplotlib inline
Explanation: Env setup
End of explanation
from utils import label_map_util
from utils import visualization_utils as vis_util
Explanation: Object detection imports
Here are the imports from the object detection module.
End of explanation
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
Explanation: Download Model
End of explanation
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
Explanation: Loading label map
Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
End of explanation
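The category index described above is, in this version of the API, a plain dictionary keyed by class id. A hand-built stand-in (the ids and names here are illustrative, not taken from the real label map) behaves the same way:

```python
# A hand-rolled category index with the same shape as the one built above:
# each class id maps to a small dict holding the id and its display name.
category_index = {5: {'id': 5, 'name': 'airplane'},
                  1: {'id': 1, 'name': 'person'}}

def class_name(class_id):
    # Fall back to 'N/A' for ids the label map does not know about.
    return category_index.get(class_id, {}).get('name', 'N/A')

print(class_name(5))   # airplane
print(class_name(99))  # N/A
```

Any mapping with this shape would work in place of the utility-function output, as the explanation notes.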
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
Explanation: Helper code
End of explanation
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
Explanation: Detection
End of explanation |
8,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Letter Recognition - UCI</h1>
Step1: Getting the Data
This is a dataset of 16 image features for uppercase English characters.
https
Step3: Next we attach the column names.
Step4: Check for Class Imbalance
Step5: All the classes are represented in a fairly balanced manner, so looks like in this instance we don't have to address class imbalance.
Feature Correlations
Step6: The first 5 features, x_box, y_box, width, high, onpix are highly correlated with each other.
ANOVA Feature Selection
ANOVA F-test based feature selection requires all feature values to be positive.
Step7: The above condition holds in our case - none of the features have negative values.
Step8: Top 5 Features with GaussianNB Classifier
Step9: Top 10 Features with GaussianNB Classifier
Step10: Top 15 Features with GaussianNB Classifier
Step11: We get a significant (in layman terms, not in a statistical testing sense) boost in accuracy by going from top 5 to top 10 features. But the improvement in accuracy with top 15 features over top 10 features are marginal.
Pairplot | Python Code:
import pandas as pd
import numpy as np
%pylab inline
pylab.style.use('ggplot')
Explanation: <h1 align="center">Letter Recognition - UCI</h1>
End of explanation
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/letter-recognition/letter-recognition.data'
letter_df = pd.read_csv(url, header=None)
letter_df.head()
Explanation: Getting the Data
This is a dataset of 16 image features for uppercase English characters.
https://archive.ics.uci.edu/ml/datasets/Letter+Recognition
End of explanation
s = """1. lettr capital letter (26 values from A to Z)
2. x-box horizontal position of box (integer)
3. y-box vertical position of box (integer)
4. width width of box (integer)
5. high height of box (integer)
6. onpix total # on pixels (integer)
7. x-bar mean x of on pixels in box (integer)
8. y-bar mean y of on pixels in box (integer)
9. x2bar mean x variance (integer)
10. y2bar mean y variance (integer)
11. xybar mean x y correlation (integer)
12. x2ybr mean of x * x * y (integer)
13. xy2br mean of x * y * y (integer)
14. x-ege mean edge count left to right (integer)
15. xegvy correlation of x-ege with y (integer)
16. y-ege mean edge count bottom to top (integer)
17. yegvx correlation of y-ege with x (integer)"""
lines = [l.strip() for l in s.split('\n')]
feature_names = [l.split()[1] for l in lines]
feature_names = [f.replace('-', '_') for f in feature_names]
letter_df.columns = feature_names
letter_df.head()
Explanation: Next we attach the column names.
End of explanation
letter_counts = letter_df['lettr'].value_counts()
letter_counts.sort_index(ascending=False).plot(kind='barh')
Explanation: Check for Class Imbalance
End of explanation
features_df = letter_df.drop('lettr', axis=1)
letters = letter_df['lettr']
import seaborn as sns
f_corrs = features_df.corr()
fig, ax = pylab.subplots(figsize=(12, 12))
sns.heatmap(f_corrs, annot=True, ax=ax)
Explanation: All the classes are represented in a fairly balanced manner, so it looks like in this instance we don't have to address class imbalance.
Feature Correlations
End of explanation
features_df[features_df < 0].sum(axis=0)
Explanation: The first 5 features, x_box, y_box, width, high, onpix are highly correlated with each other.
ANOVA Feature Selection
ANOVA F-test based feature selection requires all feature values to be positive.
End of explanation
from sklearn.feature_selection import f_classif
t_stats, p_vals = f_classif(features_df, letters)
f_test_results = pd.DataFrame(np.column_stack([t_stats, p_vals]),
index=features_df.columns.copy(),
columns=['test_statistic', 'p_value'])
f_test_results.plot(kind='bar', subplots=True)
Explanation: The above condition holds in our case - none of the features have negative values.
End of explanation
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.naive_bayes import GaussianNB
estimator = GaussianNB()
selector = SelectKBest(f_classif, k=5)
pipeline = Pipeline([
('selector', selector),
('model', estimator)
])
cross_validator = StratifiedKFold(n_splits=10, shuffle=True)
scores = cross_val_score(pipeline, features_df, letters,
cv=cross_validator, scoring='f1_macro')
score_1 = pd.Series(scores)
score_1.plot(kind='bar')
Explanation: Top 5 Features with GaussianNB Classifier
End of explanation
estimator = GaussianNB()
selector = SelectKBest(f_classif, k=10)
pipeline = Pipeline([
('selector', selector),
('model', estimator)
])
cross_validator = StratifiedKFold(n_splits=10, shuffle=True)
scores = cross_val_score(pipeline, features_df, letters,
cv=cross_validator, scoring='f1_macro')
score_2 = pd.Series(scores)
combined_scores = pd.concat([score_1, score_2], axis=1, keys=['cv_top5', 'cv_top10'])
combined_scores.plot(kind='bar')
Explanation: Top 10 Features with GaussianNB Classifier
End of explanation
estimator = GaussianNB()
selector = SelectKBest(f_classif, k=15)
pipeline = Pipeline([
('selector', selector),
('model', estimator)
])
cross_validator = StratifiedKFold(n_splits=10, shuffle=True)
scores = cross_val_score(pipeline, features_df, letters,
cv=cross_validator, scoring='f1_macro')
score_3 = pd.Series(scores)
combined_scores2 = pd.concat([score_2, score_3], axis=1, keys=['cv_top10', 'cv_top15'])
combined_scores2.plot(kind='bar')
Explanation: Top 15 Features with GaussianNB Classifier
End of explanation
top_5_feature_names = f_test_results.nlargest(5, columns='test_statistic').index
pairplot_df = features_df.loc[:, top_5_feature_names].copy()
pairplot_df['letter'] = letters
sns.pairplot(pairplot_df, hue='letter')
from sklearn.svm import SVC
estimator = SVC(C=100.0, kernel='rbf')
selector = SelectKBest(f_classif, k=5)
pipeline = Pipeline([
('selector', selector),
('model', estimator)
])
cross_validator = StratifiedKFold(n_splits=10, shuffle=True)
scores = cross_val_score(pipeline, features_df, letters,
cv=cross_validator, scoring='f1_macro')
svm_5 = pd.Series(scores)
svm_5.plot(kind='bar', title='10 Fold CV with SVM (top 5 features)')
combined_3 = pd.concat([score_3, svm_5], axis=1, keys=['Gaussian_15', 'svm_5'])
combined_3.plot(kind='bar')
estimator = SVC(C=100.0, kernel='rbf')
selector = SelectKBest(f_classif, k=10)
pipeline = Pipeline([
('selector', selector),
('model', estimator)
])
cross_validator = StratifiedKFold(n_splits=10, shuffle=True)
scores = cross_val_score(pipeline, features_df, letters,
cv=cross_validator, scoring='f1_macro')
svm_10 = pd.Series(scores)
combined_4 = pd.concat([svm_5, svm_10], axis=1, keys=['svm_5', 'svm_10'])
combined_4.plot(kind='bar')
estimator = SVC(C=100.0, kernel='rbf')
selector = SelectKBest(f_classif, k=15)
pipeline = Pipeline([
('selector', selector),
('model', estimator)
])
cross_validator = StratifiedKFold(n_splits=10, shuffle=True)
scores = cross_val_score(pipeline, features_df, letters,
cv=cross_validator, scoring='f1_macro')
svm_15 = pd.Series(scores)
combined_5 = pd.concat([svm_10, svm_15], axis=1, keys=['svm_10', 'svm_15'])
combined_5.plot(kind='bar')
Explanation: We get a significant (in layman's terms, not in a statistical testing sense) boost in accuracy by going from top 5 to top 10 features. But the improvement in accuracy with top 15 features over top 10 features is marginal.
Pairplot
End of explanation |
8,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Module
As your code grows more and more complex, it is useful to collect all code in an external file. Here we store all the functions from our notebook in a single file grm.py. Nothing new happens, we just copy all code cells from our notebook into a new file. Below, we can have a look at an HTML version of the file, which we created using PyCharm's exporting capabilities.
Step1: As our module is already quite complex, it is better to study the structure using PyCharm which provides tools to quickly grasp the structure of a module. So, check it out.
Namespace and Scope
Now that our Python code grows more and more complex, we need to discuss the concept of a Namespace. Roughly speaking, a name in Python is a mapping to an object. In Python this describes pretty much everything
Step2: Now, the tricky part is that we have multiple independent namespaces in Python, and names can be reused for different namespaces (only the objects are unique), for example
Step3: The Scope in Python defines the “hierarchy level” in which we search namespaces for certain “name-to-object” mappings.
Step4: What rules are applied to resolve conflicting scopes?
Local can be inside a function or class method, for example.
Enclosed can be its enclosing function, e.g., if a function is wrapped inside another function.
Global refers to the uppermost level of the executing script itself, and
Built-in are special names that Python reserves for itself.
This introduction draws heavily on a couple of very useful online tutorials
Step5: Now we turn to our very own module, we just have to import it as any other library first. How does Python know where to look for our module?
Whenever the interpreter encounters an import statement, it searches for a built-in module (e.g. os, sys) of the same name. If unsuccessful, the interpreter searches in a list of directories given by the variable sys.path ...
Step6: and the current working directory. However, our module is in the modules subdirectory, so we need to add it manually to the search path.
Step7: Please see here for additional information.
Step8: Returning to grm.py
Step9: Given our work on the notebook version, we had a very clear idea about the names defined in the module. In cases where you don't
Step10: What is the deal with all the leading and trailing underscores? Let us check out the Style Guide for Python Code.
There are many ways to import a module and then work with it.
Step11: Cleanup
Step12: Additional Resources
Tutorial on Python Modules and Packages
Formatting | Python Code:
from IPython.core.display import HTML, display
display(HTML('material/images/grm.html'))
Explanation: Module
As your code grows more and more complex, it is useful to collect all code in an external file. Here we store all the functions from our notebook in a single file grm.py. Nothing new happens, we just copy all code cells from our notebook into a new file. Below, we can have a look at an HTML version of the file, which we created using PyCharm's exporting capabilities.
End of explanation
display(HTML('material/images/namespace1.html'))
Explanation: As our module is already quite complex, it is better to study the structure using PyCharm which provides tools to quickly grasp the structure of a module. So, check it out.
Namespace and Scope
Now that our Python code grows more and more complex, we need to discuss the concept of a Namespace. Roughly speaking, a name in Python is a mapping to an object. In Python this describes pretty much everything: lists, dictionaries, functions, classes, etc. Think of a namespace as a dictionary, where the dictionary keys represent the names and the dictionary values represent the objects themselves.
End of explanation
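The namespace-as-dictionary picture above can be made concrete: globals() hands back the module-level namespace as an actual mapping from names to objects.

```python
x = 42
ns = globals()        # the module-level namespace, as a dict-like mapping
print(ns['x'])        # 42 -- the name 'x' maps to the integer object 42
print('x' in ns)      # True -- the name 'x' is defined in this namespace
```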
display(HTML('material/images/namespace2.html'))
Explanation: Now, the tricky part is that we have multiple independent namespaces in Python, and names can be reused for different namespaces (only the objects are unique), for example:
End of explanation
i = 1
def foo():
i = 5
print(i, 'in foo()')
print(i, 'global')
foo()
Explanation: The Scope in Python defines the “hierarchy level” in which we search namespaces for certain “name-to-object” mappings.
End of explanation
# Unix Pattern Extensions
import glob
# Operating System Interfaces
import os
# System-specific Parameters and Functions
import sys
Explanation: What rules are applied to resolve conflicting scopes?
Local can be inside a function or class method, for example.
Enclosed can be its enclosing function, e.g., if a function is wrapped inside another function.
Global refers to the uppermost level of the executing script itself, and
Built-in are special names that Python reserves for itself.
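A minimal sketch of that local -> enclosed -> global lookup order (built-ins sit behind all three):

```python
x = 'global'

def outer():
    x = 'enclosed'
    def inner():
        # inner() defines no x of its own, so the enclosing
        # function's x is found before the global one
        return x
    return inner()

print(outer())  # enclosed
print(x)        # global -- the global x is untouched
```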
This introduction draws heavily on a couple of very useful online tutorials: Python Course, Beginners Guide to Namespaces, and Guide to Python Namespaces.
Interacting with the Module
Let us import a couple of the standard libraries to get started.
End of explanation
print('\n Search Path:')
for dir_ in sys.path:
    print(' ' + dir_)
Explanation: Now we turn to our very own module, we just have to import it as any other library first. How does Python know where to look for our module?
Whenever the interpreter encounters an import statement, it searches for a built-in module (e.g. os, sys) of the same name. If unsuccessful, the interpreter searches in a list of directories given by the variable sys.path ...
End of explanation
sys.path.insert(0, 'material/module')
Explanation: and the current working directory. However, our module is in the modules subdirectory, so we need to add it manually to the search path.
End of explanation
%ls -l
Explanation: Please see here for additional information.
End of explanation
# Import grm.py file
import grm
# Process initialization file
init_dict = grm.process('material/msc/init.ini')
# Simulate dataset
grm.simulate(init_dict)
# Estimate model
rslt = grm.estimate(init_dict)
# Inspect results
grm.inspect(rslt, init_dict)
# Output results to terminal
%cat results.grm.txt
Explanation: Returning to grm.py:
End of explanation
# The built-in function dir() returns the names
# defined in a module.
print('\n Names: \n')
for function in dir(grm):
    print(' ' + function)
Explanation: Given our work on the notebook version, we had a very clear idea about the names defined in the module. In cases where you don't:
End of explanation
# Import all public objects in the grm
# module, but keep the namespaces
# separate.
#
import grm as gr
init_dict = gr.process('material/msc/init.ini')
# Imports only the estimate() and simulate()
# functions directly into our namespace.
#
from grm import process
init_dict = process('material/msc/init.ini')
try:
data = simulate(init_dict)
except NameError:
pass
# Imports all pubilc objects directly
# into our namespace.
#
from grm import *
init_dict = process('material/msc/init.ini')
data = simulate(init_dict)
Explanation: What is the deal with all the leading and trailing underscores? Let us check out the Style Guide for Python Code.
There are many ways to import a module and then work with it.
End of explanation
# Create list of all files generated by the module
files = glob.glob('*.grm.*')
# Remove files
for file_ in files:
os.remove(file_)
Explanation: Cleanup
End of explanation
import urllib.request; from IPython.core.display import HTML
HTML(urllib.request.urlopen('http://bit.ly/1OKmNHN').read().decode('utf-8'))
Explanation: Additional Resources
Tutorial on Python Modules and Packages
Formatting
End of explanation |
8,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make the ONCdb
Here are step-by-step instructions on how to generate the ONCdb from VizieR catalogs.
Step 1
Step1: Step 2
Step2: Step 3 | Python Code:
# Initialize a database
onc = astrocat.Catalog()
# Ingest a VizieR catalog by supplying a path, catalog name, and column name of a unique identifier
onc.ingest_data(DIR_PATH+'/raw_data/viz_acs.tsv', 'ACS', 'ONCacs', count=10)
# The raw dataset is stored as an attribute
print(onc.ACS)
# Add another one! (This is a test file with a fake match of the ACS catalog)
onc.ingest_data(DIR_PATH+'/raw_data/viz_wfpc2.tsv', 'WFPC2', 'ONCpc2', count=10)
print(onc.WFPC2)
Explanation: Make the ONCdb
Here are step-by-step instructions on how to generate the ONCdb from VizieR catalogs.
Step 1: Initialize the database and ingest the raw data
End of explanation
# Now let's group the sources by some critical distance in arcseconds
# and assign IDs for our new custom database sources
onc.group_sources()
# Summary of what we've done
onc.info
# Take a look again
onc.catalog
# # Now let's correct the WFPC2 sources for some systematic offset
# onc.correct_offsets('WFPC2', truth='ACS')
# # And now the corrected data
# print('Corrected and original WFPC2 sources:')
# print(onc.catalog[onc.catalog['cat_name']=='WFPC2'][['oncID','ra_corr','dec_corr','_RAJ2000','_DEJ2000']])
onc.info
Explanation: Step 2: Cross-match the sources
End of explanation
# Generate the ONCdb
mo.generate_ONCdb(onc)
# Check that it worked
db = astrodb.Database(DIR_PATH+'/orion.sql')
db.query("SELECT * FROM browse", fmt='table')
Explanation: Step 3: Generate the SQL database
End of explanation |
8,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traverse a Square - Part 3 - Loops
Compared to the original programme, the programme that contains the variables is easier to modify. But it still contains a lot of repetition.
Let's remind ourselves of it
Step1: Let's pick that apart.
First, consider the range(0,3) statement.
The range(M, N) statement creates a so-called iterator that returns a sequence of numbers in the range M to N-1 in sequence, one number each time the statement is called.
(In fact, we could use a simpler range() statement, range(N), which by default iterates through numbers on the range 0..N-1.)
The for statement is a construct that will repeatedly 'pull out' each element from an iterator, one item at a time, setting a variable (in the above case, count) to the extracted value on each pass, and then running the code block "contained" in the loop.
Containment in the loop is indicated by indenting lines of code immediately following the for statement.
Using a for loop, do you think you could write a program that drives the robot around a square and that only requires you to state the "traverse side" and "turn" actions once?
Load in the set up requirements, and then have a go at writing the programme.
Step2: How did you get on?
You can see my attempt below - note how I set the variables outside the loop so they are only set once.
Also note how I abstracted out the number of sides into a variable and added a variable to allow me to set the speed of moving along the side.
Step3: There's Often Another Way - The while Loop
One of the joys - or maybe its frustrations?! - of writing computer programmes is that there are often multiple ways of achieving the same effect.
For example, as an alternative to using a for loop, we could use another loop construct, the while loop.
The while loop is a conditional statement that starts by checking the truth or falsity of a condition; if that condition evaluates as "true", the while loop executes the code that is contained within it, otherwise the programme continues by executing the next statement after the while block.
If the while loop executes the code contained within it, once that code has finished executing the while loop checks the state of the condition again, and the process repeats.
So what is a logical, or Boolean, condition?
Conditions and Conditional Statements
Boolean or logical conditions are logical statements that evaluate as a Boolean True or False value. Conditional statements are statements that test a logical condition and then make a decision as to what to do next based on whether the condition evaluates as True or False.
In its simplest form, we can use a while statement to create an infinite loop by explicitly setting the tested condition to be True.
```python
while True:
    pass
```
Step4: You may have noticed that the value of count, the final line of the cell, is not displayed.
This is because the programme never gets that far. When you stop the code executing, you stop it whilst it is still inside the while loop.
Check the count value to see how many times the while loop iterated round
Step5: Rather than looping an infinite number of times, we could test a condition based on how many times we have already been round the loop.
In a code cell, explore the Boolean nature of the following examples, one at a time, or use ones of your own devising
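The examples referred to are not reproduced in this extract; a few conditions of the same flavour you might try - each one evaluates to a Boolean True or False:

```python
print(1 < 2)        # True
print(2 + 2 == 5)   # False
print(3 >= 3)       # True
print('a' != 'b')   # True
```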
Step6: Using a conditional test based on testing a count variable that increments each time a while loop loops round, see if you can use a while loop to count up from 1 to 4, displaying the count each time round the loop using a print() statement.
If the while loop looks like it's stuck in an infinite loop, stop it executing using the stop button in the notebook toolbar.
Step7: How did you do?
Here's one way of doing it
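The solution code itself is not reproduced in this extract; a sketch of one way the counting loop could look:

```python
count = 1
while count <= 4:       # the condition is tested before every pass
    print(count)
    count = count + 1   # without this step the loop would never end
```

Each pass prints the current count, and the loop stops once count reaches 5 and the condition becomes False.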
Step8: Using break
If the condition tested by the while loop evaluates as False, the programme moves on to the next statement, if any, after the while block.
However, we can also break out of the while loop using another sort of conditional statement, the if statement, and invoking the break command, which moves the programme flow out of the while loop and onto the next statement, if any, after the while block.
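A minimal sketch of that if/break pattern:

```python
count = 0
while True:             # the tested condition is always True...
    count = count + 1
    if count == 4:
        break           # ...so break is the only way out of the loop
print(count)            # 4
```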
Step9: See if you can use this form of loop to get the simulated robot to traverse a square.
As you write your program, try to remember not to repeat yourself... | Python Code:
for count in range(0,3):
print(count)
print("And the final value of `count` is", count)
Explanation: Traverse a Square - Part 3 - Loops
Compared to the original programme, the programme that contains the variables is easier to modify. But it still contains a lot of repetition.
Let's remind ourselves of it:
```python
import time
side_length_time=1
turn_speed=1.8
turn_time=0.45
side 1
robot.move_forward()
time.sleep(side_length_time)
turn 1
robot.rotate_left(turn_speed)
time.sleep(turn_time)
side 2
robot.move_forward()
time.sleep(side_length_time)
turn 2
robot.rotate_left(turn_speed)
time.sleep(turn_time)
side 3
robot.move_forward()
time.sleep(side_length_time)
turn 3
robot.rotate_left(turn_speed)
time.sleep(turn_time)
side 4
robot.move_forward()
time.sleep(side_length_time)
```
One of the principles of good computer programming is often referred to using the acronym DRY, which stands for Don't Repeat Yourself.
So what bits of repetition can you see in our simple square traversing program?
I think the following block of code repeats:
```python
side
robot.move_forward()
time.sleep(side_length_time)
turn
robot.rotate_left(turn_speed)
time.sleep(turn_time)
```
So how can we get our programme to repeat that block of code the required number of times?
Introducing the for loop
In common with many programming languages, Python supports a construct known as a for loop. This provides a means to repeat a block of code a required number of times.
The loop construction can be illustrated by running the following code cell:
End of explanation
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
#Your code here
Explanation: Let's pick that apart.
First, consider the range(0,3) statement.
The range(M, N) statement creates a so-called iterator that returns a sequence of numbers in the range M to N-1 in sequence, one number each time the statement is called.
(In fact, we could use a simpler range() statement, range(N), which by default iterates through numbers on the range 0..N-1.)
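Both forms of range() can be checked directly by turning the iterator into a list:

```python
print(list(range(0, 3)))  # [0, 1, 2]
print(list(range(3)))     # [0, 1, 2] -- the start defaults to 0
```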
The for statement is a construct that will repeatedly 'pull out' each element from an iterator, one item at a time, setting a variable (in the above case, count) to the extracted value on each pass, and then running the code block "contained" in the loop.
Containment in the loop is indicated by indenting lines of code immediately following the for statement.
Using a for loop, do you think you could write a program that drives the robot around a square and that only requires you to state the "traverse side" and "turn" actions once?
Load in the set up requirements, and then have a go at writing the programme.
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
side_speed=2
side_length_time=1
turn_speed=1.8
turn_time=0.45
number_of_sides=4
for sides in range(0,number_of_sides):
#side
robot.move_forward(side_speed)
time.sleep(side_length_time)
#turn
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#We could put additional code here
#This code would execute once the for loop has completely finished
Explanation: How did you get on?
You can see my attempt below - note how I set the variables outside the loop so they are only set once.
Also note how I abstracted out the number of sides into a variable and added a variable to allow me to set the speed of moving along the side.
End of explanation
count=0
while True:
count=count+1
count
Explanation: There's Often Another Way - The while Loop
One of the joys - or maybe it's frustrations?! - of writing computer programmes is that there are often multiple ways of achieving the same effect.
For example, as an alternative to using a for loop, we could use another loop construct, the while loop.
The while loop is a conditional statement that starts by checking the truth or falsity of a condition; if that condition evaluates as "true", the while loop executes the code that is contained within it, otherwise the programme continues by executing the next statement after the while block.
If the while loop executes the code contained within it, once that code has finished executing the while loop checks the state of the condition again, and the process repeats.
So what is a logical, or Boolean, condition?
Conditions and Conditional Statements
Boolean or logical conditions are logical statements that evaluate as a Boolean True or False value. Conditional statements are statements that test a logical condition and then make a decision as to what to do next based on whether the condition evaluates as True or False.
In its simplest form, we can use a while statement to create an infinite loop by explicitly setting the tested condition to be True.
python
while True:
#an infinite loop
#The pass statement is a null statement ("do nothing")
pass
This loop will execute the contained code block indefinitely (an infinite number of times). In the above case, the programme will do nothing, forever.
You can stop an infinite loop running in a code cell by pressing the Stop button on the notebook toolbar.
For example, run the following while loop and then stop it:
End of explanation
count
Explanation: You may have noticed that the value of count, the final line of the cell, is not displayed.
This is because the programme never gets that far. When you stop the code executing, you stop it whilst it is still inside the while loop.
Check the count value to see how many times the while loop iterated round:
End of explanation
#Test some Boolean conditions, one at a time
Explanation: Rather than looping an infinite number of times, we could test a condition based on how many times we have already been round the loop.
In a code cell, explore the Boolean nature of the following examples, one at a time, or use ones of your own devising:
```python
Inequality tests
4 > 3
5 < 2
5 >= 3
print(3 <= 2)
We can test variables
apples=1
pears=2
apples > pears
apples + 1 == pears
Equality tests - this is not "double assignment"
1==1
1==2
apples==1
And "not equal" tests...
4 != 5
4 != 4
Integer numbers have a truth value: 0 is False, others are true
1==True
0==False
True==False
1==False
```
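As a quick sanity check (not part of the original activity), the expected truth values of a few of the conditions above can be asserted directly:

```python
# Each expression evaluates to a Boolean value; assert documents the result.
apples = 1
pears = 2

assert (4 > 3) is True
assert (5 < 2) is False
assert (apples > pears) is False
assert (apples + 1 == pears) is True
assert (4 != 5) is True
assert (1 == True) is True   # True behaves as the integer 1
assert (0 == False) is True  # False behaves as the integer 0
print("all conditions behave as described")
```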
End of explanation
#Use a while loop to print out a count going from 1 to 4
Explanation: Using a conditional test based on testing a count variable that increments each time a while loop loops round, see if you can use a while loop to count up from 1 to 4, displaying the count each time round the loop using a print() statement.
If the while loop looks like it's stuck in an infinite loop, stop it executing using the stop button in the notebook toolbar.
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
Explanation: How did you do?
Here's one way of doing it:
python
count=1
while count<=4:
    print(count)
    count=count+1
Now see if you can use a while loop to get our simulated robot to trace out a square.
End of explanation
count=1
#Looks like an infinite loop...
while True:
print(count)
#but with a break out clause
if count==4:
break
count = count+1
Explanation: Using break
If the condition tested by the while loop evaluates as False, the programme moves on to the next statement, if any, after the while block.
However, we can also break out of the while loop using another sort of conditional statement, the if statement, and invoking the break command, which moves the programme flow out of the while loop and onto the next statement, if any, after the while block.
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
Explanation: See if you can use this form of loop to get the simulated robot to traverse a square.
As you write your program, try to remember not to repeat yourself...
End of explanation |
8,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
REINFORCE in TensorFlow
Just like we did before for q-learning, this time we'll design a neural network to learn CartPole-v0 via policy gradient (REINFORCE).
Step1: Building the policy network
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
For numerical stability, please do not include the softmax layer into your network architecture.
We'll use softmax or log-softmax where appropriate.
Step2: Loss function and updates
We now need to define objective and update over policy gradient.
Our objective function is
$$ J \approx { 1 \over N } \sum_{s_i,a_i} \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$
Following the REINFORCE algorithm, we can define our objective as follows
Step5: Computing cumulative rewards
Step7: Playing the game
Step9: Results & video | Python Code:
# This code creates a virtual display to draw game images on.
# If you are running locally, just ignore it
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0")
# gym compatibility: unwrap TimeLimit
if hasattr(env,'env'):
env=env.env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
Explanation: REINFORCE in TensorFlow
Just like we did before for q-learning, this time we'll design a neural network to learn CartPole-v0 via policy gradient (REINFORCE).
End of explanation
import tensorflow as tf
tf.reset_default_graph()
# create input variables. We only need <s,a,R> for REINFORCE
states = tf.placeholder('float32', (None,)+state_dim, name="states")
actions = tf.placeholder('int32', name="action_ids")
cumulative_rewards = tf.placeholder('float32', name="cumulative_returns")
import keras
import keras.layers as L
#sess = tf.InteractiveSession()
#keras.backend.set_session(sess)
#<define network graph using raw tf or any deep learning library>
#network = keras.models.Sequential()
#network.add(L.InputLayer(state_dim))
#network.add(L.Dense(200, activation='relu'))
#network.add(L.Dense(200, activation='relu'))
#network.add(L.Dense(n_actions, activation='linear'))
network = keras.models.Sequential()
network.add(L.Dense(256, activation="relu", input_shape=state_dim, name="layer_1"))
network.add(L.Dense(n_actions, activation="linear", name="layer_2"))
print(network.summary())
#question: counting from the beginning of the model, the logits are in layer #9: model.layers[9].output
#logits = network.layers[2].output #<linear outputs (symbolic) of your network>
logits = network(states)
policy = tf.nn.softmax(logits)
log_policy = tf.nn.log_softmax(logits)
# utility function to pick action in one given state
def get_action_proba(s):
return policy.eval({states: [s]})[0]
Explanation: Building the policy network
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
For numerical stability, please do not include the softmax layer into your network architecture.
We'll use softmax or log-softmax where appropriate.
End of explanation
# select log-probabilities for chosen actions, log pi(a_i|s_i)
indices = tf.stack([tf.range(tf.shape(log_policy)[0]), actions], axis=-1)
log_policy_for_actions = tf.gather_nd(log_policy, indices)
# REINFORCE objective function
# hint: you need to use log_policy_for_actions to get log probabilities for actions taken
J = tf.reduce_mean((log_policy_for_actions * cumulative_rewards), axis=-1)# <policy objective as in the last formula. Please use mean, not sum.>
# regularize with entropy
entropy = -tf.reduce_mean(policy*log_policy) # <compute entropy. Don't forget the sign!>
# all network weights
all_weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) #<a list of all trainable weights in your network>
# weight updates. maximizing J is same as minimizing -J. Adding negative entropy.
loss = -J - 0.1*entropy
update = tf.train.AdamOptimizer().minimize(loss, var_list=all_weights)
Explanation: Loss function and updates
We now need to define objective and update over policy gradient.
Our objective function is
$$ J \approx { 1 \over N } \sum_{s_i,a_i} \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$
Following the REINFORCE algorithm, we can define our objective as follows:
$$ \hat J \approx { 1 \over N } \sum_{s_i,a_i} \log \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$
When you compute the gradient of that function over network weights $ \theta $, it will become exactly the policy gradient.
End of explanation
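As a toy numerical illustration of the surrogate objective (the numbers below are made up, not part of the assignment), $\hat J$ is simply the mean of the per-step products $\log \pi_\theta(a_i|s_i) \cdot G(s_i,a_i)$:

```python
import numpy as np

# Hypothetical values for a 3-step session (made up for illustration).
log_policy_for_actions = np.array([-0.1, -0.7, -0.2])  # log pi(a_i|s_i)
cumulative_rewards = np.array([2.0, 1.0, 1.0])         # G(s_i, a_i)

# Surrogate objective: the mean of the per-step products.
J_hat = np.mean(log_policy_for_actions * cumulative_rewards)
print(J_hat)  # approx -0.3667
```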
def get_cumulative_rewards(rewards, # rewards at each step
gamma=0.99 # discount for reward
):
"""
take a list of immediate rewards r(s,a) for the whole session
compute cumulative rewards R(s,a) (a.k.a. G(s,a) in Sutton '16)
R_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
The simple way to compute cumulative rewards is to iterate from last to first time tick
and compute R_t = r_t + gamma*R_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
#<your code here>
cumulative_rewards = np.zeros((len(rewards)))
cumulative_rewards[-1] = rewards[-1]
for t in range(len(rewards)-2, -1, -1):
cumulative_rewards[t] = rewards[t] + gamma * cumulative_rewards[t + 1]
return cumulative_rewards #< array of cumulative rewards>
assert len(get_cumulative_rewards(range(100))) == 100
assert np.allclose(get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9),
[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(get_cumulative_rewards([0, 0, 1, -2, 3, -4, 0], gamma=0.5),
[0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(get_cumulative_rewards([0, 0, 1, 2, 3, 4, 0], gamma=0),
[0, 0, 1, 2, 3, 4, 0])
print("looks good!")
def train_step(_states, _actions, _rewards):
"""given full session, trains agent with policy gradient"""
_cumulative_rewards = get_cumulative_rewards(_rewards)
update.run({states: _states, actions: _actions,
cumulative_rewards: _cumulative_rewards})
Explanation: Computing cumulative rewards
End of explanation
def generate_session(t_max=1000):
"""play env with REINFORCE agent and train at the session end"""
# arrays to record session
states, actions, rewards = [], [], []
s = env.reset()
for t in range(t_max):
# action probabilities array aka pi(a|s)
action_probas = get_action_proba(s)
a = np.random.choice(a=len(action_probas), p=action_probas) #<pick random action using action_probas>
new_s, r, done, info = env.step(a)
# record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done:
break
train_step(states, actions, rewards)
# technical: return session rewards to print them later
return sum(rewards)
s = tf.InteractiveSession()
s.run(tf.global_variables_initializer())
for i in range(100):
rewards = [generate_session() for _ in range(100)] # generate new sessions
print("mean reward:%.3f" % (np.mean(rewards)))
if np.mean(rewards) > 300:
print("You Win!") # but you can train even further
break
Explanation: Playing the game
End of explanation
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices
from submit import submit_cartpole
submit_cartpole(generate_session, "tonatiuh_rangel@hotmail.com", "Cecc5rcVxaVUYtsQ")
# That's all, thank you for your attention!
# Not having enough? There's an actor-critic waiting for you in the honor section.
# But make sure you've seen the videos first.
Explanation: Results & video
End of explanation |
8,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will
Step1: Load LendingClub Loans dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is availiable here. Make sure you download the dataset before running the following command.
Step2: The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan 0 means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be
Step3: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features
Step4: Now, let's look at the head of the dataset.
Step5: Performing one-hot encoding with Pandas
Before performing analysis on the data, we need to perform one-hot encoding for all of the categorical data. Once the one-hot encoding is performed on all of the data, we will split the data into a training set and a validation set.
Step6: Let's explore what the "grade_A" column looks like.
Step7: This column is set to 1 if the loan grade is A and 0 otherwise.
Loading the training and test datasets
Loading the JSON files with the indicies from the training data and the test data into a list.
Step8: Using the list of the training data indicies and the test data indicies to get a DataFrame with the training data and a DataFrame with the test data.
Step9: Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.
Now, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.
Note
Step10: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step11: Function to pick best feature to split on
The function best_splitting_feature takes 3 arguments
Step12: Now, creating a list of the features we are considering for the decision tree to test the above function. Not including the 0th element on the list since it corresponds to the "safe loans" column, the label we are trying to predict.
Step13: To test your best_splitting_feature function, run the following code
Step14: Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values
Step15: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step16: Here is a recursive function to count the nodes in your tree
Step17: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step18: Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning
Step19: Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
Step20: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step21: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step22: Quiz question
Step23: Quiz question
Step24: Quiz question
Step25: Evaluating your decision tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows
Step26: Now, let's use this function to evaluate the classification error on the test set.
Step27: Quiz Question
Step28: Printing out a decision stump
As discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader).
Step29: Quiz Question
Step30: Exploring the intermediate left subtree
The tree is a recursive dictionary, so we do have access to all the nodes! We can use
* my_decision_tree['left'] to go left
* my_decision_tree['right'] to go right
Step31: Exploring the left subtree of the left subtree
Step32: Quiz question
Step33: Quiz question | Python Code:
import json
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
Explanation: Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will:
Use DataFrames to do some feature engineering.
Transform categorical variables into binary variables.
Write a function to compute the number of misclassified examples in an intermediate node.
Write a function to find the best feature to split on.
Build a binary decision tree from scratch.
Make predictions using the decision tree.
Evaluate the accuracy of the decision tree.
Visualize the decision at the root node.
Important Note: In this assignment, we will focus on building decision trees where the data contain only binary (0 or 1) features. This allows us to avoid dealing with:
* Multiple intermediate nodes in a split
* The thresholding issues of real-valued features.
Importing Libraries
End of explanation
loans = pd.read_csv("lending-club-data_assign_2.csv")
Explanation: Load LendingClub Loans dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is availiable here. Make sure you download the dataset before running the following command.
End of explanation
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.drop('bad_loans', axis=1)
Explanation: The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan 0 means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* +1 as a safe loan,
* -1 as a risky (bad) loan.
We put this in a new column called safe_loans.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features:
grade of the loan
the length of the loan term
the home ownership status: own, mortgage, rent
number of years of employment.
Since we are building a binary decision tree, we will have to convert these categorical features to a binary representation in a subsequent section using 1-hot encoding.
End of explanation
loans.head(5)
Explanation: Now, let's look at the head of the dataset.
End of explanation
loans_one_hot_enc = pd.get_dummies(loans)
Explanation: Performing one-hot encoding with Pandas
Before performing analysis on the data, we need to perform one-hot encoding for all of the categorical data. Once the one-hot encoding is performed on all of the data, we will split the data into a training set and a validation set.
End of explanation
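As a minimal sketch of what pd.get_dummies does (using a made-up toy frame, not the loans data), each categorical column is replaced by one binary indicator column per category value:

```python
import pandas as pd

# A made-up toy frame with one categorical column.
toy = pd.DataFrame({'grade': ['A', 'B', 'A']})
encoded = pd.get_dummies(toy)

# One binary indicator column per category value.
print(encoded.columns.tolist())  # ['grade_A', 'grade_B']
print(encoded['grade_A'].tolist())
```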
loans_one_hot_enc["grade_A"].head(5)
Explanation: Let's explore what the "grade_A" column looks like.
End of explanation
with open('module-5-assignment-2-train-idx.json', 'r') as f:
train_idx_lst = json.load(f)
train_idx_lst = [int(entry) for entry in train_idx_lst]
with open('module-5-assignment-2-test-idx.json', 'r') as f:
test_idx_lst = json.load(f)
test_idx_lst = [int(entry) for entry in test_idx_lst]
Explanation: This column is set to 1 if the loan grade is A and 0 otherwise.
Loading the training and test datasets
Loading the JSON files with the indicies from the training data and the test data into a list.
End of explanation
train_data = loans_one_hot_enc.iloc[train_idx_lst]
test_data = loans_one_hot_enc.iloc[test_idx_lst]
Explanation: Using the list of the training data indicies and the test data indicies to get a DataFrame with the training data and a DataFrame with the test data.
End of explanation
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
N_count_plu_1 = (labels_in_node == 1).sum()
# Count the number of -1's (risky loans)
N_count_neg_1 = (labels_in_node == -1).sum()
# Return the number of mistakes that the majority classifier makes.
return min(N_count_plu_1, N_count_neg_1)
Explanation: Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.
Now, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.
Note: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node.
Steps to follow :
* Step 1: Calculate the number of safe loans and risky loans.
* Step 2: Since we are assuming majority class prediction, all the data points that are not in the majority class are considered mistakes.
* Step 3: Return the number of mistakes.
Now, let us write the function intermediate_node_num_mistakes which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
End of explanation
# Test case 1
example_labels = np.array([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = np.array([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = np.array([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
Explanation: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
End of explanation
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
left_mistakes = intermediate_node_num_mistakes(left_split[target].values)
# Calculate the number of misclassified examples in the right split.
right_mistakes = intermediate_node_num_mistakes(right_split[target].values)
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
error = (left_mistakes + right_mistakes)/num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
if error < best_error:
best_error = error
best_feature = feature
return best_feature # Return the best feature we found
Explanation: Function to pick best feature to split on
The function best_splitting_feature takes 3 arguments:
1. The data (SFrame of data which includes all of the feature columns and label column)
2. The features to consider for splits (a list of strings of column names to consider for splits)
3. The name of the target/label column (string)
The function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on.
Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Follow these steps:
* Step 1: Loop over each feature in the feature list
* Step 2: Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the left split), and one group where all of the data has feature value 1 or True (we will call this the right split). Make sure the left split corresponds with 0 and the right split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process.
* Step 3: Calculate the number of misclassified examples in both groups of data and use the above formula to compute the classification error.
* Step 4: If the computed error is smaller than the best error found so far, store this feature and its error.
This may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly.
Note: Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier.
Fill in the places where you find ## YOUR CODE HERE. There are five places in this function for you to fill in.
End of explanation
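As a quick sanity check of the error formula with made-up numbers: a split leaving 2 mistakes on the left and 1 on the right out of 10 data points has classification error 0.3:

```python
# Toy numbers: 2 mistakes on the left split, 1 on the right, 10 points total.
left_mistakes = 2
right_mistakes = 1
num_data_points = 10.0  # float, so the division is exact in Python 2 as well

error = (left_mistakes + right_mistakes) / num_data_points
print(error)  # 0.3
```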
feature_lst = train_data.columns.values.tolist()[1:]
print feature_lst
Explanation: Now, creating a list of the features we are considering for the decision tree to test the above function. Not including the 0th element on the list since it corresponds to the "safe loans" column, the label we are trying to predict.
End of explanation
if best_splitting_feature(train_data, feature_lst, 'safe_loans') == 'term_ 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: To test your best_splitting_feature function, run the following code:
End of explanation
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True }
# Count the number of data points that are +1 and -1 in this node.
num_ones = (target_values == 1).sum()
num_minus_ones = (target_values == -1).sum()
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
Explanation: Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'splitting_feature' : The feature that this node splits on.
}
First, we will write a function that creates a leaf node given a set of target values. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
End of explanation
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target].values
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
# If not mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if remaining_features == []: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth : ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
splitting_feature = best_splitting_feature(data, remaining_features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions:
1. Stopping condition 1: All data points in a node are from the same class.
2. Stopping condition 2: No more features to split on.
3. Additional stopping condition: In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the max_depth of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process.
Now, we will write down the skeleton of the learning algorithm. Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
End of explanation
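To see why stopping condition 1 works, note that the number of mistakes made by predicting the majority class is zero exactly when all labels at the node agree. A small standalone check (re-implementing the mistake count inline; the course's intermediate_node_num_mistakes is assumed to behave the same way):

```python
def num_mistakes(labels):
    # Mistakes made by predicting the majority class = size of the minority.
    ones = sum(1 for y in labels if y == 1)
    minus_ones = sum(1 for y in labels if y == -1)
    return min(ones, minus_ones)

print(num_mistakes([1, 1, 1]))      # 0 -> pure node, stopping condition 1 fires
print(num_mistakes([1, -1, 1, 1]))  # 1 -> node still mixed, keep splitting
```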
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
small_data_decision_tree = decision_tree_create(train_data, feature_lst, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
my_decision_tree = decision_tree_create(train_data, feature_lst, 'safe_loans', max_depth = 6)
Explanation: Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning: This code block may take 1-2 minutes to learn.
End of explanation
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
Explanation: Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
End of explanation
test_data.iloc[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data.iloc[0])
Explanation: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
End of explanation
classify(my_decision_tree, test_data.iloc[0], annotate=True)
Explanation: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
End of explanation
print "term_ 36 months"
Explanation: Quiz question: What was the feature that my_decision_tree first split on while making the prediction for test_data.iloc[0]?
End of explanation
print "grade_D"
Explanation: Quiz question: What was the first feature that led to a right split of test_data[0]?
End of explanation
print "grade_D"
Explanation: Quiz question: What was the last feature split on before reaching a leaf node for test_data[0]?
End of explanation
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
predictions = data.apply(lambda x: classify(tree, x, annotate=False) , axis = 1)
# Once you've made the predictions, calculate the classification error and return it
number_mistakes = (predictions != data['safe_loans'].values).sum()
total_examples = float(len(predictions))
classification_error = number_mistakes/total_examples
return classification_error
Explanation: Evaluating your decision tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Now, write a function called evaluate_classification_error that takes in as input:
1. tree (as described above)
2. data (an SFrame)
This function should return a prediction (class label) for each row in data using the decision tree. Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
End of explanation
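Before running it on the real tree, the formula can be sanity-checked on a tiny made-up set of predictions:

```python
# Toy check of the classification-error formula above.
predictions = [1, -1, 1, 1, -1]
true_labels = [1, 1, 1, -1, -1]
mistakes = sum(1 for p, y in zip(predictions, true_labels) if p != y)
error = mistakes / float(len(true_labels))
print(error)  # 2 mistakes out of 5 examples -> 0.4
```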
evaluate_classification_error(my_decision_tree, test_data)
Explanation: Now, let's use this function to evaluate the classification error on the test set.
End of explanation
print "Classification error of my_decision_tree on \
the test_data: %.2f" %(evaluate_classification_error(my_decision_tree, test_data))
Explanation: Quiz Question: Rounded to 2nd decimal point, what is the classification error of my_decision_tree on the test_data?
End of explanation
def print_stump(tree, name = 'root'):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('_')
print ' %s' % name
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0] [{0} == 1] '.format(split_name)
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(my_decision_tree)
Explanation: Printing out a decision stump
As discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader).
End of explanation
print "term_ 36 months"
Explanation: Quiz Question: What is the feature that is used for the split at the root node?
End of explanation
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
Explanation: Exploring the intermediate left subtree
The tree is a recursive dictionary, so we do have access to all the nodes! We can use
* my_decision_tree['left'] to go left
* my_decision_tree['right'] to go right
End of explanation
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
Explanation: Exploring the left subtree of the left subtree
End of explanation
print "term_ 36 months, grade_A, grade_B"
Explanation: Quiz question: What is the path of the first 3 feature splits considered along the left-most branch of my_decision_tree?
End of explanation
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['right']['right'], my_decision_tree['right']['splitting_feature'])
print "term_ 36 months, grade_D, no third feature because second split resulted in leaf"
Explanation: Quiz question: What is the path of the first 3 feature splits considered along the right-most branch of my_decision_tree?
End of explanation |
8,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Behavior of the median filter with noised sine waves
DW 2015.11.12
Step1: 1. Create all needed arrays and data.
Step2: Figure 1. Behavior of the median filter with given window length and different S/N ratio.
Step3: Figure 1.1 Behavior of the median filter with given window length and different S/N ratio.
Step4: Figure 2
Step5: Figure 2.1 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import medfilt
import sys
# Add a new path with needed .py files.
sys.path.insert(0, r'C:\Users\Dominik\Documents\GitRep\kt-2015-DSPHandsOn\MedianFilter\Python')  # raw string so the backslashes are not treated as escapes
import functions
import gitInformation
%matplotlib inline
gitInformation.printInformation()
Explanation: Behavior of the median filter with noised sine waves
DW 2015.11.12
End of explanation
# Sine wave, 16 wave numbers, 16*128 samples.
x = np.linspace(0, 2, 16*128)
data = np.sin(16*np.pi*x)
# Different noises with different standard deviations (spread or "width")
# will be saved in, so we can generate different signal to noise ratios
diff_noise = np.zeros((140,len(data)))
# Noised sine waves.
noised_sines = np.zeros((140,len(data)))
# Median filtered wave.
medfilter = np.zeros((140,len(data)))
# Filtered sine waves (noised_sines - medfilter)
filtered_sines = np.zeros((140,len(data)))
# Behavior of the median filter. Save the max values of the filtered waves in it.
behav = np.zeros(140)
# Lists with used window lengths and Signal to noise ratios
wl = [17,33,65,97, 129, 161, 193, 225, 257, 289, 321, 353, 385, 417, 449]
sn = [1, 1.5, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Explanation: 1. Create all needed arrays and data.
End of explanation
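A note on the constant 0.706341266 used as the noise scale in the next cell: it is approximately the RMS of a unit-amplitude sine, 1/sqrt(2) ≈ 0.7071, so a noise standard deviation of 0.706341266/sqrt(x) gives a signal-to-noise power ratio of about x (this is my reading of the constant; the notebook does not state it explicitly). A quick check of the sine's RMS on the grid defined above:

```python
import numpy as np

t = np.linspace(0, 2, 16 * 128)
sine = np.sin(16 * np.pi * t)
rms = np.sqrt(np.mean(np.square(sine)))
print(rms)  # ~0.7069, close to 1/sqrt(2) ~ 0.70711
```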
# Calculate and save all values.
# Because the for loop doesn't count from 1 to 10 for example,
# we need a counter to iterate through the array.
# The counter is assigned to -1, so we can iterate from 0 to len(values)
count = -1
count2 = -1
values = np.zeros((len(sn), len(wl)))
for w in wl[:11]:
count = count + 1
for x in sn:
count2 = count2 + 1
for i in range (len(diff_noise)):
# Create different noises, with x we change the signal to noise
# ratio from 10 to 1.
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
# Add noise to each sine wave, to create a realistic signal.
noised_sines[i, :] = data + diff_noise[i, :]
# Filter all the noised sine waves.
medfilter[i, :] = medfilt(noised_sines[i, :], w)
# Subtract the filtered wave from the noised sine waves.
filtered_sines[i, :] = noised_sines[i, :] - medfilter[i, :]
# Calculate the root mean square (RMS) of each sine wave
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
# Calculate the mean of the behavior, so we can see how
# the signal to noise ratio affects the median filter
# with different window lengths.
mean = np.mean(behav)
# Save the result in the 'values' array
values[count2:count2+1:,count] = mean
# Set count2 back to -1, so we can iterate again from 0 to len(values).
# Otherwise the counter would keep growing and go out of range.
count2 = - 1
# Save the array, because the calculation take some time.
# Load the array with "values = np.loadtxt('values.txt')".
np.savetxt("values.txt", values)
values = np.loadtxt("values.txt")
viridis_data = np.loadtxt('viridis_data.txt')
plasma_data = np.loadtxt('plasma_data.txt')
# viris_data and plasma_data taken from
# https://github.com/BIDS/colormap/blob/master/colormaps.py
fig = plt.figure(figsize=(20, 7))
for p in range(10):
ax = plt.subplot(2, 5, p + 1)  # matplotlib subplot indices start at 1, and a 2x5 grid holds 10 panels
plt.axis([0, 11, 0, 1.5])
plt.plot(sn,values[:,p], 'o-', color=viridis_data[(p*25)-25,:])
plt.savefig('Behavior with given SN ratio and different wl.png',dpi=300)
fig = plt.figure()
values3 = np.zeros((len(sn),len(wl)))
for p in range(6):
ax = plt.subplot()
values3[:,p] = values[::,p]/0.7069341
plt.axis([0, 11, 0, 2])
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('S/N Ratio', size = 14)
plt.hlines(1,1,10, color = 'b', linestyle = '--')
plt.plot(sn,values3[:,p], color=plasma_data[(p*40),:])
plt.savefig('Behavior with given SN ratio and different wl3.png',dpi=300)
Explanation: Figure 1. Behavior of the median filter with given window length and different S/N ratio.
End of explanation
# Alternatively, we subtract the filtered wave from the original sine wave,
# not from the noised sine wave.
count = -1
count2 = -1
values = np.zeros((len(sn), len(wl)))
for w in wl[:11]:
count = count + 1
for x in sn:
count2 = count2 + 1
for i in range (len(diff_noise)):
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
noised_sines[i, :] = data + diff_noise[i, :]
medfilter[i, :] = medfilt(noised_sines[i, :], w)
filtered_sines[i, :] = data - medfilter[i, :]
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
mean = np.mean(behav)
values[count2:count2+1:,count] = mean
count2 = - 1
np.savetxt("valuesA.txt", values)
valuesA = np.loadtxt("valuesA.txt")
fig = plt.figure()
values3 = np.zeros((len(sn),len(wl)))
for p in range(6):
ax = plt.subplot()
values3[::,p] = valuesA[::,p]/0.7069341
plt.axis([0, 11, 0, 2])
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('S/N Ratio', size = 14)
plt.hlines(1,1,101, color = 'b', linestyle = '--')
plt.plot(sn,values3[::,p], color=plasma_data[(p*40),:])
plt.savefig('Behavior with given SN ratio and different wl3A.png',dpi=300)
Explanation: Figure 1.1 Behavior of the median filter with given window length and different S/N ratio.
End of explanation
values = np.zeros((len(wl), len(sn)))
count = -1
count2 = -1
for x in sn:
count = count + 1
for w in wl:
count2 = count2 + 1
for i in range (len(diff_noise)):
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
noised_sines[i, :] = data + diff_noise[i, :]
medfilter[i, :] = medfilt(noised_sines[i, :], w)
filtered_sines[i, :] = noised_sines[i, :] - medfilter[i, :]
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
mean = np.mean(behav)
values[count2:count2+1:,-count] = mean
count2 = -1
np.savetxt("values2.txt", values)
values2 = np.loadtxt("values2.txt")
fig = plt.figure(figsize=(30,7))
for p in range(10):
ax = plt.subplot(2, 5, p + 1)  # matplotlib subplot indices start at 1, and a 2x5 grid holds 10 panels
plt.axis([0, 450, 0, 1.5])
xticks = np.arange(0, max(wl), 64)
ax.set_xticks(xticks)
x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))]
ax.set_xticklabels(x_label)
plt.plot(wl,values2[::,p], color=viridis_data[(p*25),:])
plt.savefig('Behavior with given wl and different SN ratio.png',dpi=300)
fig = plt.figure()
values4 = np.zeros((len(wl), len(sn)))
for p in range (10):
# Normalize the RMS with the RMS of a normal sine wave
values4[::,p] = values2[::,p]/0.7069341
ax = plt.subplot()
plt.axis([0, 450, 0, 2])
# Set xticks at each 64th point
xticks = np.arange(0, max(wl) + 1, 64)
ax.set_xticks(xticks)
# x_label = pi at each 64th point
x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))]
ax.set_xticklabels(x_label)
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('Window length', size = 14)
plt.plot(wl,values4[::,p], color=viridis_data[(p*25)-25,:])
plt.hlines(1,1,max(wl), color = 'b', linestyle = '--')
plt.savefig('Behavior with given wl and different SN ratio3.png',dpi=300)
Explanation: Figure 2: Behavior of the median filter with given window length and different S/N ratio
End of explanation
# Alternative
values = np.zeros((len(wl), len(sn)))
count = -1
count2 = -1
for x in sn:
count = count + 1
for w in wl:
count2 = count2 + 1
for i in range (len(diff_noise)):
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
noised_sines[i, :] = data + diff_noise[i, :]
medfilter[i, :] = medfilt(noised_sines[i, :], w)
filtered_sines[i, :] = data - medfilter[i, :]
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
mean = np.mean(behav)
values[count2:count2+1:,-count] = mean
count2 = -1
np.savetxt("values2A.txt", values)
values2A = np.loadtxt("values2A.txt")
fig = plt.figure()
values4 = np.zeros((len(wl), len(sn)))
for i in range (11):
# Normalize the RMS with the RMS of a normal sine wave
values4[::,i] = values2A[::,i]/0.7069341
ax = plt.subplot()
plt.axis([0, 450, 0, 2])
# Set xticks at each 64th point
xticks = np.arange(0, max(wl) + 1, 64)
ax.set_xticks(xticks)
# x_label = pi at each 64th point
x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))]
ax.set_xticklabels(x_label)
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('Window length', size = 14)
plt.plot(wl,values4[::,i], color=viridis_data[(i*25)-25,:])
plt.hlines(1,1,max(wl), color = 'b', linestyle = '--')
plt.savefig('Behavior with given wl and different SN ratio2A.png',dpi=300)
Explanation: Figure 2.1: Behavior of the median filter with given window length and different S/N ratio
End of explanation |
8,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting gender from Indonesian names using Machine Learning
Loading dataset
Step1: Cleansing dataset
Step2: Split Dataset
The dataset will be split into two parts: 70% of the data will be used as training data to train the machine, and the remaining 30% will be used as test data to evaluate the machine learning prediction accuracy.
Step3: The dataset has been split into 2 parts; let's check its distribution.
Step4: As the results show, the split dataset still preserves the gender distribution percentages of the original dataset.
Features Extraction
The feature extraction process affects the accuracy obtained later. Here I will use a simple method, CountVectorizer, which builds a matrix of character occurrence frequencies in each given name, with the ngram_range 2 - 6 option analyzed within a single word only.
For example, Muhammad Irfani Sahnur produces the n-grams
Step5: Logistic Regression
The first experiment uses the Logistic Regression algorithm. The feature extraction output will be fed in as training data.
Step6: The prediction accuracy obtained on the test data is fairly decent, at 93.6%
Step7: Accuracy metrics detail
Step8: Testing gender prediction
Step9: Using a Pipeline
Scikit has a feature that simplifies the process above using Pipeline. The code becomes simpler and tidier; here is the code above converted to use Pipeline
Step10: The accuracy level is exactly the same, and the code is easier to write.
Let's test gender prediction again
Step11: Naive Bayes
The next algorithm we will use is Naive Bayes. Let's try it right away
Step12: With the Naive Bayes algorithm, the accuracy obtained is only slightly lower than Logistic Regression, at 93.3%. Let's test gender prediction again
Step13: Random Forest
The last algorithm we will use is Random Forest. Let's try it right away
Step14: With the Random Forest algorithm, the accuracy obtained is lower than the two previous algorithms, at 93.12%.
This algorithm also has a drawback: slower performance.
OK, let's test gender prediction again | Python Code:
import pandas as pd # pandas is a dataframe library
df = pd.read_csv("./data/data-pemilih-kpu.csv", encoding = 'utf-8-sig')
#dimensi dataset terdiri dari 13137 baris dan 2 kolom
df.shape
#melihat 5 baris pertama dataset
df.head(5)
#melihat 5 baris terakhir dataset
df.tail(5)
Explanation: Predicting gender from Indonesian names using Machine Learning
Loading dataset
End of explanation
# mengecek apakah ada data yang berisi null
df.isnull().values.any()
# mengecek jumlah baris data yang berisi null
len(df[pd.isnull(df).any(axis=1)])
# menghapus baris null dan recheck kembali
df = df.dropna(how='all')
len(df[pd.isnull(df).any(axis=1)])
# mengecek dimensi dataset
df.shape
# mengubah isi kolom jenis kelamin dari text menjadi integer (Laki-laki = 1; Perempuan= 0)
jk_map = {"Laki-Laki" : 1, "Perempuan" : 0}
df["jenis_kelamin"] = df["jenis_kelamin"].map(jk_map)
# cek kembali data apakah telah berubah
df.head(5)
# Mengecek distribusi jenis kelamin pada dataset
num_obs = len(df)
num_true = len(df.loc[df['jenis_kelamin'] == 1])
num_false = len(df.loc[df['jenis_kelamin'] == 0])
print("Jumlah Pria: {0} ({1:2.2f}%)".format(num_true, (num_true/num_obs) * 100))
print("Jumlah Wanita: {0} ({1:2.2f}%)".format(num_false, (num_false/num_obs) * 100))
Explanation: Cleansing dataset
End of explanation
from sklearn.model_selection import train_test_split
feature_col_names = ["nama"]
predicted_class_names = ["jenis_kelamin"]
X = df[feature_col_names].values
y = df[predicted_class_names].values
split_test_size = 0.30
text_train, text_test, y_train, y_test = train_test_split(X, y, test_size=split_test_size, stratify=y, random_state=42)
Explanation: Split Dataset
The dataset will be split into two parts: 70% of the data will be used as training data to train the machine, and the remaining 30% will be used as test data to evaluate the machine learning prediction accuracy.
End of explanation
print("Dataset Asli Pria : {0} ({1:0.2f}%)".format(len(df.loc[df['jenis_kelamin'] == 1]), (len(df.loc[df['jenis_kelamin'] == 1])/len(df.index)) * 100.0))
print("Dataset Asli Wanita : {0} ({1:0.2f}%)".format(len(df.loc[df['jenis_kelamin'] == 0]), (len(df.loc[df['jenis_kelamin'] == 0])/len(df.index)) * 100.0))
print("")
print("Dataset Training Pria : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 1]), (len(y_train[y_train[:] == 1])/len(y_train) * 100.0)))
print("Dataset Training Wanita : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 0]), (len(y_train[y_train[:] == 0])/len(y_train) * 100.0)))
print("")
print("Dataset Test Pria : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 1]), (len(y_test[y_test[:] == 1])/len(y_test) * 100.0)))
print("Dataset Test Wanita : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 0]), (len(y_test[y_test[:] == 0])/len(y_test) * 100.0)))
Explanation: The dataset has been split into 2 parts; let's check its distribution.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))
vectorizer.fit(text_train.ravel())
X_train = vectorizer.transform(text_train.ravel())
X_test = vectorizer.transform(text_test.ravel())
Explanation: As the results show, the split dataset still preserves the gender distribution percentages of the original dataset.
Features Extraction
The feature extraction process affects the accuracy obtained later. Here I will use a simple method, CountVectorizer, which builds a matrix of character occurrence frequencies in each given name, with the ngram_range 2 - 6 option analyzed within a single word only.
For example, Muhammad Irfani Sahnur produces the n-grams :
* mu
* ham
* mad
* nur
* etc.
End of explanation
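A rough sketch of what those char_wb n-grams look like — each word is padded with spaces and scanned with sliding windows of length 2 to 6, never crossing a word boundary (a plain-Python approximation of what CountVectorizer does internally, not sklearn's actual implementation):

```python
def char_wb_ngrams(text, nmin=2, nmax=6):
    grams = set()
    for word in text.lower().split():
        padded = " %s " % word  # char_wb pads word edges with spaces
        for n in range(nmin, nmax + 1):
            for i in range(len(padded) - n + 1):
                grams.add(padded[i:i + n])
    return grams

grams = char_wb_ngrams("Muhammad Irfani Sahnur")
print("mu" in grams, "ham" in grams, "mad" in grams, "nur" in grams)  # all True
```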
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_train, y_train.ravel())
Explanation: Logistic Regression
The first experiment uses the Logistic Regression algorithm. The feature extraction output will be fed in as training data.
End of explanation
# dataset training
print(clf.score(X_train, y_train))
# dataset test
print(clf.score(X_test, y_test))
Explanation: The prediction accuracy obtained on the test data is fairly decent, at 93.6%
End of explanation
from sklearn import metrics
clf_predict = clf.predict(X_test)
# test-set metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, clf_predict)))
print(metrics.confusion_matrix(y_test, clf_predict, labels=[1, 0]) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, clf_predict, labels=[1,0]))
Explanation: Accuracy metrics detail
End of explanation
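To read the confusion matrix and classification report printed above, it helps to compute the headline numbers by hand once. With hypothetical counts (for illustration only, not the notebook's actual output):

```python
# Rows = actual class, columns = predicted class, in labels order [1, 0]
# (1 = male, 0 = female), matching the labels argument used above.
tp, fn = 90, 10   # actual male:   90 predicted male, 10 predicted female
fp, tn = 15, 85   # actual female: 15 predicted male, 85 predicted female

accuracy = (tp + tn) / float(tp + tn + fp + fn)
precision = tp / float(tp + fp)  # of those predicted male, how many were male
recall = tp / float(tp + fn)     # of the actual males, how many were found
print(accuracy, precision, recall)  # 0.875, ~0.857, 0.9
```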
jk_label = {1:"Laki-Laki", 0:"Perempuan"}
test_predict = vectorizer.transform(["niky felina"])
res = clf.predict(test_predict)
print(jk_label[int(res)])
Explanation: Testing gender prediction
End of explanation
import numpy as np  # needed for np.mean below; numpy was not imported earlier in this notebook
from sklearn.pipeline import Pipeline
clf_lg = Pipeline([('vect', CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))),
('clf', LogisticRegression()),
])
_ = clf_lg.fit(text_train.ravel(), y_train.ravel())
predicted = clf_lg.predict(text_test.ravel())
np.mean(predicted == y_test.ravel())
Explanation: Using a Pipeline
Scikit has a feature that simplifies the process above using Pipeline. The code becomes simpler and tidier; here is the code above converted to use Pipeline
End of explanation
result = clf_lg.predict(["muhammad irfani sahnur"])
print(jk_label[result[0]])
Explanation: The accuracy level is exactly the same, and the code is easier to write.
Let's test gender prediction again
End of explanation
from sklearn.naive_bayes import MultinomialNB
clf_nb = Pipeline([('vect', CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))),
('clf', MultinomialNB()),
])
clf_nb = clf_nb.fit(text_train.ravel(), y_train.ravel())
predicted = clf_nb.predict(text_test.ravel())
np.mean(predicted == y_test.ravel())
Explanation: Naive Bayes
The next algorithm we will use is Naive Bayes. Let's try it right away
End of explanation
result = clf_nb.predict(["Alifah Rahmah"])
print(jk_label[result[0]])
Explanation: With the Naive Bayes algorithm, the accuracy obtained is only slightly lower than Logistic Regression, at 93.3%. Let's test gender prediction again
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf_rf = Pipeline([('vect', CountVectorizer(analyzer = 'char_wb', ngram_range=(2,6))),
('clf', RandomForestClassifier(n_estimators=90, n_jobs=-1)),
])
clf_rf = clf_rf.fit(text_train.ravel(), y_train.ravel())
predicted = clf_rf.predict(text_test.ravel())
np.mean(predicted == y_test.ravel())
Explanation: Random Forest
The last algorithm we will use is Random Forest. Let's try it right away
End of explanation
result = clf_rf.predict(["Yuni ahmad"])
print(jk_label[result[0]])
Explanation: With the Random Forest algorithm, the accuracy obtained is lower than the two previous algorithms, at 93.12%.
This algorithm also has a drawback: slower performance.
OK, let's test gender prediction again
End of explanation |
8,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> 2d. Distributed training and monitoring </h1>
In this notebook, we refactor to call train_and_evaluate instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.
Step1: <h2> Input </h2>
Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will use add a filename queue to the TensorFlow graph.
Step2: <h2> Create features out of input data </h2>
For now, pass these through. (same as previous lab)
Step3: <h2> Serving input function </h2>
Step4: <h2> tf.estimator.train_and_evaluate </h2>
Step5: <h2>Run training</h2> | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
Explanation: <h1> 2d. Distributed training and monitoring </h1>
In this notebook, we refactor to call train_and_evaluate instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.
End of explanation
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.compat.v1.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
Explanation: <h2> Input </h2>
Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will use add a filename queue to the TensorFlow graph.
End of explanation
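The one-to-many vs one-to-one distinction between flat_map and map noted in the code comments above can be illustrated with plain Python lists, no TensorFlow required (file names and contents are made up):

```python
filenames = ["a.csv", "b.csv"]
fake_files = {"a.csv": ["line1", "line2"], "b.csv": ["line3"]}

# flat_map: each filename expands into MANY text lines (one-to-many).
lines = [line for fn in filenames for line in fake_files[fn]]

# map: each text line becomes exactly ONE parsed record (one-to-one).
records = [line.upper() for line in lines]

print(lines)    # ['line1', 'line2', 'line3']
print(records)  # ['LINE1', 'LINE2', 'LINE3']
```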
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
Explanation: <h2> Create features out of input data </h2>
For now, pass these through. (same as previous lab)
End of explanation
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.compat.v1.placeholder(tf.float32, [None]),
'pickuplat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflon' : tf.compat.v1.placeholder(tf.float32, [None]),
'passengers' : tf.compat.v1.placeholder(tf.float32, [None]),
}
# You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
Explanation: <h2> Serving input function </h2>
End of explanation
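Once deployed, the REST endpoint would receive JSON whose fields line up with the placeholders above. A hypothetical single-instance payload (field values invented for illustration) might look like:

```python
import json

instance = {
    "pickuplon": -73.99, "pickuplat": 40.75,
    "dropofflon": -73.98, "dropofflat": 40.76,
    "passengers": 2.0,
}
payload = json.dumps({"instances": [instance]})
print(payload)
```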
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: <h2> tf.estimator.train_and_evaluate </h2>
End of explanation
OUTDIR = './taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate(OUTDIR, num_train_steps = 500)
Explanation: <h2>Run training</h2>
End of explanation |
8,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Formatando-arrays-para-impressão" data-toc-modified-id="Formatando-arrays-para-impressão-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Formatting arrays for printing</a></div><div class="lev2 toc-item"><a href="#Imprimindo-arrays-de-ponto-flutuante" data-toc-modified-id="Imprimindo-arrays-de-ponto-flutuante-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Printing floating-point arrays</a></div><div class="lev2 toc-item"><a href="#Imprimindo-arrays-binários" data-toc-modified-id="Imprimindo-arrays-binários-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Printing binary arrays</a></div>
# Formatting arrays for printing
## Printing floating-point arrays
When printing arrays of floating-point values, NumPy generally shows many decimal places and scientific notation, which makes the output hard to read.
Step1: You can reduce the number of decimal places and suppress the exponential notation with NumPy's <b>set_printoptions</b> function
Step2: Printing binary arrays
Boolean arrays are printed with the words <b>True</b> and <b>False</b>, as in the following example
Step3: To make these arrays easier to read, you can convert the values to integers using the <b>astype(int)</b> method
import numpy as np
A = np.exp(np.linspace(0.1,10,32)).reshape(4,8)/3000.
print('A: \n', A)
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Formatando-arrays-para-impressão" data-toc-modified-id="Formatando-arrays-para-impressão-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Formatting arrays for printing</a></div><div class="lev2 toc-item"><a href="#Imprimindo-arrays-de-ponto-flutuante" data-toc-modified-id="Imprimindo-arrays-de-ponto-flutuante-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Printing floating-point arrays</a></div><div class="lev2 toc-item"><a href="#Imprimindo-arrays-binários" data-toc-modified-id="Imprimindo-arrays-binários-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Printing binary arrays</a></div>
# Formatting arrays for printing
## Printing floating-point arrays
When printing arrays of floating-point values, NumPy generally shows many decimal places and scientific notation, which makes the output hard to read.
End of explanation
np.set_printoptions(suppress=True, precision=3)
print('A: \n', A)
Explanation: You can reduce the number of decimal places and suppress the exponential notation with NumPy's <b>set_printoptions</b> function:
End of explanation
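Note that np.set_printoptions changes the settings globally for the rest of the session. If you only want the compact format temporarily, NumPy 1.15+ also offers the np.printoptions context manager, which restores the previous settings when the block exits:

```python
import numpy as np

A = np.exp(np.linspace(0.1, 10, 8)) / 3000.0

# Inside the with-block the temporary settings apply; outside it,
# the previous (global) settings are restored automatically.
with np.printoptions(suppress=True, precision=3):
    compact = np.array2string(A)
print(compact)
```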
A = np.random.rand(5,10) > 0.5
print('A = \n', A)
Explanation: Printing binary arrays
Boolean arrays are printed with the words <b>True</b> and <b>False</b>, as in the following example:
End of explanation
print('A = \n', A.astype(int))
Explanation: To make these arrays easier to read, you can convert the values to integers using the <b>astype(int)</b> method:
End of explanation |
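Because True and False behave as 1 and 0 in arithmetic, the integer view also makes counting trivial. A small illustration with a fixed array (the random one above changes on every run):

```python
import numpy as np

A = np.array([[True, False], [True, True]])
print(A.astype(int))   # [[1 0] [1 1]]
print(int(A.sum()))    # 3, since each True counts as 1
```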
8,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Migrate SessionRunHook to Keras callbacks
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: TensorFlow 1
Step4: TensorFlow 2 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import tensorflow.compat.v1 as tf1
import time
from datetime import datetime
from absl import flags
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]
eval_labels = [[0.8], [0.9], [1.]]
Explanation: Migrate SessionRunHook to Keras callbacks
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/sessionrunhook_callback">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/sessionrunhook_callback.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/sessionrunhook_callback.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/sessionrunhook_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In TensorFlow 1, to customize the behavior of training, you use tf.estimator.SessionRunHook with tf.estimator.Estimator. This guide demonstrates how to migrate from SessionRunHook to TensorFlow 2's custom callbacks with the tf.keras.callbacks.Callback API, which works with Keras Model.fit for training (as well as Model.evaluate and Model.predict). You will learn how to do this by implementing a SessionRunHook and a Callback task that measures examples per second during training.
Examples of callbacks are checkpoint saving (tf.keras.callbacks.ModelCheckpoint) and TensorBoard summary writing. Keras callbacks are objects that are called at different points during training/evaluation/prediction in the built-in Keras Model.fit/Model.evaluate/Model.predict APIs. You can learn more about callbacks in the tf.keras.callbacks.Callback API docs, as well as the Writing your own callbacks and Training and evaluation with the built-in methods (the Using callbacks section) guides.
Setup
Start with imports and a simple dataset for demonstration purposes:
End of explanation
def _input_fn():
return tf1.data.Dataset.from_tensor_slices(
(features, labels)).batch(1).repeat(100)
def _model_fn(features, labels, mode):
logits = tf1.layers.Dense(1)(features)
loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
class LoggerHook(tf1.train.SessionRunHook):
    """Logs loss and runtime."""
def begin(self):
self._step = -1
self._start_time = time.time()
self.log_frequency = 10
def before_run(self, run_context):
self._step += 1
def after_run(self, run_context, run_values):
if self._step % self.log_frequency == 0:
current_time = time.time()
duration = current_time - self._start_time
self._start_time = current_time
examples_per_sec = self.log_frequency / duration
print('Time:', datetime.now(), ', Step #:', self._step,
', Examples per second:', examples_per_sec)
estimator = tf1.estimator.Estimator(model_fn=_model_fn)
# Begin training.
estimator.train(_input_fn, hooks=[LoggerHook()])
Explanation: TensorFlow 1: Create a custom SessionRunHook with tf.estimator APIs
The following TensorFlow 1 examples show how to set up a custom SessionRunHook that measures examples per second during training. After creating the hook (LoggerHook), pass it to the hooks parameter of tf.estimator.Estimator.train.
End of explanation
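Two details of the hook's arithmetic are worth spelling out: the timer restarts at every log point, so each reported rate covers only the most recent window of steps, and since the batch size here is 1, steps per second equals examples per second (for larger batches you would multiply by the batch size). A framework-free sketch of the same bookkeeping (the names are mine, and it skips the ill-defined first window at step 0):

```python
import time

def measure_rates(num_steps, batch_size, step_fn, log_frequency=10):
    # Same bookkeeping as LoggerHook: restart the timer at each log point
    # so every rate covers only the most recent window of steps.
    rates = []
    start = time.perf_counter()
    for step in range(num_steps):
        step_fn()
        if step % log_frequency == 0 and step > 0:
            now = time.perf_counter()
            rates.append(log_frequency * batch_size / (now - start))
            start = now
    return rates

rates = measure_rates(num_steps=21, batch_size=1, step_fn=lambda: sum(range(100)))
# Logs fire at steps 10 and 20, so two rates are recorded.
```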
class CustomCallback(tf.keras.callbacks.Callback):
def on_train_begin(self, logs = None):
self._step = -1
self._start_time = time.time()
self.log_frequency = 10
def on_train_batch_begin(self, batch, logs = None):
self._step += 1
def on_train_batch_end(self, batch, logs = None):
if self._step % self.log_frequency == 0:
current_time = time.time()
duration = current_time - self._start_time
self._start_time = current_time
examples_per_sec = self.log_frequency / duration
print('Time:', datetime.now(), ', Step #:', self._step,
', Examples per second:', examples_per_sec)
callback = CustomCallback()
dataset = tf.data.Dataset.from_tensor_slices(
(features, labels)).batch(1).repeat(100)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer, "mse")
# Begin training.
result = model.fit(dataset, callbacks=[callback], verbose = 0)
# Provide the results of training metrics.
result.history
Explanation: TensorFlow 2: Create a custom Keras callback for Model.fit
In TensorFlow 2, when you use the built-in Keras Model.fit (or Model.evaluate) for training/evaluation, you can configure a custom tf.keras.callbacks.Callback, which you then pass to the callbacks parameter of Model.fit (or Model.evaluate). (Learn more in the Writing your own callbacks guide.)
In the example below, you will write a custom tf.keras.callbacks.Callback that logs various metrics—it will measure examples per second, which should be comparable to the metrics in the previous SessionRunHook example.
End of explanation |
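Both APIs rely on the same inversion of control: the training loop owns the iteration and calls back into your object at fixed points. Keras's real implementation is considerably more involved, but the dispatch order can be sketched in a few lines of plain Python (all names here are mine):

```python
class Callback:
    # No-op hooks; subclasses override only what they need,
    # mirroring tf.keras.callbacks.Callback.
    def on_train_begin(self): pass
    def on_train_batch_begin(self, batch): pass
    def on_train_batch_end(self, batch): pass

class Recorder(Callback):
    def __init__(self):
        self.events = []
    def on_train_begin(self):
        self.events.append("begin")
    def on_train_batch_end(self, batch):
        self.events.append(f"batch {batch} done")

def fit(num_batches, callbacks):
    # Simplified dispatch order of Model.fit for a single epoch.
    for cb in callbacks:
        cb.on_train_begin()
    for batch in range(num_batches):
        for cb in callbacks:
            cb.on_train_batch_begin(batch)
        # ... one optimisation step would run here ...
        for cb in callbacks:
            cb.on_train_batch_end(batch)

recorder = Recorder()
fit(2, [recorder])
# recorder.events == ["begin", "batch 0 done", "batch 1 done"]
```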
8,127 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Given a list of variable-length features, for example: | Problem:
import pandas as pd
import numpy as np
import sklearn
features = load_data()
from sklearn.preprocessing import MultiLabelBinarizer
# One-hot encode the variable-length feature lists.
new_features = MultiLabelBinarizer().fit_transform(features)
# Invert the encoding: 1 (label present) becomes 0, 0 (absent) becomes 1.
rows, cols = new_features.shape
for i in range(rows):
    for j in range(cols):
        if new_features[i, j] == 1:
            new_features[i, j] = 0
        else:
            new_features[i, j] = 1
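As an aside, the element-wise double loop works, but with a NumPy array the inversion can be written as one vectorized expression. A sketch with a hand-made 0/1 matrix standing in for the fit_transform output:

```python
import numpy as np

binarized = np.array([[1, 0, 1],
                      [0, 1, 0]])   # stand-in for the binarizer's output

inverted = 1 - binarized            # flips every 1 to 0 and every 0 to 1
# inverted == [[0, 1, 0], [1, 0, 1]]
```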
8,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Table of Contents
<p><div class="lev1 toc-item"><a href="#Looping-the-Property-Extraction" data-toc-modified-id="Looping-the-Property-Extraction-1"><span class="toc-item-num">1 </span>Looping the Property Extraction</a></div><div class="lev1 toc-item"><a href="#Functions" data-toc-modified-id="Functions-2"><span class="toc-item-num">2 </span>Functions</a></div><div class="lev1 toc-item"><a href="#Filling-arrays-in-a-loop" data-toc-modified-id="Filling-arrays-in-a-loop-3"><span class="toc-item-num">3 </span>Filling arrays in a loop</a></div><div class="lev1 toc-item"><a href="#Saving-your-results-to-disk" data-toc-modified-id="Saving-your-results-to-disk-4"><span class="toc-item-num">4 </span>Saving your results to disk</a></div><div class="lev1 toc-item"><a href="#Data-Summary" data-toc-modified-id="Data-Summary-5"><span class="toc-item-num">5 </span>Data Summary</a></div><div class="lev1 toc-item"><a href="#Plotting-Data" data-toc-modified-id="Plotting-Data-6"><span class="toc-item-num">6 </span>Plotting Data</a></div><div class="lev2 toc-item"><a href="#Histograms" data-toc-modified-id="Histograms-61"><span class="toc-item-num">6.1 </span>Histograms</a></div><div class="lev2 toc-item"><a href="#Box-Plots" data-toc-modified-id="Box-Plots-62"><span class="toc-item-num">6.2 </span>Box-Plots</a></div><div class="lev1 toc-item"><a href="#The-End" data-toc-modified-id="The-End-7"><span class="toc-item-num">7 </span>The End</a></div>
# Looping the Property Extraction
In the last exercise we saw how to extract the properties of individual objects in an image. We will now combine all the actions of reading, thresholding, labeling and measuring into a loop.
To make it easy to understand I have pasted a possible method of doing this; try to understand as much as you can by looking at the program and reading the comments. We will briefly explore parts of the code subsequently.
Step3: Functions
As you read through the program, most things will seem familiar except this section
Step4: This is a function definition. That's right, we can define our own functions. This function in particular takes a normal image as input, performs thresholding, removes small objects, performs labeling and returns the labelled image.
The function might seem unnecessary when you can just put the code in your main program. However, functions allow you to break up the program into smaller bits which are easier to manage and test. You should employ functions every chance you get. Cleaner, easier-to-read code is much easier to understand.
If you come from other programming languages, you will be a bit irritated by the fact that Python does not use explicit start and end curly brackets to define where a function starts and ends. Instead, like with for-loops, the indentation determines the function definition.
Some functions return an object and some don't. The function print(), for example, doesn't actually return anything, but instead just prints a statement on the screen. In contrast our function returns a labelled image. This is defined by the line
Step5: These arrays are then filled in the loop with the extracted properties. The "filling" is done by appending new values to the old array and giving this new, appended array the same name as the old one. This renaming lets us loop without much of a headache.
For example the out_area array is filled as below
Step6: Note
Step7: We can peek into what the dataframe looks like with the head function. The n=5 specifies that the first 5 rows should be shown.
Step8: Data Summary
So far we have read the images and stored the measurements in a dataframe. We can do further statistical analysis in external software; however, Python is sufficient for doing some quick analysis to get a general read on your data. The pandas package makes it very easy to plot data and to get basic stats like mean, sd and percentiles.
Advanced statistical analysis is also possible in Python; however, it is out of the scope of this document.
The describe function returns basic stats about the data. Notice that the groupby() function is used to get stats specific to the cell types. The 'Area' string corresponds to the column name on which the describe function acts
Step9: Plotting Data
The pandas dataframe also makes it easy to plot different kinds of plots once the data frame has been defined. Below we describe how to plot histograms and box-plots. These are rough plots and the looks can be improved with some tinkering. Read the matplotlib package's tutorials for how to go about making the exact kind of plots you want.
Histograms
One of the basic ways to explore data is to compare histograms of the data. As before with the describe() function, the hist() function can be pointed to specific columns for plotting and grouping. Grouping means the program uses the values in the column 'CellType' to separate the data into Type1 and Type2 and to add labels to the subplots.
The 'sharey' and 'sharex' parameters of the hist() function are set to true so that we can have the same axes for both the groups.
Step10: Box-Plots
Box plots can be generated in a similar manner to the hist() function. | Python Code:
# Import required packages
# File handling
import os
import glob
# Array handling
import numpy as np
# Image handling
from skimage.io import imread
# Image thresholding and measurement
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects
from skimage.measure import label, regionprops
# Instruction to jupyter notebook to show images within document
%matplotlib inline
# Function to label images
def an_thresh_label_image(in_img):
    """Function to take in an image and return a labelled image for use in regionprops.
    The remove_small_objects() function is applied to remove small bright objects."""
thresh_val = threshold_otsu(in_img)
thresh_img = in_img>thresh_val
thresh_img = remove_small_objects(thresh_img)
label_img = label(thresh_img)
return label_img
###### MAIN PROGRAM ######
# Define file paths and image pattern
root_root = '/home/aneesh/Images/Source/' # Replace as per local configuration
file_patt = '/*.tif' # pattern of file that is searched for by glob.glob()
# Get list of folders with images
folds = os.listdir(root_root)
# Get a list of lists (corresponding to number of folders) of full file paths
file_paths = [glob.glob(root_root+fold+file_patt) for fold in folds]
# Define empty arrays to be filled in the loop
out_celltype = np.array([])
out_area = np.array([])
out_maj_axis_length = np.array([])
# Iterate over lists in the file_paths list
for i, fold in enumerate(file_paths):
# Iterate over individual file paths in the lists
for j,file_path in enumerate(fold):
# Read Image
in_img = imread(file_path, as_grey=True)
# Label Image
lbl_im = an_thresh_label_image(in_img)
# Extract properties
r_props = regionprops(label_image=lbl_im, intensity_image=in_img)
# Store Area
out_area = np.append(out_area, np.array([rp.area for rp in r_props]))
# Store Major Axis Length
out_maj_axis_length = np.append(out_maj_axis_length, np.array([rp.major_axis_length for rp in r_props]))
# Store cell type corresponding to each object
out_celltype = np.append(out_celltype,[folds[i]]*len(r_props))
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Looping-the-Property-Extraction" data-toc-modified-id="Looping-the-Property-Extraction-1"><span class="toc-item-num">1 </span>Looping the Property Extraction</a></div><div class="lev1 toc-item"><a href="#Functions" data-toc-modified-id="Functions-2"><span class="toc-item-num">2 </span>Functions</a></div><div class="lev1 toc-item"><a href="#Filling-arrays-in-a-loop" data-toc-modified-id="Filling-arrays-in-a-loop-3"><span class="toc-item-num">3 </span>Filling arrays in a loop</a></div><div class="lev1 toc-item"><a href="#Saving-your-results-to-disk" data-toc-modified-id="Saving-your-results-to-disk-4"><span class="toc-item-num">4 </span>Saving your results to disk</a></div><div class="lev1 toc-item"><a href="#Data-Summary" data-toc-modified-id="Data-Summary-5"><span class="toc-item-num">5 </span>Data Summary</a></div><div class="lev1 toc-item"><a href="#Plotting-Data" data-toc-modified-id="Plotting-Data-6"><span class="toc-item-num">6 </span>Plotting Data</a></div><div class="lev2 toc-item"><a href="#Histograms" data-toc-modified-id="Histograms-61"><span class="toc-item-num">6.1 </span>Histograms</a></div><div class="lev2 toc-item"><a href="#Box-Plots" data-toc-modified-id="Box-Plots-62"><span class="toc-item-num">6.2 </span>Box-Plots</a></div><div class="lev1 toc-item"><a href="#The-End" data-toc-modified-id="The-End-7"><span class="toc-item-num">7 </span>The End</a></div>
# Looping the Property Extraction
In the last exercise we saw how to extract the properties of individual objects in an image. We will now combine all the actions of reading, thresholding, labeling and measuring into a loop.
To make it easy to understand I have pasted a possible method of doing this; try to understand as much as you can by looking at the program and reading the comments. We will briefly explore parts of the code subsequently.
End of explanation
# Function to label images
def an_thresh_label_image(in_img):
    """Function to take in an image and return a labelled image for use in regionprops.
    The remove_small_objects() function is applied to remove small bright objects."""
thresh_val = threshold_otsu(in_img)
thresh_img = in_img>thresh_val
thresh_img = remove_small_objects(thresh_img)
label_img = label(thresh_img)
return label_img
Explanation: Functions
As you read through the program, most things will seem familiar except this section:
End of explanation
# Define empty arrays to be filled in the loop
out_celltype = np.array([])
out_area = np.array([])
out_maj_axis_length = np.array([])
Explanation: This is a function definition. That's right, we can define our own functions. This function in particular takes a normal image as input, performs thresholding, removes small objects, performs labeling and returns the labelled image.
The function might seem unnecessary when you can just put the code in your main program. However, functions allow you to break up the program into smaller bits which are easier to manage and test. You should employ functions every chance you get. Cleaner, easier-to-read code is much easier to understand.
If you come from other programming languages, you will be a bit irritated by the fact that Python does not use explicit start and end curly brackets to define where a function starts and ends. Instead, like with for-loops, the indentation determines the function definition.
Some functions return an object and some don't. The function print(), for example, doesn't actually return anything, but instead just prints a statement on the screen. In contrast our function returns a labelled image. This is defined by the line:
return label_img
This function gets called in the inner loop of the main program after the image is read. It is important to understand that what is done in a function stays in the function. This means that the variables defined, and their names, exist only in the context of the function. Even the values returned are returned without a name. Thus when the function is called we store the result in the variable 'lbl_im'.
Filling arrays in a loop
The next lines where we define the paths and get the file lists should also be familiar. Since we want to extract and work with the properties extracted from the images we need to store them in some fashion. For this we employ arrays. At the start of the loops we define empty arrays:
End of explanation
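The point about names living only inside the function can be demonstrated with a toy example (nothing to do with images; the names are made up):

```python
def double(x):
    result = x * 2   # 'result' exists only while the function runs
    return result    # the *value* is returned, not the name

answer = double(21)  # the caller binds its own name to the returned value
print(answer)        # prints 42; 'result' is not defined out here
```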
out_area = np.append(out_area, np.array([rp.area for rp in r_props]))
Explanation: These arrays are then filled in the loop with the extracted properties. The "filling" is done by appending new values to the old array and giving this new, appended array the same name as the old one. This renaming lets us loop without much of a headache.
For example the out_area array is filled as below:
End of explanation
import pandas as pd
# path to save csv file
save_path = '/home/aneesh/Images/Analysis/'
# create the data frame from the arrays; each array becomes a column named by its
# dictionary key (the older DataFrame.from_items was removed in pandas 1.0)
props_df = pd.DataFrame({"CellType": out_celltype, "Area": out_area, "MajAxisLength": out_maj_axis_length})
# combine the save path with file name and save to csv format
props_df.to_csv(save_path+'cell_props.csv')
Explanation: Note: The way of filling arrays that we have employed works well enough for our small example but might be unsatisfactory due to the memory reallocation overheads for larger datasets. There are ways around this such as "preallocation" which should be looked at if needed.
Saving your results to disk
Now that we have all the properties we want saved in 3 arrays, we would like to save the results to a file which we can open for statistical analysis in software such as Excel, Origin or GraphPad. The go-to format for such files is the csv format. In Python the "pandas" package gives a very handy way of creating data types called "DataFrames" to collect information in a spreadsheet-like system and also save it.
This is done as below:
End of explanation
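Besides full preallocation, another common way around the reallocation overhead mentioned in the note is to collect per-image results in a Python list and concatenate once at the end (a sketch with made-up numbers):

```python
import numpy as np

chunks = []
for batch in ([1.0, 2.0], [3.0], [4.0, 5.0]):   # stand-ins for per-image areas
    chunks.append(np.asarray(batch))            # cheap list append per image

out_area_fast = np.concatenate(chunks)          # single allocation at the end
# out_area_fast == [1. 2. 3. 4. 5.]
```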
props_df.head(n=5)
Explanation: We can peek into what the dataframe looks like with the head function. The n=5 specifies that the first 5 rows should be shown.
End of explanation
area_summary = pd.DataFrame(props_df.groupby(["CellType"])['Area'].describe())
area_summary
mal_summary = pd.DataFrame(props_df.groupby(["CellType"])['MajAxisLength'].describe())
mal_summary
Explanation: Data Summary
So far we have read the images and stored the measurements in a dataframe. We can do further statistical analysis in external software; however, Python is sufficient for doing some quick analysis to get a general read on your data. The pandas package makes it very easy to plot data and to get basic stats like mean, sd and percentiles.
Advanced statistical analysis is also possible in Python; however, it is out of the scope of this document.
The describe function returns basic stats about the data. Notice that the groupby() function is used to get stats specific to the cell types. The 'Area' string corresponds to the column name on which the describe function acts
End of explanation
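If you want specific statistics rather than the full describe() table, groupby() also combines with agg(). A sketch on a tiny stand-in frame (the real props_df is built from the images):

```python
import pandas as pd

df = pd.DataFrame({
    "CellType": ["Type1", "Type1", "Type2", "Type2"],
    "Area": [100.0, 120.0, 300.0, 340.0],
})

# agg() extracts just the statistics you care about, per group.
stats = df.groupby("CellType")["Area"].agg(["mean", "std"])
# stats.loc["Type1", "mean"] == 110.0 and stats.loc["Type2", "mean"] == 320.0
```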
# Import matplotlib's pyplot to be able to add the title to plots
import matplotlib.pyplot as plt
# Plot for area
axes = props_df.hist(column="Area", by="CellType", sharey=True, sharex=True)
plt.suptitle('Area') # Add title
# Plot for Major Axis Length
props_df.hist(column="MajAxisLength", by="CellType", sharey=True, sharex=True)
plt.suptitle('Major Axis Length') # Add title
Explanation: Plotting Data
The pandas dataframe also makes it easy to plot different kinds of plots once the data frame has been defined. Below we describe how to plot histograms and box-plots. These are rough plots and the looks can be improved with some tinkering. Read the matplotlib package's tutorials for how to go about making the exact kind of plots you want.
Histograms
One of the basic ways to explore data is to compare histograms of the data. As before with the describe() function, the hist() function can be pointed to specific columns for plotting and grouping. Grouping means the program uses the values in the column 'CellType' to separate the data into Type1 and Type2 and to add labels to the subplots.
The 'sharey' and 'sharex' parameters of the hist() function are set to true so that we can have the same axes for both the groups.
End of explanation
props_df.boxplot(column="Area", by="CellType")
props_df.boxplot(column="MajAxisLength", by="CellType")
Explanation: Box-Plots
Box plots can be generated in a similar manner to the hist() function.
End of explanation |
8,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-l', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-L
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
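Before running the cell above, it can help to guard the flag. The helper below is a minimal sketch and not part of pyesdoc; only `DOC.set_publication_status` comes from the notebook itself, and it expects 0 or 1 as documented in the cell's comments.

```python
# Minimal sketch (not part of pyesdoc): validate the publication flag
# before handing it to DOC.set_publication_status, which expects 0 or 1.

def checked_publication_status(flag):
    """Return the flag unchanged if it is a valid publication status (0 or 1)."""
    if flag not in (0, 1):
        raise ValueError(
            "publication status must be 0 (do not publish) or 1 (publish)")
    return flag

# Usage inside the notebook (assumption: DOC is already initialised):
# DOC.set_publication_status(checked_publication_status(0))
```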
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
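For ENUM cells like the coupler above, typos in the value string would produce an invalid document entry. One hedged way to fail fast is to check the candidate against the cell's "Valid Choices" list before calling `DOC.set_value`; the helper and constant names below are illustrative, not part of pyesdoc.

```python
# Minimal sketch (names are illustrative, not part of pyesdoc): check a
# candidate value against the cell's "Valid Choices" list before calling
# DOC.set_value, so a typo raises immediately.

COUPLER_CHOICES = [
    "OASIS", "OASIS3-MCT", "ESMF", "NUOPC",
    "Bespoke", "Unknown", "None", "Other: [Please specify]",
]

def checked_enum(value, choices):
    """Return value unchanged if it is one of the allowed ENUM choices."""
    if value not in choices:
        raise ValueError(f"{value!r} is not one of {choices}")
    return value

# Usage inside the notebook (assumption: DOC is already initialised):
# DOC.set_value(checked_enum("OASIS3-MCT", COUPLER_CHOICES))
```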
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
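The provision properties in sections 12 through 29 are ENUMs with cardinality 1.N, so at least one value is required and several may apply. A minimal sketch of validating such a multi-valued selection before setting it is shown below; the helper name and the loop over `DOC.set_value` are assumptions about usage, not pyesdoc API.

```python
# Minimal sketch (helper name is illustrative, not part of pyesdoc):
# a cardinality 1.N ENUM needs at least one value, each drawn from the
# cell's "Valid Choices" list.

PROVISION_CHOICES = ["N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"]

def checked_provisions(values, choices=PROVISION_CHOICES):
    """Validate a 1.N ENUM selection: non-empty, every entry an allowed choice."""
    if not values:
        raise ValueError("cardinality 1.N requires at least one value")
    bad = [v for v in values if v not in choices]
    if bad:
        raise ValueError(f"invalid provision value(s): {bad}")
    return values

# Usage inside the notebook (assumption: DOC is already initialised):
# for value in checked_provisions(["Y"]):
#     DOC.set_value(value)
```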
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
8,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{Phase Lock Loop Components in myHDL}
\author{Steven K Armour}
\maketitle
This notebook is an exploration into building and testing the Phase Detector and the frequency divider components of an all-Digital Phase Lock Loop. Here the Digital Oscillator and the low pass filter are left for their own exploratory analysis to then allow the reader to design and implement their own PLL.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top
Step2: The Phase Lock Loop
The phase lock loop (PLL) is one of the six classical feedback topologies in electrical engineering, the others being the Voltage-Voltage (Series-Shunt), Voltage-Current (Shunt-Shunt), Current-Current (Shunt-Series), Current-Voltage (Series-Series), and the Autoregressive Moving Average (ARMA, aka IIR Filter). The PLL differs from the rest in that it is a temporal-comparison feedback system that only works on oscillatory information. Looking at the diagram below for the generic classical PLL, we see that it is made of five (six if a frequency divider is added to the reference clock) components. Two of these, the Phase Detector and the output frequency divider that make up the feedback loop, are somewhat unique to the PLL and are the primary concern of this notebook.
<img src='PLL.png'>
The other components, which make up the forward path, are the filter, typically of the Low Pass variety, which helps take out some of the "phase noise" error that creeps into the PLL; the reference oscillator, which supplies the temporal reference (the PLL controls phase, and phase is a measure of temporal displacement against a reference); and the Controlled Oscillator. In brief, the controlled oscillator is made to accelerate or retard its temporal progression (how fast it oscillates) relative to the reference oscillator according to the error provided by the feedback loop. The error is then given as
$$e(t)=K_{PD}(\Phi_{\text{ref}}(t)-\Phi_{\text{ocl}}(t)/N)$$ Where
$K_{PD}$ is the transfer function of the Phase Detector, which will be expanded upon shortly. Then, noting that the transfer function of the voltage-controlled oscillator is $K_{CO}$ and that of the filter is $H(s)$, the resulting open-loop and closed-loop gains can be found to be
$$A(s)=\dfrac{K_{PD} H(s) K_{CO}}{Ns}$$
$$G(s)=\dfrac{K_{PD} H(s) K_{CO}}{s+K_{PD} H(s) \dfrac{K_{CO}}{N}}$$
Phase Detectors
The phase detector (PD) is the only must-have component of a PLL, the part that makes a PLL a PLL, calculating the phase offset error between a reference frequency source and the controlled frequency source via the negative feedback loop. It is the job of the PD to ensure that the output waveform is, within some margin, in sync with the reference waveform, thereby creating a phase lock. If the reference waveform and the controlled waveform are out of sync, then the loop is said to be unlocked. While for analog PLLs there are a variety of PDs, mostly based on mixers (see Wiki Phase detector), for digital phase detectors there are basically only two kinds: the very primitive Negated XOR and the D flip-flop state machine variety.
Negated XOR
<img src='NXOR_PD.png'>
NXOR_PD myHDL implementation
Step4: NXOR_PD myHDL Testing
Step5: Verilog implementation
Step7: Sequntial Phase Detector
<img src="SeqPD.png">
The Sequential PD is the most common digital phase detector around, and while variations of this architecture can be found, they are all based on this simple but ingenious design. The architecture consists of two DFFs, but unlike in a typical state machine, where a master clock controls the flipping of the DFFs and the data line into the DFFs is part of the state-machine feedback loop, here the DFFs are each on an independent clock: the upper DFF is tied to the Reference Clock and the lower one is tied to the Feedback Clock. Therefore each one will output a high signal at a rate determined by the frequency of its respective clock.
But that is not the end of the cleverness of this topology. The outputs of the DFFs are continuously compared by an AND gate such that only when the DFFs are in sync ($\omega_{\text{REF}}=\omega_{\text{FB}}$) will a very brief high spike show up on both next-state lines of the DFFs before the two DFFs are reset. For the other two possible conditions, only one of the two lines will have a high value present. If $\omega_{\text{REF}}>\omega_{\text{FB}}$ then the lower output will be zero, and conversely if $\omega_{\text{REF}}<\omega_{\text{FB}}$ the upper output will be zero.
The above conditions as described by Razavi yield the following state machine for the Sequential Phase Detector
<img src='SeqPDSM.png'>
Note that the actual phase detection is the average of the two output lines. While not shown here, one method to find the average is to scale the boolean outputs to digital words and then pass the resultant words to an averager and then to a low pass filter in order to implement the PLL.
Seq_PD myHDL implementation
Step9: Seq_PD myHDL testing
Step10: Test when $\omega_{\text{REF}}>\omega_{\text{FB}}$
Step11: Test when $\omega_{\text{REF}}=\omega_{\text{FB}}$
Step12: Test when $\omega_{\text{REF}}<\omega_{\text{FB}}$
Step13: Verilog implementation
Step15: Fractional Frequency Dividers
Frequency Dividers (more properly called fractional frequency dividers) are and are not a type of counter. A traditional counter is run off a master clock to increment the count, and when the specified count is reached an indication signal is given while the counter is also reset. In a fractional frequency divider we also increment a counter, but the incrementing is run off the input clock, which may or may not be the master clock, and the output of the counter reaching its count then becomes the new divided output clock. In addition, we can create "fractional" dividers such as $2/3$ by cascading a $1/2$ and a $1/3$ divider and modulating between them from another clock source that controls the modulation.
The reason that "fractional" is in quotes in regards to $2/3$ is that the frequency divider is not actually dividing by $2/3$ but is instead switching between a $1/2$ and a $1/3$ divider. Thus "$2/3$ divider" is a notational misnomer. But by cascading fixed and variable dividers, with counters to control the modulation based on the clock being cascaded, large fractional divisions can be obtained.
So, as stated, frequency dividers are and are not counters, though some of the more advanced programmable frequency dividers such as http
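The modulated $2/3$ division described above can be sketched with a small Python model (my own illustration; the function name is hypothetical), which also shows how alternating the modulus gives an average division ratio between 2 and 3:

```python
# Count output edges of a dual-modulus (/2, /3) divider over n_in_edges input
# edges; mod_sequence picks the modulus for each output cycle (True -> /3)
def dual_modulus_edges(n_in_edges, mod_sequence):
    out_edges = 0
    count = 0
    mod_idx = 0
    modulus = 3 if mod_sequence[0] else 2
    for _ in range(n_in_edges):
        count += 1
        if count == modulus:  # terminal count reached -> emit one output edge
            out_edges += 1
            count = 0
            mod_idx = (mod_idx + 1) % len(mod_sequence)
            modulus = 3 if mod_sequence[mod_idx] else 2
    return out_edges

assert dual_modulus_edges(100, [False]) == 50        # fixed /2
assert dual_modulus_edges(100, [False, True]) == 40  # alternating: /2.5 average
```

Switching the modulus every output cycle consumes 5 input edges per 2 output edges, which is the sense in which the cascade "divides by 2.5" on average.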
Step16: Divide by 2 myHDL testing
Step17: Verilog implementation
Step19: Divide by 3
<img src="FD3.png">
Divide by 3 myHDL implementation
Step20: Divide by 3 myHDL testing
Step21: Verilog implementation
Step23: Divide by 2/3
<img src="FD23.png">
This is a simple hybridization of the $1/2$ and the $1/3$ frequency dividers that is modulated between the two by a ModControl signal at the OR gate to create a $2/3$ frequency divider.
Divide by 2/3 myHDL implementation
Step24: Divide by 2/3 myHDL testing
Step25: Verilog implementation
Step27: Divide by 4/5
<img src='FD45.png'>
Divide by 4/5 myHDL implementation
Step28: Divide by 4/5 myHDL testing
Step29: Verilog implementation
Step31: Divide by 6 via cascaded 2 and 3
Divide by 6 myHDL implementation
Step32: Divide by 6 myHDL testing
Step33: Verilog implementation | Python Code:
from myhdl import *
from myhdlpeek import Peeker
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
Explanation: \title{Phase Lock Loop Components in myHDL}
\author{Steven K Armour}
\maketitle
Explanation: \title{Phase Lock Loop Components in myHDL}
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#References" data-toc-modified-id="References-1"><span class="toc-item-num">1 </span>References</a></span></li><li><span><a href="#Libraries-used-and-aux-functions" data-toc-modified-id="Libraries-used-and-aux-functions-2"><span class="toc-item-num">2 </span>Libraries used and aux functions</a></span></li><li><span><a href="#The-Phase-Lock-Loop" data-toc-modified-id="The-Phase-Lock-Loop-3"><span class="toc-item-num">3 </span>The Phase Lock Loop</a></span></li><li><span><a href="#Phase-Detectors" data-toc-modified-id="Phase-Detectors-4"><span class="toc-item-num">4 </span>Phase Detectors</a></span><ul class="toc-item"><li><span><a href="#Negated-XOR" data-toc-modified-id="Negated-XOR-4.1"><span class="toc-item-num">4.1 </span>Negated XOR</a></span><ul class="toc-item"><li><span><a href="#NXOR_PD-myHDL-implementation" data-toc-modified-id="NXOR_PD-myHDL-implementation-4.1.1"><span class="toc-item-num">4.1.1 </span>NXOR_PD myHDL implementation</a></span></li><li><span><a href="#NXOR_PD-myHDL-Testing" data-toc-modified-id="NXOR_PD-myHDL-Testing-4.1.2"><span class="toc-item-num">4.1.2 </span>NXOR_PD myHDL Testing</a></span></li><li><span><a href="#Verilog-implementation" data-toc-modified-id="Verilog-implementation-4.1.3"><span class="toc-item-num">4.1.3 </span>Verilog implementation</a></span></li></ul></li><li><span><a href="#Sequntial--Phase-Detector" data-toc-modified-id="Sequntial--Phase-Detector-4.2"><span class="toc-item-num">4.2 </span>Sequntial Phase Detector</a></span><ul class="toc-item"><li><span><a href="#Seq_PD-myHDL-implementation" data-toc-modified-id="Seq_PD-myHDL-implementation-4.2.1"><span class="toc-item-num">4.2.1 </span>Seq_PD myHDL implementation</a></span></li><li><span><a href="#Seq_PD-myHDL-testing" data-toc-modified-id="Seq_PD-myHDL-testing-4.2.2"><span class="toc-item-num">4.2.2 </span>Seq_PD myHDL testing</a></span><ul class="toc-item"><li><span><a 
href="#Test-when-$\omega_{\text{REF}}>\omega_{\text{FB}}$" data-toc-modified-id="Test-when-$\omega_{\text{REF}}>\omega_{\text{FB}}$-4.2.2.1"><span class="toc-item-num">4.2.2.1 </span>Test when $\omega_{\text{REF}}>\omega_{\text{FB}}$</a></span></li><li><span><a href="#Test-when-$\omega_{\text{REF}}=\omega_{\text{FB}}$" data-toc-modified-id="Test-when-$\omega_{\text{REF}}=\omega_{\text{FB}}$-4.2.2.2"><span class="toc-item-num">4.2.2.2 </span>Test when $\omega_{\text{REF}}=\omega_{\text{FB}}$</a></span></li><li><span><a href="#Test-when-$\omega_{\text{REF}}<\omega_{\text{FB}}$" data-toc-modified-id="Test-when-$\omega_{\text{REF}}<\omega_{\text{FB}}$-4.2.2.3"><span class="toc-item-num">4.2.2.3 </span>Test when $\omega_{\text{REF}}<\omega_{\text{FB}}$</a></span></li></ul></li><li><span><a href="#Verilog-implementation¶" data-toc-modified-id="Verilog-implementation¶-4.2.3"><span class="toc-item-num">4.2.3 </span>Verilog implementation¶</a></span></li></ul></li></ul></li><li><span><a href="#Fractional-Frequency-Dividers" data-toc-modified-id="Fractional-Frequency-Dividers-5"><span class="toc-item-num">5 </span>Fractional Frequency Dividers</a></span><ul class="toc-item"><li><span><a href="#Divide-by-2" data-toc-modified-id="Divide-by-2-5.1"><span class="toc-item-num">5.1 </span>Divide by 2</a></span><ul class="toc-item"><li><span><a href="#Divide-by-2-myHDL-implementation" data-toc-modified-id="Divide-by-2-myHDL-implementation-5.1.1"><span class="toc-item-num">5.1.1 </span>Divide by 2 myHDL implementation</a></span></li><li><span><a href="#Divide-by-2-myHDL-testing" data-toc-modified-id="Divide-by-2-myHDL-testing-5.1.2"><span class="toc-item-num">5.1.2 </span>Divide by 2 myHDL testing</a></span></li><li><span><a href="#Verilog-implementation" data-toc-modified-id="Verilog-implementation-5.1.3"><span class="toc-item-num">5.1.3 </span>Verilog implementation</a></span></li></ul></li><li><span><a href="#Divide-by-3" data-toc-modified-id="Divide-by-3-5.2"><span 
class="toc-item-num">5.2 </span>Divide by 3</a></span><ul class="toc-item"><li><span><a href="#Divide-by-3-myHDL-implementation" data-toc-modified-id="Divide-by-3-myHDL-implementation-5.2.1"><span class="toc-item-num">5.2.1 </span>Divide by 3 myHDL implementation</a></span></li><li><span><a href="#Divide-by-3-myHDL-testing" data-toc-modified-id="Divide-by-3-myHDL-testing-5.2.2"><span class="toc-item-num">5.2.2 </span>Divide by 3 myHDL testing</a></span></li><li><span><a href="#Verilog-implementation" data-toc-modified-id="Verilog-implementation-5.2.3"><span class="toc-item-num">5.2.3 </span>Verilog implementation</a></span></li></ul></li><li><span><a href="#Divide-by-2/3" data-toc-modified-id="Divide-by-2/3-5.3"><span class="toc-item-num">5.3 </span>Divide by 2/3</a></span><ul class="toc-item"><li><span><a href="#Divide-by-2/3-myHDL-implementation" data-toc-modified-id="Divide-by-2/3-myHDL-implementation-5.3.1"><span class="toc-item-num">5.3.1 </span>Divide by 2/3 myHDL implementation</a></span></li><li><span><a href="#Divide-by-2/3-myHDL-testing" data-toc-modified-id="Divide-by-2/3-myHDL-testing-5.3.2"><span class="toc-item-num">5.3.2 </span>Divide by 2/3 myHDL testing</a></span></li><li><span><a href="#Verilog-implementation" data-toc-modified-id="Verilog-implementation-5.3.3"><span class="toc-item-num">5.3.3 </span>Verilog implementation</a></span></li></ul></li><li><span><a href="#Divide-by-4/5" data-toc-modified-id="Divide-by-4/5-5.4"><span class="toc-item-num">5.4 </span>Divide by 4/5</a></span><ul class="toc-item"><li><span><a href="#Divide-by-4/5-myHDL-implementation" data-toc-modified-id="Divide-by-4/5-myHDL-implementation-5.4.1"><span class="toc-item-num">5.4.1 </span>Divide by 4/5 myHDL implementation</a></span></li><li><span><a href="#Divide-by-4/5-myHDL-testing" data-toc-modified-id="Divide-by-4/5-myHDL-testing-5.4.2"><span class="toc-item-num">5.4.2 </span>Divide by 4/5 myHDL testing</a></span></li><li><span><a href="#Verilog-implementation" 
data-toc-modified-id="Verilog-implementation-5.4.3"><span class="toc-item-num">5.4.3 </span>Verilog implementation</a></span></li></ul></li><li><span><a href="#Divide-by-6-via-cascaded-2-and-3" data-toc-modified-id="Divide-by-6-via-cascaded-2-and-3-5.5"><span class="toc-item-num">5.5 </span>Divide by 6 via cascaded 2 and 3</a></span><ul class="toc-item"><li><span><a href="#Divide-by-6-myHDL-implementation" data-toc-modified-id="Divide-by-6-myHDL-implementation-5.5.1"><span class="toc-item-num">5.5.1 </span>Divide by 6 myHDL implementation</a></span></li><li><span><a href="#Divide-by-6-myHDL-testing" data-toc-modified-id="Divide-by-6-myHDL-testing-5.5.2"><span class="toc-item-num">5.5.2 </span>Divide by 6 myHDL testing</a></span></li><li><span><a href="#Verilog-implementation" data-toc-modified-id="Verilog-implementation-5.5.3"><span class="toc-item-num">5.5.3 </span>Verilog implementation</a></span></li></ul></li></ul></li><li><span><a href="#ToDo" data-toc-modified-id="ToDo-6"><span class="toc-item-num">6 </span>ToDo</a></span></li></ul></div>
References
@misc{allen_2003,
title={LECTURE 170 APPLICATIONS OF PLLS AND FREQUENCY DIVIDERS (PRESCALERS)},
author={Allen, Phillip E.},
year={2003}
},
@phdthesis{gal_2012,
title={Design of Fractional-N Phase Locked Loops For Frequency Synthesis From 30 To 40 GHz},
school={McGill University},
author={Gal, George},
year={2012}
},
@misc{niknejad_2014,
title={Phase Locked Loops (PLL) and Frequency Synthesis},
author={Niknejad, Ali M.},
year={2014}
},
@book{razavi_2009,
place={Upper Saddle River, NJ},
edition={1},
title={RF microelectronics},
publisher={Prentice Hall},
author={Razavi, Behzad},
year={2009},
pages={Chapter 8}
}
@book{craninckx_steyaert_1998,
place={New York},
title={Wireless CMOS frequency synthesizer design},
publisher={Springer},
author={Craninckx, J and Steyaert, M},
year={1998},
pages={42-46}
}
Libraries used and aux functions
End of explanation
@block
def NXORPD(clkREF, clkFB, LOCK):
Negated XOR Phase Detector
I/O:
clkREF (bool; in): Ref clock
clkFB (bool; in): Compare clock
LOCK (bool; out): Negated XOR (LOCK) result
@always_comb
def logic():
LOCK.next= not (clkREF^clkFB)
return instances()
Explanation: The Phase Lock Loop
The phase lock loop (PLL) is one of the six classical feedback topologies in electrical engineering, the others being the Voltage-Voltage (Series-Shunt), Voltage-Current (Shunt-Shunt), Current-Current (Shunt-Series), Current-Voltage (Series-Series), and the Autoregressive Moving Average (ARMA, aka IIR Filter). The PLL differs from the rest in that it is a temporal-comparison feedback system that only works on oscillatory information. Looking at the diagram below for the generic classical PLL, we see that it is made of five (six if a frequency divider is added to the reference clock) components. Two of these, the Phase Detector and the output frequency divider that make up the feedback loop, are somewhat unique to the PLL and are the primary concern of this notebook.
<img src='PLL.png'>
The other components, which make up the forward path, are the filter, typically of the Low Pass variety, which helps take out some of the "phase noise" error that creeps into the PLL; the reference oscillator, which supplies the temporal reference (the PLL controls phase, and phase is a measure of temporal displacement against a reference); and the Controlled Oscillator. In brief, the controlled oscillator is made to accelerate or retard its temporal progression (how fast it oscillates) relative to the reference oscillator according to the error provided by the feedback loop. The error is then given as
$$e(t)=K_{PD}(\Phi_{\text{ref}}(t)-\Phi_{\text{ocl}}(t)/N)$$ Where
$K_{PD}$ is the transfer function of the Phase Detector, which will be expanded upon shortly. Then, noting that the transfer function of the voltage-controlled oscillator is $K_{CO}$ and that of the filter is $H(s)$, the resulting open-loop and closed-loop gains can be found to be
$$A(s)=\dfrac{K_{PD} H(s) K_{CO}}{Ns}$$
$$G(s)=\dfrac{K_{PD} H(s) K_{CO}}{s+K_{PD} H(s) \dfrac{K_{CO}}{N}}$$
Phase Detectors
The phase detector (PD) is the only must-have component of a PLL, the part that makes a PLL a PLL, calculating the phase offset error between a reference frequency source and the controlled frequency source via the negative feedback loop. It is the job of the PD to ensure that the output waveform is, within some margin, in sync with the reference waveform, thereby creating a phase lock. If the reference waveform and the controlled waveform are out of sync, then the loop is said to be unlocked. While for analog PLLs there are a variety of PDs, mostly based on mixers (see Wiki Phase detector), for digital phase detectors there are basically only two kinds: the very primitive Negated XOR and the D flip-flop state machine variety.
Negated XOR
<img src='NXOR_PD.png'>
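Before the myHDL version, the comparison itself can be written out as a truth table in plain Python: the lock indication is high exactly when the two clock levels agree. This behavioral model is only an illustration, not the notebook's NXORPD block.

```python
def nxor_pd(ref, fb):
    """Negated-XOR phase comparison: high when the two clock levels agree."""
    return int(not (ref ^ fb))

# Enumerate the truth table
for ref in (0, 1):
    for fb in (0, 1):
        print(ref, fb, '->', nxor_pd(ref, fb))
```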
NXOR_PD myHDL implementation
End of explanation
#clear peeker and create test signals
Peeker.clear()
clkREF=Signal(bool(0)); Peeker(clkREF, 'clkREF')
clkFB=Signal(bool(0)); Peeker(clkFB, 'clkFB')
LOCK=Signal(bool(0)); Peeker(LOCK, 'LOCK')
#this clk is a witness reference clock
clk=Signal(bool(0)); Peeker(clk, 'clk')
#bind the signals to the DUT
DUT=NXORPD(clkREF, clkFB, LOCK)
def NXORPD_TB(RefDelay=2, FBDelay=4):
Negated XOR Phase Detector Testbench
Args:
RefDelay (int; 2): reference clock delay cycles
FBDelay (int; 4): feedback clock delay cycles
#witness clock
@always(delay(1))
def clkGen():
clk.next=not clk
#reference clock
@always(delay(RefDelay))
def RefClkGen():
clkREF.next=not clkREF
#feedback clock
@always(delay(FBDelay))
def FBClkGen():
clkFB.next=not clkFB
#run the simulation
@instance
def stimulus():
for i in range(100):
yield clk.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, NXORPD_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=20)
#pull the sim data into a PD dataframe
NXORPDRes=Peeker.to_dataframe()
#reorder the collums
NXORPDRes=NXORPDRes.reindex(columns=['clk', 'clkREF', 'clkFB', 'LOCK']);NXORPDRes
#show the top ten
NXORPDRes.head(10)
#review what the Ref and FB clock values where when the PD was locked
NXORPDRes[NXORPDRes['clk']==1][NXORPDRes['LOCK']==1].head(10)
#review what the Ref and FB clock values where when the PD was unlocked
NXORPDRes[NXORPDRes['clk']==1][NXORPDRes['LOCK']==0].head(10)
Explanation: NXOR_PD myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('NXORPD');
Explanation: Verilog implementation
End of explanation
@block
def SeqPD(clkREF, clkFB, UpOut, DownOut):
Sequential DFF Phase Detector
I/O:
clkREF (bool; in): Ref clock to Upper DFF
clkFB (bool; in): Compare clock to Lower DFF
UpOut (bool; output): Upper DFF output
DownOut (bool; output): Lower DFF output
#and clear internal feedback sig
clr = ResetSignal(0, active=0, async=True)
#upper DFF
@always(clkREF.posedge, clr.posedge)
def UpD():
if clr:
UpOut.next=0
else:
UpOut.next=1
#lower DFF
@always(clkFB.posedge, clr.posedge)
def DownD():
if clr:
DownOut.next=0
else:
DownOut.next=1
#and clear
@always_comb
def clrLogic():
clr.next= UpOut and DownOut
return instances()
Explanation: Sequential Phase Detector
<img src="SeqPD.png">
The Sequential PD is the most common digital phase detector around, and while variations of this architecture can be found, they are all based on this simple but ingenious design. The architecture consists of two DFFs. Unlike in a typical state machine, where a master clock controls the flipping of the DFFs and the data line into the DFFs is part of the state-machine feedback loop, here the DFFs each run on an independent clock: the upper DFF is tied to the Reference Clock and the lower one is tied to the Feedback Clock. Therefore each one will output a high signal at a rate determined by the frequency of its respective clock.
But that is not the end of the cleverness of this topology. The outputs of the DFFs are continuously compared by an AND gate, so that only when the DFFs are in sync ($\omega_{\text{REF}}=\omega_{\text{FB}}$) will a very brief high spike show up on both next-state lines of the DFFs before the two DFFs are reset. For the other two possible conditions, only one of the two lines will have a high value present: if $\omega_{\text{REF}}>\omega_{\text{FB}}$ the lower output will be zero, and conversely if $\omega_{\text{REF}}<\omega_{\text{FB}}$ the upper output will be zero.
The above conditions as described by Razavi yield the following state machine for the Sequential Phase Detector
<img src='SeqPDSM.png'>
Note that the actual phase detection is the average of the two output lines. This is not shown here; one method to find the average is to scale the boolean outputs to digital words, pass the resulting words to an averager, and then through a low-pass filter in order to implement the PLL.
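The averaging idea can be sketched behaviorally: count the rising edges of each clock over a fixed window and take the sign of the difference, which plays the role of the averaged Up/Down outputs. This is a plain-Python illustration with assumed integer clock periods, not the myHDL design.

```python
def edge_count(period, window):
    """Rising edges of a clock with the given period inside a time window."""
    return window // period

def pd_sign(ref_period, fb_period, window=1000):
    """+1 when the reference runs faster, -1 when feedback runs faster, 0 when matched."""
    diff = edge_count(ref_period, window) - edge_count(fb_period, window)
    return (diff > 0) - (diff < 0)

print(pd_sign(2, 3), pd_sign(3, 2), pd_sign(2, 2))
```

A shorter period means more edges in the window, so a faster reference drives the sign positive, mirroring the Up output dominating.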
Seq_PD myHDL implementation
End of explanation
#create the test signals
Peeker.clear()
clkREF=Signal(bool(0)); Peeker(clkREF, 'clkREF')
clkFB=Signal(bool(0)); Peeker(clkFB, 'clkFB')
UpOut=Signal(bool(0)); Peeker(UpOut, 'UpOut')
DownOut=Signal(bool(0)); Peeker(DownOut, 'DownOut')
#bind the test signals to the DUT
DUT=SeqPD(clkREF, clkFB, UpOut, DownOut)
def SeqPD_TB(RefDelay, FBDelay):
Testbench for the Sequential Phase Detector
Args:
RefDelay (int; 2): reference clock delay cycles
FBDelay (int; 4): feedback clock delay cycles
#reference clock
@always(delay(RefDelay))
def RefClkGen():
clkREF.next=not clkREF
#feedback clock
@always(delay(FBDelay))
def FBClkGen():
clkFB.next=not clkFB
#run the simulation
@instance
def stimulus():
for i in range(50):
yield clkREF.posedge
raise StopSimulation()
return instances()
Explanation: Seq_PD myHDL testing
End of explanation
#create the test signals
Peeker.clear()
clkREF=Signal(bool(0)); Peeker(clkREF, 'clkREF')
clkFB=Signal(bool(0)); Peeker(clkFB, 'clkFB')
UpOut=Signal(bool(0)); Peeker(UpOut, 'UpOut')
DownOut=Signal(bool(0)); Peeker(DownOut, 'DownOut')
#bind the test signals to the DUT
DUT=SeqPD(clkREF, clkFB, UpOut, DownOut)
sim=Simulation(DUT, SeqPD_TB(3, 2), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=20)
Explanation: Test when $\omega_{\text{REF}}>\omega_{\text{FB}}$
End of explanation
#create the test signals
Peeker.clear()
clkREF=Signal(bool(0)); Peeker(clkREF, 'clkREF')
clkFB=Signal(bool(0)); Peeker(clkFB, 'clkFB')
UpOut=Signal(bool(0)); Peeker(UpOut, 'UpOut')
DownOut=Signal(bool(0)); Peeker(DownOut, 'DownOut')
#bind the test signals to the DUT
DUT=SeqPD(clkREF, clkFB, UpOut, DownOut)
sim=Simulation(DUT, SeqPD_TB(1, 1), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=20)
Explanation: Test when $\omega_{\text{REF}}=\omega_{\text{FB}}$
End of explanation
#create the test signals
Peeker.clear()
clkREF=Signal(bool(0)); Peeker(clkREF, 'clkREF')
clkFB=Signal(bool(0)); Peeker(clkFB, 'clkFB')
UpOut=Signal(bool(0)); Peeker(UpOut, 'UpOut')
DownOut=Signal(bool(0)); Peeker(DownOut, 'DownOut')
#bind the test signals to the DUT
DUT=SeqPD(clkREF, clkFB, UpOut, DownOut)
sim=Simulation(DUT, SeqPD_TB(2, 3), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=20)
Explanation: Test when $\omega_{\text{REF}}<\omega_{\text{FB}}$
End of explanation
DUT.convert()
VerilogTextReader('SeqPD');
Explanation: Verilog implementation
End of explanation
@block
def Div2FD(clkIN, clkOUT, rst):
1/2 fractional frequency divider
I/O:
clkIN (input bool): input clock signal
clkOUT (output bool): 1/2 output clock signal
rst (input bool): reset signal
W12=Signal(bool(0))
@always(clkIN.posedge)
def D1():
if rst:
W12.next=0
else:
W12.next=clkOUT
@always(clkIN.posedge)
def D2():
if rst:
clkOUT.next=0
else:
clkOUT.next= not W12
return instances()
Explanation: Fractional Frequency Dividers
Frequency Dividers (more properly called fractional frequency dividers) are and are not a type of counter. In a traditional counter, the counter is run off a master clock, and when the specified count is reached an indication signal is given while the counter is also reset. In a fractional frequency divider we still increment a counter, but the incrementing is run off the input clock, which may or may not be the master clock, and the counter reaching its count becomes the new divided output clock. In addition, we can create "fractional" dividers such as $2/3$ by cascading a $1/2$ and a $1/3$ divider and modulating between them from another clock source that controls the modulation.
The reason that "fractional" is in quotes in regard to $2/3$ is that the frequency divider is not actually dividing by $2/3$ but is instead switching between a $1/2$ and a $1/3$ divider; thus "$2/3$ divider" is a notational misnomer. But by cascading fixed and variable dividers, with counters to control the modulation based on the clock being cascaded from, large fractional divisions can be obtained.
So, as stated, frequency dividers are and are not counters, though some of the more advanced programmable frequency dividers, such as http://tremaineconsultinggroup.com/fractional-divider-in-verilog/ (source code: https://bitbucket.org/BrianTremaine/fractional_divide/src/a68c67979c80a453d4f7dfd82f5bf90d604393f1/hardware/frac_divider.v?at=master&fileviewer=file-view-default), are much more like counters than the primitive ones that will be discussed here.
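The counter view can be sketched in plain Python: toggle the output once every `toggles_per_half` input rising edges, so the output period is twice that many input edges (the classic DFF divide-by-2 is the `toggles_per_half=1` case). This behavioral model is only an illustration of the idea, not the myHDL blocks that follow.

```python
def divide(clk_edges, toggles_per_half):
    """Behavioral divider: toggle the output level every `toggles_per_half`
    input rising edges; toggles_per_half=1 is the classic divide-by-2."""
    out, level, count = [], 0, 0
    for _ in range(clk_edges):
        count += 1
        if count == toggles_per_half:
            level ^= 1
            count = 0
        out.append(level)
    return out

# Divide-by-2: one full output cycle per 2 input edges
print(divide(8, 1))
```

With `toggles_per_half=3` the output period is 6 input edges, i.e. a divide-by-6.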
Divide by 2
<img src="FD2.png">
Divide by 2 myHDL implementation
End of explanation
Peeker.clear()
clkIN=Signal(bool(0)); Peeker(clkIN, 'clkIN')
clkOUT=Signal(bool(0)); Peeker(clkOUT, 'clkOUT')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=Div2FD(clkIN, clkOUT, rst)
def Div2FD_TB():
#input clock source
@always(delay(1))
def ClkGen():
clkIN.next=not clkIN
#run the simulation
@instance
def stimulus():
for i in range(21):
yield clkIN.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, Div2FD_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=21)
Explanation: Divide by 2 myHDL testing
End of explanation
DUT.convert()
VerilogTextReader('Div2FD');
Explanation: Verilog implementation
End of explanation
@block
def Div3FD(clkIN, clkOUT, rst):
1/3 fractional frequency divider
I/O:
clkIN (input bool): input clock signal
clkOUT (output bool): 1/3 output clock signal
rst (input bool): reset signal
W1A, WA2=[Signal(bool(0)) for _ in range(2)]
@always(clkIN.posedge)
def D1():
if rst:
W1A.next=0
else:
W1A.next=clkOUT
@always(clkIN.posedge)
def D2():
if rst:
clkOUT.next=0
else:
clkOUT.next= not WA2
@always_comb
def And():
WA2.next=W1A and clkOUT
return instances()
Explanation: Divide by 3
<img src="FD3.png">
Divide by 3 myHDL implementation
End of explanation
Peeker.clear()
clkIN=Signal(bool(0)); Peeker(clkIN, 'clkIN')
clkOUT=Signal(bool(0)); Peeker(clkOUT, 'clkOUT')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=Div3FD(clkIN, clkOUT, rst)
def Div3FD_TB():
#input clock source
@always(delay(1))
def ClkGen():
clkIN.next=not clkIN
#run the simulation
@instance
def stimulus():
for i in range(31):
yield clkIN.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, Div3FD_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=19)
Explanation: Divide by 3 myHDL testing
End of explanation
DUT.convert()
VerilogTextReader('Div3FD');
Explanation: Verilog implementation
End of explanation
@block
def Div23FD(clkIN, ModControl, clkOUT, rst):
2/3 fractional frequency divider
I/O:
clkIN (input bool): input clock signal
ModControl (input bool): modulation switching signal;
low is 1/3, high is 1/2
clkOUT (output bool): 2/3 output clock signal
rst (input bool): reset signal
W1O, WOA, WA2=[Signal(bool(0)) for _ in range(3)]
@always(clkIN.posedge)
def D1():
if rst:
W1O.next=0
else:
W1O.next=clkOUT
@always(clkIN.posedge)
def D2():
if rst:
clkOUT.next=0
else:
clkOUT.next=not WA2
@always_comb
def OR():
WOA.next=W1O or ModControl
@always_comb
def AND():
WA2.next=clkOUT and WOA
return instances()
Explanation: Divide by 2/3
<img src="FD23.png">
This is a simple hybridization of the $1/2$ and the $1/3$ frequency dividers, modulated between the two by the ModControl signal at the OR gate to create a $2/3$ frequency divider.
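The same modulation can be modelled behaviorally as edge swallowing: each output cycle consumes either two or three input edges depending on the control bit. This plain-Python model is an illustration and assumes, as in the schematic, that a high ModControl selects the divide-by-2 path.

```python
def div23_cycles(mod_bits):
    """Total input edges consumed for a stream of per-cycle control bits:
    True -> a divide-by-2 cycle, False -> a divide-by-3 cycle."""
    return sum(2 if bit else 3 for bit in mod_bits)

# Alternating control over 8 output cycles: average division ratio 2.5
print(div23_cycles([True, False] * 4))
```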
Divide by 2/3 myHDL implementation
End of explanation
Peeker.clear()
clkIN=Signal(bool(0)); Peeker(clkIN, 'clkIN')
ModControl=Signal(bool(1)); Peeker(ModControl, 'ModControl')
clkOUT=Signal(bool(0)); Peeker(clkOUT, 'clkOUT')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=Div23FD(clkIN, ModControl, clkOUT, rst)
def Div23FD_TB():
#input clock source
@always(delay(1))
def ClkGen():
clkIN.next=not clkIN
#run the simulation
@instance
def stimulus():
for i in range(31):
yield clkIN.posedge
if i>4:
ModControl.next=0
raise StopSimulation()
return instances()
sim=Simulation(DUT, Div23FD_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=8)
Peeker.to_wavedrom(start_time=10, stop_time=23)
Peeker.to_wavedrom(start_time=0, stop_time=23)
Explanation: Divide by 2/3 myHDL testing
End of explanation
DUT.convert()
VerilogTextReader('Div23FD');
Explanation: Verilog implementation
End of explanation
@block
def Div45FD(clkIN, ModControl, clkOUT, rst):
4/5 fractional frequency divider
I/O:
clkIN (input bool): input clock signal
ModControl (input bool): modulation switching signal;
low is 1/4, high is 1/5
clkOUT (output bool): 4/5 output clock signal
rst (input bool): reset signal
WNA11, W12, W2NA1, W2NA2, WNA23, W3NA1=[Signal(bool(0)) for _ in range(6)]
@always_comb
def NAND1():
WNA11.next=not(W2NA1 and W3NA1 )
@always(clkIN.posedge)
def D1():
if rst:
W12.next=0
clkOUT.next=0
else:
W12.next=WNA11
clkOUT.next= WNA11
@always(clkIN.posedge)
def D2():
if rst:
W2NA1.next=0
W2NA2.next=0
else:
W2NA1.next=W12
W2NA2.next=not W12
@always_comb
def NAND2():
WNA23.next=not(W2NA2 and ModControl)
@always(clkIN.posedge)
def D3():
if rst:
W3NA1.next=0
else:
W3NA1.next=WNA23
return instances()
Explanation: Divide by 4/5
<img src='FD45.png'>
Divide by 4/5 myHDL implementation
End of explanation
Peeker.clear()
clkIN=Signal(bool(0)); Peeker(clkIN, 'clkIN')
ModControl=Signal(bool(0)); Peeker(ModControl, 'ModControl')
clkOUT=Signal(bool(0)); Peeker(clkOUT, 'clkOUT')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=Div45FD(clkIN, ModControl, clkOUT, rst)
def Div45FD_TB():
#input clock source
@always(delay(1))
def ClkGen():
clkIN.next=not clkIN
#run the simulation
@instance
def stimulus():
for i in range(40):
yield clkIN.posedge
if i>12:
ModControl.next=1
raise StopSimulation()
return instances()
sim=Simulation(DUT, Div45FD_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=20)
Peeker.to_wavedrom(start_time=20, stop_time=40)
Peeker.to_wavedrom(start_time=0, stop_time=40)
Explanation: Divide by 4/5 myHDL testing
End of explanation
DUT.convert()
VerilogTextReader('Div45FD');
Explanation: Verilog implementation
End of explanation
@block
def Div6FD(clkIN, clkOUT, rst):
1/6 fractional frequency divider via cascaded 2 and 3 dividers,
using two `Div23FD` instances
I/O:
clkIN (input bool): input clock signal
clkOUT (output bool): 1/6 output clock signal
rst (input bool): reset signal
clkMid=Signal(bool(0))
MC2=Signal(bool(1)); MC3=Signal(bool(0))
D2=Div23FD(clkIN, MC2, clkMid, rst)
D3=Div23FD(clkMid, MC3, clkOUT, rst)
return instances()
Explanation: Divide by 6 via cascaded 2 and 3
Divide by 6 myHDL implementation
End of explanation
Peeker.clear()
clkIN=Signal(bool(0)); Peeker(clkIN, 'clkIN')
clkOUT=Signal(bool(0)); Peeker(clkOUT, 'clkOUT')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=Div6FD(clkIN, clkOUT, rst)
def Div6FD_TB():
#input clock source
@always(delay(1))
def ClkGen():
clkIN.next=not clkIN
#run the simulation
@instance
def stimulus():
for i in range(31):
yield clkIN.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, Div6FD_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=21)
Explanation: Divide by 6 myHDL testing
End of explanation
DUT.convert()
VerilogTextReader('Div6FD');
Explanation: Verilog implementation
End of explanation |
8,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="top"></a>
DaViTpy - models
This notebook introduces useful space science models included in davitpy.
Currently we have ported/wrapped the following models to python
Step1: <a name="igrf"/>IGRF - International Geomagnetic Reference Field
<a href="#top">[top]</a>
Step2: <a name="iri"/>IRI - International Reference Ionosphere
<a href="#top">[top]</a>
JF switches to turn off/on (True/False) several options
[0]
Step3: <a name="tsyg"/>Tsyganenko (Geopack and T96)
<a href="#top">[top]</a>
The "Porcelain" way (recommended)
Step4: The "Plumbing" way
Step5: <a name="msis"/>MSIS - Mass Spectrometer and Incoherent Scatter Radar
<a href="#top">[top]</a>
The fortran subroutine needed is gtd7
Step6: <a name="hwm"/>HWM07
Step7: <a name="hwm"/>AACGM--Altitude Adjusted Corrected Geomagnetic Coordinates</a>
<a href="http
Step8: models.aacgm.aacgmConvArr(lat,lon,alt,flg)
convert between geographic coords and aacgm (array form)
Input arguments
Step9: models.aacgm.mltFromEpoch(epoch,mlon)
calculate magnetic local time from epoch time and mag lon
Input arguments
Step10: models.aacgm.mltFromYmdhms(yr,mo,dy,hr,mt,sc,mlon)
calculate magnetic local time from year, month, day, hour, minute, second and mag lon
Input arguments
Step11: models.aacgm.mltFromYrsec(yr,yrsec,mlon)
calculate magnetic local time from seconds elapsed in the year and mag lon
Input arguments | Python Code:
%pylab inline
from datetime import datetime as dt
from davitpy.models import *
from davitpy import utils
Explanation: <a name="top"></a>
DaViTpy - models
This notebook introduces useful space science models included in davitpy.
Currently we have ported/wrapped the following models to python:
<a href="#igrf">IGRF-11</a>
<a href="#iri">IRI</a>
<a href="#tsyg">TSYGANENKO (T96)</a>
<a href="#msis">MSIS (NRLMSISE00)</a>
<a href="#hwm">HWM-07</a>
<a href="#hwm">AACGM</a>
End of explanation
# INPUTS
itype = 1 # Geodetic coordinates
pyDate = dt(2006,2,23)
date = utils.dateToDecYear(pyDate) # decimal year
alt = 300. # altitude
stp = 5.
xlti, xltf, xltd = -90.,90.,stp # latitude start, stop, step
xlni, xlnf, xlnd = -180.,180.,stp # longitude start, stop, step
ifl = 0 # Main field
# Call fortran subroutine
lat,lon,d,s,h,x,y,z,f = igrf.igrf11(itype,date,alt,ifl,xlti,xltf,xltd,xlni,xlnf,xlnd)
# Check that it worked by plotting magnetic dip angle contours on a map
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.axes_grid1 import make_axes_locatable
from numpy import meshgrid
# Set figure
fig = figure(figsize=(10,5))
ax = fig.add_subplot(111)
rcParams.update({'font.size': 14})
# Set-up the map background
map = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\
llcrnrlon=-180,urcrnrlon=180,resolution='c')
map.drawmapboundary()
map.drawcoastlines(color='0.5')
# draw parallels and meridians.
map.drawparallels(np.arange(-80.,81.,20.))
map.drawmeridians(np.arange(-180.,181.,20.))
# The igrf output needs to be reshaped to be plotted
dip = s.reshape((180./stp+1,360./stp+1))
dec = d.reshape((180./stp+1,360./stp+1))
lo = lon[0:(360./stp+1)]
la = lat[0::(360./stp+1)]
x,y = meshgrid(lo,la)
v = arange(0,90,20)
# Plot dip angle contours and labels
cs = map.contour(x, y, abs(dip), v, latlon=True, linewidths=1.5, colors='k')
labs = plt.clabel(cs, inline=1, fontsize=10)
# Plot declination and colorbar
im = map.pcolormesh(x, y, dec, vmin=-40, vmax=40, cmap='coolwarm')
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", "3%", pad="3%")
colorbar(im, cax=cax)
cax.set_ylabel('Magnetic field declination')
cticks = cax.get_yticklabels()
cticks = [t.__dict__['_text'] for t in cticks]
cticks[0], cticks[-1] = 'W', 'E'
_ = cax.set_yticklabels(cticks)
savefig('dipdec.png')
# Check that it worked by plotting magnetic dip angle contours on a map
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.axes_grid1 import make_axes_locatable
from numpy import meshgrid
# Set figure
fig = figure(figsize=(10,5))
ax = fig.add_subplot(111)
rcParams.update({'font.size': 14})
# Set-up the map background
map = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\
llcrnrlon=-180,urcrnrlon=180,resolution='c')
map.drawmapboundary()
map.drawcoastlines(color='0.5')
# draw parallels and meridians.
map.drawparallels(np.arange(-80.,81.,20.))
map.drawmeridians(np.arange(-180.,181.,20.))
# The igrf output needs to be reshaped to be plotted
babs = f.reshape((180./stp+1,360./stp+1))
lo = lon[0:(360./stp+1)]
la = lat[0::(360./stp+1)]
x,y = meshgrid(lo,la)
v = arange(0,90,20)
# Plot declination and colorbar
im = map.pcolormesh(x, y, babs, cmap='jet')
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", "3%", pad="3%")
colorbar(im, cax=cax)
cax.set_ylabel('Magnetic field intensity [nT]')
savefig('babs.png')
Explanation: <a name="igrf"/>IGRF - International Geomagnetic Reference Field
<a href="#top">[top]</a>
End of explanation
# Inputs
jf = [True]*50
jf[2:6] = [False]*4
jf[20] = False
jf[22] = False
jf[27:30] = [False]*3
jf[32] = False
jf[34] = False
jmag = 0.
alati = 40.
along = -80.
iyyyy = 2012
mmdd = 806
dhour = 12.
heibeg, heiend, heistp = 80., 500., 10.
oarr = np.zeros(100)
# Call fortran subroutine
outf,oarr = iri.iri_sub(jf,jmag,alati,along,iyyyy,mmdd,dhour,heibeg,heiend,heistp,oarr)
# Check that it worked by plotting vertical electron density profile
figure(figsize=(5,8))
alt = np.arange(heibeg,heiend,heistp)
ax = plot(outf[0,0:len(alt)],alt)
xlabel(r'Electron density [m$^{-3}$]')
ylabel('Altitude [km]')
grid(True)
rcParams.update({'font.size': 12})
Explanation: <a name="iri"/>IRI - International Reference Ionosphere
<a href="#top">[top]</a>
JF switches to turn off/on (True/False) several options
[0] : True
Ne computed
Ne not computed
[1] : True
Te, Ti computed
Te, Ti not computed
[2] : True
Ne & Ni computed
Ni not computed
[3] : False
B0 - Table option
B0 - other models jf[30]
[4] : False
foF2 - CCIR
foF2 - URSI
[5] : False
Ni - DS-95 & DY-85
Ni - RBV-10 & TTS-03
[6] : True
Ne - Tops: f10.7<188
f10.7 unlimited
[7] : True
foF2 from model
foF2 or NmF2 - user input
[8] : True
hmF2 from model
hmF2 or M3000F2 - user input
[9] : True
Te - Standard
Te - Using Te/Ne correlation
[10] : True
Ne - Standard Profile
Ne - Lay-function formalism
[11] : True
Messages to unit 6
to meesages.text on unit 11
[12] : True
foF1 from model
foF1 or NmF1 - user input
[13] : True
hmF1 from model
hmF1 - user input (only Lay version)
[14] : True
foE from model
foE or NmE - user input
[15] : True
hmE from model
hmE - user input
[16] : True
Rz12 from file
Rz12 - user input
[17] : True
IGRF dip, magbr, modip
old FIELDG using POGO68/10 for 1973
[18] : True
F1 probability model
critical solar zenith angle (old)
[19] : True
standard F1
standard F1 plus L condition
[20] : False
ion drift computed
ion drift not computed
[21] : True
ion densities in %
ion densities in m-3
[22] : False
Te_tops (Aeros,ISIS)
Te_topside (TBT-2011)
[23] : True
D-region: IRI-95
Special: 3 D-region models
[24] : True
F107D from APF107.DAT
F107D user input (oarr[41])
[25] : True
foF2 storm model
no storm updating
[26] : True
IG12 from file
IG12 - user
[27] : False
spread-F probability
not computed
[28] : False
IRI01-topside
new options as def. by JF[30]
[29] : False
IRI01-topside corr.
NeQuick topside model
[28,29]:
[t,t] IRIold,
[f,t] IRIcor,
[f,f] NeQuick,
[t,f] Gulyaeva
[30] : True
B0,B1 ABT-2009
B0 Gulyaeva h0.5
[31] : True
F10.7_81 from file
PF10.7_81 - user input (oarr[45])
[32] : False
Auroral boundary model on
Auroral boundary model off
[33] : True
Messages on
Messages off
[34] : False
foE storm model
no foE storm updating
[..] : ....
[50] : ....
End of explanation
lats = range(10, 90, 10)
lons = zeros(len(lats))
rhos = 6372.*ones(len(lats))
trace = tsyganenko.tsygTrace(lats, lons, rhos)
print trace
ax = trace.plot()
Explanation: <a name="tsyg"/>Tsyganenko (Geopack and T96)
<a href="#top">[top]</a>
The "Porcelain" way (recommended)
End of explanation
# Inputs
# Date and time
year = 2000
doy = 1
hr = 1
mn = 0
sc = 0
# Solar wind speed
vxgse = -400.
vygse = 0.
vzgse = 0.
# Execution parameters
lmax = 5000
rlim = 60.
r0 = 1.
dsmax = .01
err = .000001
# Direction of the tracing
mapto = 1
# Magnetic activity [SW pressure (nPa), Dst, ByIMF, BzIMF]
parmod = np.zeros(10)
parmod[0:4] = [2, -8, -2, -5]
# Start point (rh in Re)
lat = 50.
lon = 0.
rh = 0.
# This has to be called first
tsyganenko.tsygFort.recalc_08(year,doy,hr,mn,sc,vxgse,vygse,vzgse)
# Convert lat,lon to geographic cartesian and then gsw
r,theta,phi, xgeo, ygeo, zgeo = tsyganenko.tsygFort.sphcar_08(1., np.radians(90.-lat), np.radians(lon), 0., 0., 0., 1)
xgeo,ygeo,zgeo,xgsw,ygsw,zgsw = tsyganenko.tsygFort.geogsw_08(xgeo, ygeo, zgeo,0,0,0,1)
# Trace field line
xfgsw,yfgsw,zfgsw,xarr,yarr,zarr,l = tsyganenko.tsygFort.trace_08(xgsw,ygsw,zgsw,mapto,dsmax,err,
rlim,r0,0,parmod,'T96_01','IGRF_GSW_08',lmax)
# Convert back to spherical geographic coords
xfgeo,yfgeo,zfgeo,xfgsw,yfgsw,zfgsw = tsyganenko.tsygFort.geogsw_08(0,0,0,xfgsw,yfgsw,zfgsw,-1)
gcR, gdcolat, gdlon, xgeo, ygeo, zgeo = tsyganenko.tsygFort.sphcar_08(0., 0., 0., xfgeo, yfgeo, zfgeo, -1)
print '** START: {:6.3f}, {:6.3f}, {:6.3f}'.format(lat, lon, 1.)
print '** STOP: {:6.3f}, {:6.3f}, {:6.3f}'.format(90.-np.degrees(gdcolat), np.degrees(gdlon), gcR)
# A quick checking plot
from mpl_toolkits.mplot3d import proj3d
import numpy as np
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
# Plot coordinate system
ax.plot3D([0,1],[0,0],[0,0],'b')
ax.plot3D([0,0],[0,1],[0,0],'g')
ax.plot3D([0,0],[0,0],[0,1],'r')
# First plot a nice sphere for the Earth
u = np.linspace(0, 2 * np.pi, 179)
v = np.linspace(0, np.pi, 179)
tx = np.outer(np.cos(u), np.sin(v))
ty = np.outer(np.sin(u), np.sin(v))
tz = np.outer(np.ones(np.size(u)), np.cos(v))
ax.plot_surface(tx,ty,tz,rstride=10, cstride=10, color='grey', alpha=.5, zorder=2, linewidth=0.5)
# Then plot the traced field line
latarr = [10.,20.,30.,40.,50.,60.,70.,80.]
lonarr = [0., 180.]
rh = 0.
for lon in lonarr:
for lat in latarr:
r,theta,phi, xgeo, ygeo, zgeo = tsyganenko.tsygFort.sphcar_08(1., np.radians(90.-lat), np.radians(lon), 0., 0., 0., 1)
xgeo,ygeo,zgeo,xgsw,ygsw,zgsw = tsyganenko.tsygFort.geogsw_08(xgeo, ygeo, zgeo,0,0,0,1)
xfgsw,yfgsw,zfgsw,xarr,yarr,zarr,l = tsyganenko.tsygFort.trace_08(xgsw,ygsw,zgsw,mapto,dsmax,err,
rlim,r0,0,parmod,'T96_01','IGRF_GSW_08',lmax)
for i in xrange(l):
xgeo,ygeo,zgeo,dum,dum,dum = tsyganenko.tsygFort.geogsw_08(0,0,0,xarr[i],yarr[i],zarr[i],-1)
xarr[i],yarr[i],zarr[i] = xgeo,ygeo,zgeo
ax.plot3D(xarr[0:l],yarr[0:l],zarr[0:l], zorder=3, linewidth=2, color='y')
# Set plot limits
lim=4
ax.set_xlim3d([-lim,lim])
ax.set_ylim3d([-lim,lim])
ax.set_zlim3d([-lim,lim])
Explanation: The "Plumbing" way
End of explanation
# Inputs
import datetime as dt
myDate = dt.datetime(2012, 7, 5, 12, 35)
glat = 40.
glon = -80.
mass = 48
# First, MSIS needs a bunch of input which can be obtained from tabulated values
# This function was written to access these values (not provided with MSIS by default)
solInput = msis.getF107Ap(myDate)
# Also, to switch to SI units:
msis.meters(True)
# Other input conversion
iyd = (myDate.year - myDate.year/100*100)*100 + myDate.timetuple().tm_yday
sec = myDate.hour*24. + myDate.minute*60.
stl = sec/3600. + glon/15.
altitude = linspace(0., 500., 100)
temp = zeros(shape(altitude))
dens = zeros(shape(altitude))
N2dens = zeros(shape(altitude))
O2dens = zeros(shape(altitude))
Odens = zeros(shape(altitude))
Ndens = zeros(shape(altitude))
Ardens = zeros(shape(altitude))
Hdens = zeros(shape(altitude))
Hedens = zeros(shape(altitude))
for ia,alt in enumerate(altitude):
d,t = msis.gtd7(iyd, sec, alt, glat, glon, stl, solInput['f107a'], solInput['f107'], solInput['ap'], mass)
temp[ia] = t[1]
dens[ia] = d[5]
N2dens[ia] = d[2]
O2dens[ia] = d[3]
Ndens[ia] = d[7]
Odens[ia] = d[1]
Hdens[ia] = d[6]
Hedens[ia] = d[0]
Ardens[ia] = d[4]
figure(figsize=(16,8))
#rcParams.update({'font.size': 12})
subplot(131)
plot(temp, altitude)
gca().set_xscale('log')
xlabel('Temp. [K]')
ylabel('Altitude [km]')
subplot(132)
plot(dens, altitude)
gca().set_xscale('log')
gca().set_yticklabels([])
xlabel(r'Mass dens. [kg/m$^3$]')
subplot(133)
plot(Odens, altitude, 'r-',
O2dens, altitude, 'r--',
Ndens, altitude, 'g-',
N2dens, altitude, 'g--',
Hdens, altitude, 'b-',
Hedens, altitude, 'y-',
Ardens, altitude, 'm-')
gca().set_xscale('log')
gca().set_yticklabels([])
xlabel(r'Density [m$^3$]')
leg = legend( (r'O',
r'O$_2$',
r'N',
r'N$_2$',
r'H',
r'He',
r'Ar',),
'upper right')
tight_layout()
Explanation: <a name="msis"/>MSIS - Mass Spectrometer and Incoherent Scatter Radar
<a href="#top">[top]</a>
The fortran subroutine needed is gtd7:
INPUTS:
IYD - year and day as YYDDD (day of year from 1 to 365 (or 366)) (Year ignored in current model)
SEC - UT (SEC)
ALT - altitude (KM)
GLAT - geodetic latitude (DEG)
GLONG - geodetic longitude (DEG)
STL - local aparent solar time (HRS; see Note below)
F107A - 81 day average of F10.7 flux (centered on day DDD)
F107 - daily F10.7 flux for previous day
AP - magnetic index (daily) OR when SW(9)=-1., array containing:
(1) daily AP
(2) 3 HR AP index FOR current time
(3) 3 HR AP index FOR 3 hrs before current time
(4) 3 HR AP index FOR 6 hrs before current time
(5) 3 HR AP index FOR 9 hrs before current time
(6) average of height 3 HR AP indices from 12 TO 33 HRS prior to current time
(7) average of height 3 HR AP indices from 36 TO 57 HRS prior to current time
MASS - mass number (only density for selected gass is calculated. MASS 0 is temperature.
MASS 48 for ALL. MASS 17 is Anomalous O ONLY.)
OUTPUTS:
D(1) - HE number density(CM-3)
D(2) - O number density(CM-3)
D(3) - N2 number density(CM-3)
D(4) - O2 number density(CM-3)
D(5) - AR number density(CM-3)
D(6) - total mass density(GM/CM3)
D(7) - H number density(CM-3)
D(8) - N number density(CM-3)
D(9) - Anomalous oxygen number density(CM-3)
T(1) - exospheric temperature
T(2) - temperature at ALT
End of explanation
w = hwm.hwm07(11001, 0., 200., 40., -80., 0, 0, 0, [0, 0])
print w
Explanation: <a name="hwm"/>HWM07: Horizontal Wind Model
<a href="#top">[top]</a>
Input arguments:
iyd - year and day as yyddd
sec - ut(sec)
alt - altitude(km)
glat - geodetic latitude(deg)
glon - geodetic longitude(deg)
stl - not used
f107a - not used
f107 - not used
ap - two element array with
ap(1) = not used
ap(2) = current 3hr ap index
Output argument:
w(1) = meridional wind (m/sec + northward)
w(2) = zonal wind (m/sec + eastward)
End of explanation
#geo to aacgm
glat,glon,r = aacgm.aacgmConv(42.0,-71.4,300.,2000,0)
print glat, glon, r
#aacgm to geo
glat,glon,r = aacgm.aacgmConv(52.7,6.6,300.,2000,1)
print glat, glon, r
Explanation: <a name="hwm"/>AACGM--Altitude Adjusted Corrected Geomagnetic Coordinates</a>
<a href="http://superdarn.jhuapl.edu/software/analysis/aacgm/">AACGM Homepage</a><br>
<a href="#top">[top]</a>
models.aacgm.aacgmConv(lat,lon,alt,flg)
convert between geographic coords and aacgm
Input arguments:
lat - latitude
lon - longitude
alt - altitude(km)
flg - flag to indicate geo to AACGM (0) or AACGM to geo (1)
Outputs:
olat = output latitude
olon = output longitude
r = the accuracy of the transform
End of explanation
#geo to aacgm
olat,olon,r = aacgm.aacgmConvArr([10.,20.,30.,40.],[80.,90.,100.,110.],[100.,150.,200.,250.],2000,0)
print olat
print olon
print r
Explanation: models.aacgm.aacgmConvArr(lat,lon,alt,flg)
convert between geographic coords and aacgm (array form)
Input arguments:
lat - latitude list
lon - longitude list
alt - altitude(km) list
flg - flag to indicate geo to AACGM (0) or AACGM to geo (1)
Outputs:
olat = output latitude list
olon = output longitude list
r = the accuracy of the transform
End of explanation
import datetime as dt
myDate = dt.datetime(2012,7,10)
epoch = utils.timeUtils.datetimeToEpoch(myDate)
mlt = aacgm.mltFromEpoch(epoch,52.7)
print mlt
Explanation: models.aacgm.mltFromEpoch(epoch,mlon)
calculate magnetic local time from epoch time and mag lon
Input arguments:
epoch - the target time in epoch format
mlon - the input magnetic longitude
Outputs:
mlt = the magnetic local time
End of explanation
mlt = aacgm.mltFromYmdhms(2012,7,10,0,0,0,52.7)
print mlt
Explanation: models.aacgm.mltFromYmdhms(yr,mo,dy,hr,mt,sc,mlon)
calculate magnetic local time from year, month, day, hour, minute, second and mag lon
Input arguments:
yr - the year
mo - the month
dy - the day
hr - the hour
mt - the minute
sc - the second
mlon - the input magnetic longitude
Outputs:
mlt = the magnetic local time
End of explanation
yrsec = int(utils.timeUtils.datetimeToEpoch(dt.datetime(2012,7,10)) - utils.timeUtils.datetimeToEpoch(dt.datetime(2012,1,1)))
print yrsec
mlt = aacgm.mltFromYrsec(2013,yrsec,52.7)
print mlt
Explanation: models.aacgm.mltFromYrsec(yr,yrsec,mlon)
calculate magnetic local time from seconds elapsed in the year and mag lon
Input arguments:
yr - the year
yrsec - the year seconds
mlon - the input magnetic longitude
Outputs:
mlt = the magnetic local time
End of explanation |
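The "year seconds" input computed above via two epoch calls can also be sketched directly with datetime arithmetic (a stdlib illustration, not davitpy's implementation):

```python
import datetime as dt

def year_seconds(d):
    # Seconds elapsed since January 1st, 00:00 of the same year.
    return int((d - dt.datetime(d.year, 1, 1)).total_seconds())

yrsec = year_seconds(dt.datetime(2012, 7, 10))
print(yrsec)
```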
8,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Method to try to match lines, mainly in absorption, with known lines at different velocities. The lineid must be identified first to discard wrong detections.
Step1: We define the source to be scanned from the lineAll.db
Step2: Scan through the lines (lineid) matching with a local splatalogue.db. emax is the maximum energy of the upper level to restrain to low energy transitions...
Step3: Plot the detected lines vs. the velocity
Step4: Display the matching transitions | Python Code:
import sys
sys.path.append("/home/stephane/git/alma-calibrator/src")
import lineTools as lt
import pickle
import matplotlib.pyplot as pl
al = lt.analysisLines("/home/stephane/Science/RadioGalaxy/ALMA/absorptions/analysis/a/lineAll.db")
%matplotlib inline
Explanation: Method to try to match lines, mainly in absorption, with known lines at different velocities. The lineid must be identified first to discard wrong detections.
End of explanation
def outputMatch(matches, minmatch=5):
    for m in matches:
        imax = len(m)
        ifound = 0
        vel = m[0]
        for i in range(1, len(m)):
            if len(m[i]) > 0:
                ifound += 1
        if ifound >= minmatch:
            print("########################")
            print("## velocity: %f"%(vel))
            print("## Freq. matched: %d"%(ifound))
            print("##")
            print("## Formula Name E_K Frequency")
            print("## (K) (MHz)")
            for i in range(1, len(m)):
                if len(m[i]) > 0:
                    print("## Line:")
                    for line in m[i]:
                        print line
            print("## \n###END###\n")
source = "J004916-445738"
redshift = 0.1213
lineid = [9520, 9527, 9532, 9534, 9535, 9542, 9545]
Explanation: We define the source to be scanned from the lineAll.db
End of explanation
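The redshift defined above shifts every rest-frame transition down in frequency by a factor (1 + z); a quick sketch of the conversion (the CO(1-0) rest frequency is used purely as an illustrative value):

```python
z = 0.1213                  # redshift of the source defined above
f_rest = 115271.2           # CO(1-0) rest frequency in MHz (illustrative)
f_obs = f_rest / (1.0 + z)  # observed (sky-frame) frequency in MHz
print(round(f_obs, 1))
```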
m = al.scanningSplatVelocitySourceLineid(lineid, velmin = -200. , velmax = 200, dv = 0.5 ,nrao = True, emax= 30., absorption = True, emission = True )
vel = []
lineDetected =[]
minmatch = 2
for l in m:
vel.append(l[0])
ifound = 0
for i in range(1,len(l)):
if len(l[i]) > 0:
ifound += 1
if ifound >= minmatch:
print("### Velocity: %f"%(l[0]))
print("##")
for line in l[1:-1]:
if len(line) > 0:
print line
print("\n\n")
lineDetected.append(ifound)
Explanation: Scan through the lines (lineid) matching with a local splatalogue.db. emax is the maximum energy of the upper level to restrain to low energy transitions...
End of explanation
pl.figure(figsize=(15,10))
pl.xlabel("velocity")
pl.ylabel("Lines")
pl.plot(vel, lineDetected, "k-")
pl.show()
## uncomment to save the data in a pickle file
#f = open("3c273-vel-scan.pickle","w")
#pickle.dump(m,f )
#f.close()
Explanation: Plot the detected lines vs. the velocity
End of explanation
outputMatch(m, minmatch)
Explanation: Display the matching transitions
End of explanation |
8,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Chapter 20 - Tables and Networks
In the previous chapter we looked into various types of charts and correlations that are useful for scientific analysis in Python. Here, we present two more groups of visualizations
Step2: 1. Tables
There are (at least) two ways to output your data as a formatted table
Step3: Option 2
Step4: Once you've produced your LaTeX table, it's almost ready to put in your paper. If you're writing an NLP paper and your table contains scores for different system outputs, you might want to make the best scores bold, so that they stand out from the other numbers in the table.
More to explore
The pandas library is really useful if you work with a lot of data (we'll also use it below). As Jake Vanderplas said in the State of the tools video, the pandas DataFrame is becoming the central format in the Python ecosystem. Here is a page with pandas tutorials.
2. Networks
Some data is best visualized as a network. There are several options out there for doing this. The easiest is to use the NetworkX library and either plot the network using Matplotlib, or export it to JSON or GEXF (Graph EXchange Format) and visualize the network using external tools.
Let's explore a bit of WordNet today. For this, we'll want to import the NetworkX library, as well as the WordNet module. We'll look at the first synset for dog
Step6: Networks are made up out of edges
Step7: Now we can actually start drawing the graph. We'll increase the figure size, and use the draw_spring method (that implements the Fruchterman-Reingold layout algorithm).
Step8: What is interesting about this is that there is a cycle in the graph! This is because dog has two hypernyms, and those hypernyms are both superseded (directly or indirectly) by animal.n.01.
What is not so good is that the graph looks pretty ugly | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_21_Tables_and_Networks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
%matplotlib inline
Explanation: Chapter 20 - Tables and Networks
In the previous chapter we looked into various types of charts and correlations that are useful for scientific analysis in Python. Here, we present two more groups of visualizations: tables and networks. We will spend little attention to these, since they are less/not useful for the final assignment; however, note that they are still often a useful visualization options in practice.
At the end of this chapter, you will be able to:
- Create formatted tables
- Create networks
This requires that you already have (some) knowledge about:
- Loading and manipulating data.
If you want to learn more about these topics, you might find the following links useful:
- List of visualization blogs: https://flowingdata.com/2012/04/27/data-and-visualization-blogs-worth-following/
End of explanation
from tabulate import tabulate
table = [["spam",42],["eggs",451],["bacon",0]]
headers = ["item", "qty"]
# Documentation: https://pypi.python.org/pypi/tabulate
print(tabulate(table, headers, tablefmt="latex_booktabs"))
Explanation: 1. Tables
There are (at least) two ways to output your data as a formatted table:
Using the tabulate package. (You might need to install it first, using conda install tabulate)
Using the pandas dataframe method df.to_latex(...), df.to_string(...), or even df.to_clipboard(...).
This is extremely useful if you're writing a paper. First version of the 'results' section: done!
Option 1: Tabulate
End of explanation
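For comparison, here is roughly what tabulate automates for the plain-text case: column widths computed from the data, one formatted line per row (a sketch, not tabulate's actual algorithm):

```python
table = [["spam", 42], ["eggs", 451], ["bacon", 0]]
headers = ["item", "qty"]

# One width per column: the widest cell in that column, header included.
widths = [max(len(str(v)) for v in col) for col in zip(headers, *table)]

def fmt(row):
    return "  ".join(str(v).ljust(w) for v, w in zip(row, widths))

lines = [fmt(headers), fmt(["-" * w for w in widths])] + [fmt(r) for r in table]
print("\n".join(lines))
```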
import pandas as pd
# Documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
df = pd.DataFrame(data=table, columns=headers)
print(df.to_latex(index=False))
Explanation: Option 2: Pandas DataFrames
End of explanation
import networkx as nx # You might need to install networkx first (conda install -c anaconda networkx)
from nltk.corpus import wordnet as wn
from nltk.util import bigrams # This is a useful function.
Explanation: Once you've produced your LaTeX table, it's almost ready to put in your paper. If you're writing an NLP paper and your table contains scores for different system outputs, you might want to make the best scores bold, so that they stand out from the other numbers in the table.
More to explore
The pandas library is really useful if you work with a lot of data (we'll also use it below). As Jake Vanderplas said in the State of the tools video, the pandas DataFrame is becoming the central format in the Python ecosystem. Here is a page with pandas tutorials.
2. Networks
Some data is best visualized as a network. There are several options out there for doing this. The easiest is to use the NetworkX library and either plot the network using Matplotlib, or export it to JSON or GEXF (Graph EXchange Format) and visualize the network using external tools.
Let's explore a bit of WordNet today. For this, we'll want to import the NetworkX library, as well as the WordNet module. We'll look at the first synset for dog: dog.n.01, and how it's positioned in the WordNet taxonomy. All credits for this idea go to this blog.
End of explanation
def hypernym_edges(synset):
    """Function that generates a set of edges
    based on the path between the synset and entity.n.01"""
    edges = set()
    for path in synset.hypernym_paths():
        synset_names = [s.name() for s in path]
        # bigrams turns a list of arbitrary length into tuples: [(0,1),(1,2),(2,3),...]
        # edges.update adds novel edges to the set.
        edges.update(bigrams(synset_names))
    return edges
import nltk
nltk.download('wordnet')
# Use the synset 'dog.n.01'
dog = wn.synset('dog.n.01')
# Generate a set of edges connecting the synset for 'dog' to the root node (entity.n.01)
edges = hypernym_edges(dog)
# Create a graph object.
G = nx.Graph()
# Add all the edges that we generated earlier.
G.add_edges_from(edges)
Explanation: Networks are made up out of edges: connections between nodes (also called vertices). To build a graph of the WordNet-taxonomy, we need to generate a set of edges. This is what the function below does.
End of explanation
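nltk.util.bigrams, imported above, is the one NLTK utility this function relies on; it is essentially a zip of a sequence with itself shifted by one. A self-contained sketch:

```python
def bigrams_sketch(seq):
    # Consecutive pairs [(0,1), (1,2), (2,3), ...], exactly what we use as graph edges.
    seq = list(seq)
    return list(zip(seq, seq[1:]))

pairs = bigrams_sketch(['dog.n.01', 'canine.n.02', 'carnivore.n.01'])
print(pairs)
```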
# Increasing figure size for better display of the graph.
from pylab import rcParams
rcParams['figure.figsize'] = 11, 11
# Draw the actual graph.
nx.draw_spring(G,with_labels=True)
Explanation: Now we can actually start drawing the graph. We'll increase the figure size, and use the draw_spring method (that implements the Fruchterman-Reingold layout algorithm).
End of explanation
# Install pygraphviz first: pip install pygraphviz
!sudo apt-get install -y graphviz-dev
!pip install pygraphviz
from networkx.drawing.nx_agraph import graphviz_layout
# Let's add 'cat' to the bunch as well.
cat = wn.synset('cat.n.01')
cat_edges = hypernym_edges(cat)
G.add_edges_from(cat_edges)
# Use the graphviz layout. First compute the node positions..
positioning = graphviz_layout(G)
# And then pass node positions to the drawing function.
nx.draw_networkx(G,pos=positioning)
Explanation: What is interesting about this is that there is a cycle in the graph! This is because dog has two hypernyms, and those hypernyms are both superseded (directly or indirectly) by animal.n.01.
What is not so good is that the graph looks pretty ugly: there are several crossing edges, which is totally unnecessary. There are better layouts implemented in NetworkX, but they do require you to install pygraphviz. Once you've done that, you can execute the next cell. (And if not, then just assume it looks much prettier!)
End of explanation |
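The cycle mentioned above can also be seen by simple counting: a connected undirected graph with at least as many edges as nodes must contain a cycle (a tree on |V| nodes has exactly |V| - 1 edges). A sketch on a hand-written diamond of hypernyms (synset names abbreviated, purely illustrative):

```python
edges = {("dog", "canine"), ("dog", "domestic_animal"),
         ("canine", "carnivore"), ("carnivore", "animal"),
         ("domestic_animal", "animal")}
nodes = {n for e in edges for n in e}

# Connected, undirected, and |E| >= |V|: a cycle is forced.
has_cycle = len(edges) >= len(nodes)
print(has_cycle)
```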
8,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Convolutional Neural Network - Ensemble Learning
In this notebook we will verify if our single-column architecture can get any advantage from using ensemble learning, so a multi-column architecture.
We will train multiple networks identical to the best one defined in notebook 03, feeding them with pre-processed images shuffled and distorted using a different pseudo-random seed. This should give us a good ensemble of networks that we can average for each classification.
A prediction doesn't take more time compared to a single-column, but training time scales by a factor of N, where N is the number of columns. Networks could be trained in parallel, but not on our current hardware that is saturated by the training of a single one.
Imports
Step1: Definitions
For this experiment we are using 5 networks, but usually a good number is in the range of 35 (but with more dataset alterations then we do).
Step2: Data load
Step3: Image preprocessing
Step4: Model definition - Single column
This time we are going to define a helper functions to initialize the model, since we're going to use it on a list of models.
Step5: Training and evaluation - Single column
Again we are going to define a helper functions to train the model, since we're going to use them on a list.
Step6: Just by the different seeds, error changes from 0.5% to 0.42% (our best result so far with a single column). The training took 12 hours.
Model definition - Multi column
The MCDNN is obtained by creating a new model that only has 1 layer, Merge, that does the average of the outputs of the models in the given list. No training is required since we're only doing the average.
Step7: Evaluation - Multi column
Step8: The error improved from 0.42% with the best network of the ensemble, to 0.4%, that is out best result so far. | Python Code:
import os.path
from IPython.display import Image
from util import Util
u = Util()
import numpy as np
# Explicit random seed for reproducibility
np.random.seed(1337)
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Merge
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.datasets import mnist
Explanation: MNIST Convolutional Neural Network - Ensemble Learning
In this notebook we will verify whether our single-column architecture can gain any advantage from ensemble learning, i.e., a multi-column architecture.
We will train multiple networks identical to the best one defined in notebook 03, feeding them with pre-processed images shuffled and distorted using a different pseudo-random seed. This should give us a good ensemble of networks that we can average for each classification.
A prediction doesn't take more time compared to a single-column, but training time scales by a factor of N, where N is the number of columns. Networks could be trained in parallel, but not on our current hardware that is saturated by the training of a single one.
Imports
End of explanation
batch_size = 1024
nb_classes = 10
nb_epoch = 650
# checkpoint path
checkpoints_dir = "checkpoints"
# number of networks for ensamble learning
number_of_models = 5
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters1 = 20
nb_filters2 = 40
# size of pooling area for max pooling
pool_size1 = (2, 2)
pool_size2 = (3, 3)
# convolution kernel size
kernel_size1 = (4, 4)
kernel_size2 = (5, 5)
# dense layer size
dense_layer_size1 = 200
# dropout rate
dropout = 0.15
# activation type
activation = 'relu'
Explanation: Definitions
For this experiment we are using 5 networks, but usually a good number is in the range of 35 (although with more dataset alterations than we use here).
End of explanation
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
u.plot_images(X_train[0:9], y_train[0:9])
if K.image_dim_ordering() == 'th':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
Explanation: Data load
End of explanation
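np_utils.to_categorical used above performs one-hot encoding; the operation itself is a one-liner in NumPy (a sketch, assuming integer labels in [0, nb_classes)):

```python
import numpy as np

def to_one_hot(y, nb_classes):
    out = np.zeros((len(y), nb_classes), dtype=np.float32)
    out[np.arange(len(y)), y] = 1.0  # set exactly one column per row
    return out

onehot = to_one_hot(np.array([1, 0, 2]), 3)
print(onehot)
```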
datagen = ImageDataGenerator(
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.1,
horizontal_flip=False)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)
Explanation: Image preprocessing
End of explanation
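ImageDataGenerator applies the shifts and rotations on the fly during training. The essence of a one-pixel width shift can be sketched with np.roll (which wraps around, unlike Keras' padded shifts):

```python
import numpy as np

img = np.arange(16.0).reshape(4, 4)  # toy 4x4 "image"
shifted = np.roll(img, 1, axis=1)    # shift every row one pixel to the right

print(shifted)
```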
def initialize_network(model, dropout1=dropout, dropout2=dropout):
    model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],
                            border_mode='valid',
                            input_shape=input_shape, name='covolution_1_' + str(nb_filters1) + '_filters'))
    model.add(Activation(activation, name='activation_1_' + activation))
    model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))
    model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))
    model.add(Activation(activation, name='activation_2_' + activation))
    model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_2_' + str(pool_size2) + '_pool_size'))
    model.add(Dropout(dropout1))
    model.add(Flatten())
    model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))
    model.add(Activation(activation, name='activation_3_' + activation))
    model.add(Dropout(dropout2))
    model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))
    model.add(Activation('softmax', name='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy', 'precision', 'recall'])
# pseudo random generation of seeds
seeds = np.random.randint(10000, size=number_of_models)
# initializing all the models
models = [None] * number_of_models
for i in range(number_of_models):
    models[i] = Sequential()
    initialize_network(models[i])
Explanation: Model definition - Single column
This time we are going to define a helper function to initialize the model, since we're going to use it on a list of models.
End of explanation
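Before training, it is worth sanity-checking the parameter counts the two convolutional layers will have; per layer the count is filters × (kernel_h × kernel_w × input_channels + 1 bias):

```python
def conv_param_count(filters, kernel_h, kernel_w, in_channels):
    return filters * (kernel_h * kernel_w * in_channels + 1)

# First conv: 20 filters, 4x4 kernels, 1 input channel (grayscale MNIST).
p1 = conv_param_count(20, 4, 4, 1)
# Second conv: 40 filters, 5x5 kernels, 20 input channels (the first layer's outputs).
p2 = conv_param_count(40, 5, 5, 20)
print(p1, p2)
```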
def try_load_checkpoints(model, checkpoints_filepath, warn=False):
    # loading weights from checkpoints
    if os.path.exists(checkpoints_filepath):
        model.load_weights(checkpoints_filepath)
    elif warn:
        print('Warning: ' + checkpoints_filepath + ' could not be loaded')
def fit(model, checkpoints_name='test', seed=1337, initial_epoch=0,
        verbose=1, window_size=(-1), plot_history=False, evaluation=True):
    if window_size == (-1):
        window = 1 + np.random.randint(14)
    else:
        window = window_size
    if window >= nb_epoch:
        window = nb_epoch - 1
    print("Not pre-processing " + str(window) + " epoch(s)")
    checkpoints_filepath = os.path.join(checkpoints_dir, '04_MNIST_weights.best_' + checkpoints_name + '.hdf5')
    try_load_checkpoints(model, checkpoints_filepath, True)
    # checkpoint
    checkpoint = ModelCheckpoint(checkpoints_filepath, monitor='val_precision', verbose=verbose, save_best_only=True, mode='max')
    callbacks_list = [checkpoint]
    # fits the model on batches with real-time data augmentation, for (nb_epoch - window) epochs
    history = model.fit_generator(datagen.flow(X_train, Y_train,
                                               batch_size=batch_size,
                                               # save_to_dir='distorted_data',
                                               # save_format='png'
                                               seed=seed),  # per-model seed, so each column sees differently distorted images
                                  samples_per_epoch=len(X_train), nb_epoch=(nb_epoch - window), verbose=0,
                                  validation_data=(X_test, Y_test), callbacks=callbacks_list)
    # ensuring best val_precision reached during training
    try_load_checkpoints(model, checkpoints_filepath)
    # fits the model on the clean training set for the remaining `window` epochs
    history_cont = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=window,
                             verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list)
    # ensuring best val_precision reached during training
    try_load_checkpoints(model, checkpoints_filepath)
    if plot_history:
        print("History: ")
        u.plot_history(history)
        u.plot_history(history, 'precision')
        print("Continuation of training with no pre-processing:")
        u.plot_history(history_cont)
        u.plot_history(history_cont, 'precision')
    if evaluation:
        print('Evaluating model ' + str(index))
        score = model.evaluate(X_test, Y_test, verbose=0)
        print('Test accuracy:', score[1]*100, '%')
        print('Test error:', (1-score[2])*100, '%')
    return history, history_cont
for index in range(number_of_models):
    print("Training model " + str(index) + " ...")
    if index == 0:
        window_size = 20
        plot_history = True
    else:
        window_size = (-1)
        plot_history = False
    history, history_cont = fit(models[index],
                                str(index),
                                seed=seeds[index],
                                initial_epoch=0,
                                verbose=0,
                                window_size=window_size,
                                plot_history=plot_history)
    print("Done.\n\n")
Explanation: Training and evaluation - Single column
Again we are going to define helper functions to train the model, since we're going to use them on a list of models.
End of explanation
merged_model = Sequential()
merged_model.add(Merge(models, mode='ave'))
merged_model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall'])
Explanation: Just by the different seeds, error changes from 0.5% to 0.42% (our best result so far with a single column). The training took 12 hours.
Model definition - Multi column
The MCDNN is obtained by creating a new model that only has 1 layer, Merge, that does the average of the outputs of the models in the given list. No training is required since we're only doing the average.
End of explanation
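As a sanity check on what the 'ave' merge computes: it is just the element-wise mean of the columns' softmax outputs, which lets the majority overrule a single wrong column. A NumPy sketch with made-up probabilities:

```python
import numpy as np

# Softmax outputs of 3 columns for one 2-class sample; the middle column votes wrong.
preds = np.array([[0.6, 0.4],
                  [0.3, 0.7],
                  [0.9, 0.1]])

avg = preds.mean(axis=0)      # element-wise mean over the columns
winner = int(np.argmax(avg))  # class 0, despite the dissenting column
print(avg, winner)
```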
print('Evaluating ensemble')
score = merged_model.evaluate([np.asarray(X_test)] * number_of_models,
Y_test,
verbose=0)
print('Test accuracy:', score[1]*100, '%')
print('Test error:', (1-score[2])*100, '%')
Explanation: Evaluation - Multi column
End of explanation
# The predict_classes function outputs the highest probability class
# according to the trained classifier for each input example.
predicted_classes = merged_model.predict_classes([np.asarray(X_test)] * number_of_models)
# Check which items we got right / wrong
correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]
u.plot_images(X_test[correct_indices[:9]], y_test[correct_indices[:9]],
predicted_classes[correct_indices[:9]])
u.plot_images(X_test[incorrect_indices[:9]], y_test[incorrect_indices[:9]],
predicted_classes[incorrect_indices[:9]])
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes)
Explanation: The error improved from 0.42% with the best network of the ensemble, to 0.4%, which is our best result so far.
End of explanation |
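u.plot_confusion_matrix, used in the cell above, comes from the local util module; the count matrix it displays can be built directly (a sketch):

```python
import numpy as np

def confusion_counts(y_true, y_pred, nb_classes):
    m = np.zeros((nb_classes, nb_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1  # rows: true class, columns: predicted class
    return m

cm = confusion_counts([0, 1, 1, 2], [0, 1, 2, 2], 3)
print(cm)
```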
8,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Feature Engineering in Keras
Learning Objectives
Create an input pipeline using tf.data
Engineer features to create categorical, crossed, and numerical feature columns
Introduction
In this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Start by importing the necessary libraries for this lab.
Step1: Many of the Google Machine Learning Courses Programming Exercises use the California Housing Dataset, which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.
First, let's download the raw .csv data by copying the data from a cloud storage bucket.
Step2: Now, let's read in the dataset just copied from the cloud storage bucket and create a Pandas dataframe.
Step3: We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 2500.000000 for all feature columns. Thus, there are no missing values.
Step4: Split the dataset for ML
The dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.
Step5: Now, we need to output the split files. We will specifically need the test.csv later for testing. You should see the files appear in the home directory.
Step6: Lab Task 1
Step7: Next we initialize the training and validation datasets.
Step8: Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
Step9: We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Numeric columns
The output of a feature column becomes the input to the model. A numeric is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
In the California housing prices dataset, most columns from the dataframe are numeric. Let's create a variable called numeric_cols to hold only the numerical feature columns.
Step10: Scaler function
It is very important for numerical variables to be scaled before they are "fed" into the neural network; here we use min-max scaling. We create a function named 'get_scal' which takes a list of numerical features and returns a 'minmax' function, to be used in tf.feature_column.numeric_column() as the normalizer_fn parameter. The 'minmax' function itself takes a numerical value from a particular feature and returns the scaled value of that number.
Next, we scale the numerical feature columns that we assigned to the variable "numeric cols".
Step11: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
Step12: Using the Keras Sequential Model
Next, we will run this cell to compile and fit the Keras Sequential model.
Step13: Next we show loss as Mean Squared Error (MSE). Remember that MSE is the most commonly used regression loss function. MSE is the average of the squared differences between our target variable (here, the median house value) and the predicted values.
Step14: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
Step15: Load test data
Next, we read in the test.csv file and validate that there are no null values.
Again, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. The count shows 500.000000 for all feature columns. Thus, there are no missing values.
Step17: Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we now create the input function for the test data and initialize the test_predict variable.
Step18: Prediction
Step19: Next, we run two predictions in separate cells - one where ocean_proximity=INLAND and one where ocean_proximity= NEAR OCEAN.
Step20: The array returns a predicted value. What do these numbers mean? Let's compare this value to the test set.
Go to the test.csv you read in a few cells up. Locate the first line and find the median_house_value - which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it a solid model performance? Let's see if we can improve this a bit with feature engineering!
Lab Task 2
Step21: Next, we scale the numerical, bucketized, and categorical feature columns that we assigned to the variables in the preceding cell.
Step22: Categorical Feature
In this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.
Next, we create a categorical feature using 'ocean_proximity'.
Step23: Bucketized Feature
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.
Next we create a bucketized column using 'housing_median_age'
Step24: Feature Cross
Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features.
Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.
Step25: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
Step26: Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.
Step27: Next, we show loss and mean squared error then plot the model.
Step28: Next we create a prediction model. Note | Python Code:
# Run the chown command to change the ownership
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Install Sklearn
# scikit-learn simple and efficient tools for predictive data analysis
# Built on NumPy, SciPy, and matplotlib
!python3 -m pip install --user sklearn
# You can use any Python source file as a module by executing an import statement in some other Python source file
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import os
import tensorflow.keras
# Use matplotlib for visualizing the model
import matplotlib.pyplot as plt
# Import Pandas data processing libraries
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
#from keras.utils import plot_model
print("TensorFlow version: ",tf.version.VERSION)
Explanation: Basic Feature Engineering in Keras
Learning Objectives
Create an input pipeline using tf.data
Engineer features to create categorical, crossed, and numerical feature columns
Introduction
In this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Start by importing the necessary libraries for this lab.
End of explanation
if not os.path.isdir("../data"):
    os.makedirs("../data")
# Download the raw .csv data by copying the data from a cloud storage bucket.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/housing_pre-proc_toy.csv ../data
# `ls` is a Linux shell command that lists directory contents
# `l` flag list all the files with permissions and details
!ls -l ../data/
Explanation: Many of the Google Machine Learning Courses Programming Exercises use the California Housing Dataset, which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.
First, let's download the raw .csv data by copying the data from a cloud storage bucket.
End of explanation
# `head()` function is used to get the first n rows of dataframe
housing_df = pd.read_csv('../data/housing_pre-proc_toy.csv', error_bad_lines=False)
housing_df.head()
Explanation: Now, let's read in the dataset just copied from the cloud storage bucket and create a Pandas dataframe.
End of explanation
# `describe()` is used to get the statistical summary of the DataFrame
housing_df.describe()
Explanation: We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 2500.000000 for all feature columns. Thus, there are no missing values.
End of explanation
# Let's split the dataset into train, validation, and test sets
train, test = train_test_split(housing_df, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
Explanation: Split the dataset for ML
The dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.
End of explanation
train.to_csv('../data/housing-train.csv', encoding='utf-8', index=False)
val.to_csv('../data/housing-val.csv', encoding='utf-8', index=False)
test.to_csv('../data/housing-test.csv', encoding='utf-8', index=False)
!head ../data/housing*.csv
Explanation: Now, we need to output the split files. We will specifically need the test.csv later for testing. You should see the files appear in the home directory.
End of explanation
# A utility method to create a tf.data dataset from a Pandas Dataframe
# TODO 1a
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('median_house_value')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
Explanation: Lab Task 1: Create an input pipeline using tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model.
Here, we create an input pipeline using tf.data. This function is missing two lines. Correct and run the cell.
End of explanation
batch_size = 32
train_ds = df_to_dataset(train)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
Explanation: Next we initialize the training and validation datasets.
End of explanation
# TODO 1b
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of households:', feature_batch['households'])
print('A batch of ocean_proximity:', feature_batch['ocean_proximity'])
print('A batch of targets:', label_batch)
Explanation: Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
End of explanation
# Let's create a variable called `numeric_cols` to hold only the numerical feature columns.
# TODO 1c
numeric_cols = ['longitude', 'latitude', 'housing_median_age', 'total_rooms',
'total_bedrooms', 'population', 'households', 'median_income']
Explanation: We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Numeric columns
The output of a feature column becomes the input to the model. A numeric is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
In the California housing prices dataset, most columns from the dataframe are numeric. Let's create a variable called numeric_cols to hold only the numerical feature columns.
End of explanation
# 'get_scal' takes a numerical feature name and returns a 'minmax' function.
# The 'minmax' function takes a value of that feature and returns it scaled to [0, 1].
# Scaler: get_scal(feature)
# TODO 1d
def get_scal(feature):
def minmax(x):
mini = train[feature].min()
maxi = train[feature].max()
return (x - mini)/(maxi-mini)
return(minmax)
# TODO 1e
feature_columns = []
for header in numeric_cols:
scal_input_fn = get_scal(header)
feature_columns.append(fc.numeric_column(header,
normalizer_fn=scal_input_fn))
Explanation: Scaler function
It is very important to scale numerical variables before they are "fed" into the neural network. Here we use min-max scaling: we create a function named 'get_scal' that takes a numerical feature name and returns a 'minmax' function, which is passed to tf.feature_column.numeric_column() as the normalizer_fn parameter. The 'minmax' function itself takes a value of that feature and returns its scaled value.
Next, we scale the numerical feature columns that we assigned to the variable "numeric_cols".
End of explanation
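The min-max transform itself can be illustrated with a tiny standalone sketch (the values below are made up for illustration and are independent of the lab's dataframe):

```python
def make_minmax_scaler(values):
    """Build a min-max scaler closure from a list of training values."""
    lo, hi = min(values), max(values)

    def minmax(x):
        # Map x linearly so that lo -> 0.0 and hi -> 1.0.
        return (x - lo) / (hi - lo)

    return minmax

scale = make_minmax_scaler([10.0, 20.0, 30.0, 50.0])
print(scale(10.0))  # 0.0
print(scale(30.0))  # 0.5
print(scale(50.0))  # 1.0
```

This closure-returning shape is exactly why get_scal works well with normalizer_fn: the min and max are captured once from the training data and reused for every incoming value.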
print('Total number of feature columns: ', len(feature_columns))
Explanation: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
End of explanation
# Model create
# `tf.keras.layers.DenseFeatures()` is a layer that produces a dense Tensor based on given feature_columns.
feature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype='float64')
# `tf.keras.Sequential()` groups a linear stack of layers into a tf.keras.Model.
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
Explanation: Using the Keras Sequential Model
Next, we will run this cell to compile and fit the Keras Sequential model.
End of explanation
# Let's show loss as Mean Square Error (MSE)
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
Explanation: Next we show the loss as Mean Squared Error (MSE). Remember that MSE is the most commonly used regression loss function. MSE is the sum of squared distances between our target variable (here, the median house value) and the predicted values.
End of explanation
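For intuition, MSE can be computed by hand on a few made-up values (the numbers below are illustrative, not taken from this dataset):

```python
def mean_squared_error(y_true, y_pred):
    # Average of squared differences between targets and predictions.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example with invented house values:
y_true = [250000.0, 180000.0, 320000.0]
y_pred = [240000.0, 200000.0, 310000.0]
print(mean_squared_error(y_true, y_pred))  # 200000000.0
```

Because the errors are squared, a single prediction that is off by 20,000 dollars contributes four times as much loss as one off by 10,000 dollars.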
# Use matplotlib to draw the model's loss curves for training and validation
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
End of explanation
test_data = pd.read_csv('../data/housing-test.csv')
test_data.describe()
Explanation: Load test data
Next, we read in the test.csv file and validate that there are no null values.
Again, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. The count shows 500.000000 for all feature columns. Thus, there are no missing values.
End of explanation
# TODO 1f
def test_input_fn(features, batch_size=256):
"""An input function for prediction."""
# Convert the inputs to a Dataset without labels.
return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)
test_predict = test_input_fn(dict(test_data))
Explanation: Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we now create the input function for the test data and to initialize the test_predict variable.
End of explanation
# Use the model to do prediction with `model.predict()`
predicted_median_house_value = model.predict(test_predict)
Explanation: Prediction: Linear Regression
Before we begin to feature engineer our feature columns, we should predict the median house value. By predicting the median house value now, we can then compare it with the median house value after feature engineering.
To predict with Keras, you simply call model.predict() and pass in the housing features you want to predict the median_house_value for. Note: We are running the prediction locally.
End of explanation
# Ocean_proximity is INLAND
model.predict({
'longitude': tf.convert_to_tensor([-121.86]),
'latitude': tf.convert_to_tensor([39.78]),
'housing_median_age': tf.convert_to_tensor([12.0]),
'total_rooms': tf.convert_to_tensor([7653.0]),
'total_bedrooms': tf.convert_to_tensor([1578.0]),
'population': tf.convert_to_tensor([3628.0]),
'households': tf.convert_to_tensor([1494.0]),
'median_income': tf.convert_to_tensor([3.0905]),
'ocean_proximity': tf.convert_to_tensor(['INLAND'])
}, steps=1)
# Ocean_proximity is NEAR OCEAN
model.predict({
'longitude': tf.convert_to_tensor([-122.43]),
'latitude': tf.convert_to_tensor([37.63]),
'housing_median_age': tf.convert_to_tensor([34.0]),
'total_rooms': tf.convert_to_tensor([4135.0]),
'total_bedrooms': tf.convert_to_tensor([687.0]),
'population': tf.convert_to_tensor([2154.0]),
'households': tf.convert_to_tensor([742.0]),
'median_income': tf.convert_to_tensor([4.9732]),
'ocean_proximity': tf.convert_to_tensor(['NEAR OCEAN'])
}, steps=1)
Explanation: Next, we run two predictions in separate cells - one where ocean_proximity=INLAND and one where ocean_proximity= NEAR OCEAN.
End of explanation
# TODO 2a
numeric_cols = ['longitude', 'latitude', 'housing_median_age', 'total_rooms',
'total_bedrooms', 'population', 'households', 'median_income']
bucketized_cols = ['housing_median_age']
# indicator columns,Categorical features
categorical_cols = ['ocean_proximity']
Explanation: The arrays return predicted values. What do these numbers mean? Let's compare them to the test set.
Go to the test.csv you read in a few cells up. Locate the first line and find the median_house_value, which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was the model's performance solid? Let's see if we can improve it a bit with feature engineering!
Lab Task 2: Engineer features to create categorical and numerical features
Now we create a cell that indicates which features will be used in the model.
Note: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. And, don't forget your numeric values!
End of explanation
# Scaler: get_scal(feature)
def get_scal(feature):
def minmax(x):
mini = train[feature].min()
maxi = train[feature].max()
return (x - mini)/(maxi-mini)
return(minmax)
# All numerical features - scaling
feature_columns = []
for header in numeric_cols:
scal_input_fn = get_scal(header)
feature_columns.append(fc.numeric_column(header,
normalizer_fn=scal_input_fn))
Explanation: Next, we scale the numerical, bucktized, and categorical feature columns that we assigned to the variables in the preceding cell.
End of explanation
# TODO 2b
for feature_name in categorical_cols:
vocabulary = housing_df[feature_name].unique()
categorical_c = fc.categorical_column_with_vocabulary_list(feature_name, vocabulary)
one_hot = fc.indicator_column(categorical_c)
feature_columns.append(one_hot)
Explanation: Categorical Feature
In this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.
Next, we create a categorical feature using 'ocean_proximity'.
End of explanation
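The one-hot mapping can be sketched in plain Python. The vocabulary ordering below is illustrative only; in the lab the order comes from `.unique()` on the dataframe:

```python
def one_hot_encode(value, vocabulary):
    """Map a string to a one-hot vector over a fixed vocabulary."""
    return [1 if v == value else 0 for v in vocabulary]

# Illustrative ordering of the ocean_proximity categories:
vocab = ["<1H OCEAN", "INLAND", "NEAR OCEAN", "NEAR BAY", "ISLAND"]
print(one_hot_encode("INLAND", vocab))  # [0, 1, 0, 0, 0]
```

Each category gets its own slot, so the model can learn an independent weight per category instead of treating the strings as an arbitrary ordering.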
# TODO 2c
age = fc.numeric_column("housing_median_age")
# Bucketized cols
age_buckets = fc.bucketized_column(age, boundaries=[10, 20, 30, 40, 50, 60, 80, 100])
feature_columns.append(age_buckets)
Explanation: Bucketized Feature
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.
Next we create a bucketized column using 'housing_median_age'
End of explanation
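A plain-Python sketch of the bucketing logic, mirroring the `boundaries` list used above (this is an illustration of the semantics, not TensorFlow's implementation):

```python
def bucketize(value, boundaries):
    """Return the index of the bucket that `value` falls into.

    With boundaries [b0, b1, ...], bucket 0 is value < b0, bucket 1 is
    b0 <= value < b1, and so on, giving len(boundaries) + 1 buckets.
    """
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)

def one_hot(index, depth):
    return [1 if i == index else 0 for i in range(depth)]

boundaries = [10, 20, 30, 40, 50, 60, 80, 100]
idx = bucketize(34.0, boundaries)          # falls in the [30, 40) bucket
print(idx, one_hot(idx, len(boundaries) + 1))
```

A house age of 34 lands in bucket 3, so the model sees a one-hot vector rather than the raw number.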
# TODO 2d
vocabulary = housing_df['ocean_proximity'].unique()
ocean_proximity = fc.categorical_column_with_vocabulary_list('ocean_proximity',
vocabulary)
crossed_feature = fc.crossed_column([age_buckets, ocean_proximity],
hash_bucket_size=1000)
crossed_feature = fc.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
Explanation: Feature Cross
Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features.
Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.
End of explanation
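The idea behind a hashed feature cross can be sketched as follows. This toy uses SHA-256 purely for illustration; it is not TensorFlow's actual hash function, and the bucket indices it produces will differ from `crossed_column`'s:

```python
import hashlib

def crossed_bucket(values, hash_bucket_size=1000):
    """Hash a combination of feature values into one of hash_bucket_size buckets."""
    key = "_X_".join(str(v) for v in values)
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % hash_bucket_size

# Each (age bucket, proximity) pair gets its own bucket, and hence its own weight:
b1 = crossed_bucket(["30-40", "NEAR OCEAN"])
b2 = crossed_bucket(["30-40", "INLAND"])
print(b1, b2)
```

Hashing keeps the representation a fixed size (here 1000 buckets) no matter how many combinations exist, at the cost of occasional collisions between unrelated pairs.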
print('Total number of feature columns: ', len(feature_columns))
Explanation: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
End of explanation
# Model create
# `tf.keras.layers.DenseFeatures()` is a layer that produces a dense Tensor based on given feature_columns.
feature_layer = tf.keras.layers.DenseFeatures(feature_columns,
dtype='float64')
# `tf.keras.Sequential()` groups a linear stack of layers into a tf.keras.Model.
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
Explanation: Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.
End of explanation
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
plot_curves(history, ['loss', 'mse'])
Explanation: Next, we show loss and mean squared error then plot the model.
End of explanation
# TODO 2e
# Median_house_value is $249,000, prediction is $234,000 NEAR OCEAN
model.predict({
'longitude': tf.convert_to_tensor([-122.43]),
'latitude': tf.convert_to_tensor([37.63]),
'housing_median_age': tf.convert_to_tensor([34.0]),
'total_rooms': tf.convert_to_tensor([4135.0]),
'total_bedrooms': tf.convert_to_tensor([687.0]),
'population': tf.convert_to_tensor([2154.0]),
'households': tf.convert_to_tensor([742.0]),
'median_income': tf.convert_to_tensor([4.9732]),
'ocean_proximity': tf.convert_to_tensor(['NEAR OCEAN'])
}, steps=1)
Explanation: Next we create a prediction model. Note: You may use the same values from the previous prediction.
End of explanation |
8,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Time windows
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Time Windows
First, we will train a model to forecast the next step given the previous 20 steps; therefore, we need to create a dataset of 20-step windows for training. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
Explanation: Time windows
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Setup
End of explanation
dataset = tf.data.Dataset.range(10)
for val in dataset:
print(val.numpy())
dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1)
for window_dataset in dataset:
for val in window_dataset:
print(val.numpy(), end=" ")
print()
dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1, drop_remainder=True)
for window_dataset in dataset:
for val in window_dataset:
print(val.numpy(), end=" ")
print()
dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(5))
for window in dataset:
print(window.numpy())
dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(5))
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
for x, y in dataset:
print(x.numpy(), y.numpy())
dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(5))
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
dataset = dataset.shuffle(buffer_size=10)
for x, y in dataset:
print(x.numpy(), y.numpy())
dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(5))
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
dataset = dataset.shuffle(buffer_size=10)
dataset = dataset.batch(2).prefetch(1)
for x, y in dataset:
print("x =", x.numpy())
print("y =", y.numpy())
def window_dataset(series, window_size, batch_size=32,
shuffle_buffer=1000):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer)
dataset = dataset.map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
Explanation: Time Windows
First, we will train a model to forecast the next step given the previous 20 steps; therefore, we need to create a dataset of 20-step windows for training.
End of explanation |
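Conceptually, `window_dataset` turns a series into (inputs, target) pairs. The same idea can be shown in plain Python, without TensorFlow, to make the shapes concrete:

```python
def sliding_windows(series, window_size):
    """Split a series into (window_size inputs, 1 target) pairs
    from consecutive values, like the tf.data pipeline above."""
    pairs = []
    for start in range(len(series) - window_size):
        window = series[start:start + window_size + 1]
        pairs.append((window[:-1], window[-1]))
    return pairs

pairs = sliding_windows(list(range(10)), window_size=4)
print(pairs[0])   # ([0, 1, 2, 3], 4)
print(pairs[-1])  # ([5, 6, 7, 8], 9)
```

The tf.data version adds shuffling, batching, and prefetching on top of this same windowing, which is what makes it efficient for training.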
8,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Literate Computing for Reproducible Infrastructure
<img src="./images/literate_computing-logo.png" alt='LC_LOGO' align='left'/>
NII Cloud Operation is a team supporting researchers and teachers in our institute. To make users more productive our team serves many roles, which are maintaining private OpenStack infrastructure, setting up various software stacks, consulting their configuration and optimization, and managing almost everything.<br>
Literate Computing for Reproducible Infrastructure is our project, which seeks to utilize Jupyter Notebook in the operational engineering area.<br>
All sorts of documentation are indispensable when caring for computing infrastructure: keeping evidence of day-to-day work, organizing procedures for sharing and reuse, and maintaining user manuals and teaching materials. The cloud operations team at the National Institute of Informatics (NII) researches ways to write and accumulate all of these seamlessly.
"Literate Computing for Reproducible Infrastructure" is the name of that initiative. By documenting operations with Jupyter Notebook, the project aims to improve the reliability of operational work and to make procedures and know-how easy to accumulate and circulate.
In infrastructure operations we aim for resilient, human-centered mechanization that does not depend excessively on automation, by means of procedures that can be reproduced mechanically yet still be read and understood by people. We value making work efficient without turning it into a black box: the team can communicate its understanding of the work, and can discuss and evaluate how well the means fit the goals and where their limits lie. This promotes the transfer and sharing of know-how, improves operators' skills, and sustains the engineering team.
At many sites the common style is to log in to a management server, work in a console, and copy the work and its evidence into a wiki as needed. LC4RI instead recommends deploying a Notebook server on the operations management server, creating a notebook for each unit of work, and executing it while writing down the work and notes as you go. We provide mechanisms to record evidence of work faithfully, to reproduce or reuse past work records mechanically, and notebook procedures that can both be executed by machine and be read and supplemented by people.
The project's deliverables are published at GitHub NII Cloud Operation Team: organized procedures for building and operating infrastructure popular at academic institutions, such as Hadoop and Elastic Search, together with Jupyter extensions that are handy for describing construction and operation procedures.
This notebook demonstrates our project's extensions for Literate Computing for Reproducible Infrastructure and gives an overview of the extended features.<br>
(2019.06.20)
Jupyter-run_through - batch ("run-through") execution feature
Usage
Step1: "echo" に修正して実行してみましょう。
Step2: Jupyter-LC_wrapper
Usage
Step3: Jupyter-multi_outputs
Usage
Step4: In the example above, clicking a tab lets you view a previous output.
Step5: When a previous output is selected, the <span class='fa fa-fw fa-thumb-tack'></span> changes to <span class='fa fa-fw fa-exchange'></span>. Clicking this <span class='fa fa-fw fa-exchange'></span> compares the selected output with the current output. | Python Code:
! echo "This is 1st step" > foo; cat foo
! echo ".. 2nd step..." >> foo && cat foo
!echooooo ".. 3rd step... will fail" >> foo && cat foo
Explanation: Literate Computing for Reproducible Infrastructure
<img src="./images/literate_computing-logo.png" alt='LC_LOGO' align='left'/>
NII Cloud Operation is a team supporting researchers and teachers in our institute. To make users more productive our team serves many roles, which are maintaining private OpenStack infrastructure, setting up various software stacks, consulting their configuration and optimization, and managing almost everything.<br>
Literate Computing for Reproducible Infrastructure is our project, which seeks to utilize Jupyter Notebook in the operational engineering area.<br>
All sorts of documentation are indispensable when caring for computing infrastructure: keeping evidence of day-to-day work, organizing procedures for sharing and reuse, and maintaining user manuals and teaching materials. The cloud operations team at the National Institute of Informatics (NII) researches ways to write and accumulate all of these seamlessly.
"Literate Computing for Reproducible Infrastructure" is the name of that initiative. By documenting operations with Jupyter Notebook, the project aims to improve the reliability of operational work and to make procedures and know-how easy to accumulate and circulate.
In infrastructure operations we aim for resilient, human-centered mechanization that does not depend excessively on automation, by means of procedures that can be reproduced mechanically yet still be read and understood by people. We value making work efficient without turning it into a black box: the team can communicate its understanding of the work, and can discuss and evaluate how well the means fit the goals and where their limits lie. This promotes the transfer and sharing of know-how, improves operators' skills, and sustains the engineering team.
At many sites the common style is to log in to a management server, work in a console, and copy the work and its evidence into a wiki as needed. LC4RI instead recommends deploying a Notebook server on the operations management server, creating a notebook for each unit of work, and executing it while writing down the work and notes as you go. We provide mechanisms to record evidence of work faithfully, to reproduce or reuse past work records mechanically, and notebook procedures that can both be executed by machine and be read and supplemented by people.
The project's deliverables are published at GitHub NII Cloud Operation Team: organized procedures for building and operating infrastructure popular at academic institutions, such as Hadoop and Elastic Search, together with Jupyter extensions that are handy for describing construction and operation procedures.
This notebook demonstrates our project's extensions for Literate Computing for Reproducible Infrastructure and gives an overview of the extended features.<br>
(2019.06.20)
Jupyter-run_through - batch ("run-through") execution feature
Usage:
* Freeze: Preventing mis-operation; once a code cell has been executed, it freezes against unintended execution and modification. Note that the freeze is implemented as a different state from the standard cell lock's. The freeze only makes an executed cell un-editable temporarily.<br>
// Prevents mis-operation: once a cell has been executed it is "frozen" and cannot be re-executed until it is unfrozen; cell editing can also be locked.
Bricks: Giving a summarized perspective for execution control; embedded "cells" underneath are represented as bricks and can be run through altogether with a click, while markdowns and codes are collapsed using <span class='fa fa-fw fa-external-link'></span>Collapsible Headings<br>
// Batch execution: when a markdown heading is collapsed, every "cell" within the collapsed range is displayed as a row of "bricks", and the whole row can be executed together with one click.
Run_through: Execute collapsed code cells as a whole; simply reuse workflows without paying much attention to details, without the need for arrangement or customization. Run through the notebook as an executable checklist, then verify errors if they occur.<br>
// Run routine work in one go: use notebook-style procedures casually without worrying about details when little arrangement or customization is needed, and check the details only when an error occurs; use the notebook as an executable checklist.
<span class='fa fa-fw fa-external-link'></span> GitHub: Jupyter-LC_run_through
<div style="width:30%;right">
[<span class='fa fa-fw fa-youtube'></span>LC_run_through](https://www.youtube.com/watch?v=pkzE_nwtEKQ)</div>
How-to: run_through <br>
The <span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span> on the collapsed heading indicates there are collapsed markdowns and cells. Clicking <span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span> and <span class='fa fa-fw fa-caret-down' style='color:#a0a0a0;'></span> toggles between expanded and collapsed.<br clear="right">
<img src="./images/run_through.png" align="right" width=30% />When headings are collapsed, executable cells underneath are represented as bricks. Each brick <span class='fa fa-fw fa-square' style='color:#cccccc;'></span> represents its corresponding cell's status; gray means 'not yet executed'.
Clicking <span class='fa fa-fw fa-play-circle'></span> runs through the cells altogether.<br clear="right">
<img src="./images/run_through_OK.png" width=30% align=right /> Normally executed and completed cells turn light green <span class='fa fa-fw fa-square' style='color:#88ff88;'></span>. In addition, completed cells are "frozen" <span class='fa fa-fw fa-snowflake-o' style='background-color:cornflowerblue;color:white;'></span>, so they will not be executed again until "unfrozen" via the toolbar <span class='fa fa-fw fa-snowflake-o'style='color:gray;'></span>.<br clear="right">
<img src="./images/run_through_NG.png" width=30% align=right /> In the case of an error, cell execution is aborted and the brick turns light coral <span class='fa fa-fw fa-square' style='color:#ff8888;'></span>. You can examine the error by expanding the folded cells with <span class='fa fa-fw fa-caret-right'></span>.
Batch ("run-through") execution <br>
The <span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span> to the left of a heading indicates that detailed steps are folded underneath. Clicking it changes it to <span class='fa fa-fw fa-caret-down' style='color:#a0a0a0;'></span> and reveals the folded content. This folding display uses "Collapsible Headings"; the original Collapsible Headings extension only folds the markdown hierarchy, but with the run-through extension,
<br><br>
<img src="./images/run_through.png" align="right" width=30% /> when the folded content contains "steps" (executable cells), they are visualized as square boxes <span class='fa fa-fw fa-square' style='color:#cccccc;'></span> as shown on the right. The number of boxes shows how many steps are folded away.
Pressing <span class='fa fa-fw fa-play-circle'></span> at the top right of the display area executes all the steps underneath in one go.<br clear="right">
<img src="./images/run_through_OK.png" width=30% align=right /> When every step has finished, the boxes change as shown on the right. Steps that completed normally are shown in light green <span class='fa fa-fw fa-square' style='color:#88ff88;'></span>. Completed steps are also frozen so that they will not be executed again even if the run-through button <span class='fa fa-fw fa-play-circle'></span> is clicked. The snowflake "<span class='fa fa-fw fa-snowflake-o' style='background-color:#cccccc;color:white;'></span>"
inside a box indicates that the cell is in the frozen state <span class='fa fa-fw fa-snowflake-o' style='background-color:#88ff88;color:white;'></span><span class='fa fa-fw fa-snowflake-o' style='background-color:cornflowerblue;color:white;'></span>.<br clear="right">
<img src="./images/run_through_NG.png" width=30% align=right /> If an error occurs along the way, the failing step is shown in light coral <span class='fa fa-fw fa-square' style='color:#ff8888;'></span>. Clicking <span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span> to unfold the view lets you inspect what each step executed.<br clear="right">
<img src="./images/run_through_Frozen2.png" width=20% align=right /> Freezing cells <br>
Cells whose execution finished normally are frozen <span class='fa fa-fw fa-snowflake-o' style='background-color:cornflowerblue;color:white;'></span>. A frozen cell can be neither executed nor edited as-is. To unfreeze it, click <span class='fa fa-fw fa-snowflake-o' style='color:gray;'></span> in the toolbar.
Freezing marks the already-executed range and prevents accidental duplicate execution. The run-through button <span class='fa fa-fw fa-play-circle'></span> is also debounced for a short period to prevent chattering.<br>
Error cells <span class='fa fa-fw fa-square' style='color:#ff8888;'></span> can be edited and re-executed. Cells that already finished and are frozen <span class='fa fa-fw fa-snowflake-o' style='background-color:cornflowerblue;color:white;'></span> are skipped, so after fixing the error you can continue the work by pressing the run-through button <span class='fa fa-fw fa-play-circle'></span> again; the fixed cell <span class='fa fa-fw fa-square' style='color:#ff8888;'></span> (not yet executed) and the following unexecuted cells <span class='fa fa-fw fa-square' style='color:#cccccc;'></span> will run.<br>
<br>
If the error is caused not by the cell alone but by the results of earlier cells, unfreeze back as far as necessary. To unfreeze cells in bulk, use <span class='fa fa-fw fa-snowflake-o' style='color:gray;'></span><span class='fa fa-fw fa-fast-forward'></span> or <span class='fa fa-fw fa-snowflake-o' style='color:gray;'></span><span class='fa fa-fw fa-forward'></span> from the toolbar.
<br clear="right">
Example
Run_through: Try “<span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span>”, which will collapse cells, then, “<span class='fa fa-fw fa-play-circle'></span>” will run through four bricks.<br>
<br>
<span class='fa fa-fw fa-square' style='color:#88ff88;'></span>: The light green bricks indicate successful completion.<br>
<span class='fa fa-fw fa-square' style='color:#ff8888;'></span>: The third light coral brick indicates some error.<br>
<br>
<span class='fa fa-fw fa-snowflake-o' style='background-color:#cccccc;color:white;'></span><span class='fa fa-fw fa-snowflake-o' style='background-color:#88ff88;color:white;'></span><span class='fa fa-fw fa-snowflake-o' style='background-color:cornflowerblue;color:white;'></span>: The snowflake indicates those bricks are frozen; execution and editing are prohibited. The "<span class='fa fa-fw fa-snowflake-o' style='color:gray;'></span>" button unfreezes the bricks.<br>
Successfully executed cells <span class='fa fa-fw fa-square' style='color:#88ff88;'></span> are automatically frozen to prevent accidental duplicate operations. Error cells
<span class='fa fa-fw fa-square' style='color:#ff8888;'></span> remain unfrozen, so you can fix the error and re-execute the cell, then continue with the following not-yet-executed cells <span class='fa fa-fw fa-square' style='color:#cccccc;'></span>.
Batch execution of the folded steps: if <span class='fa fa-fw fa-caret-down' style='color:#a0a0a0;'></span> is shown next to the heading, click it to fold the cells away; once folded it changes to <span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span>.<br>
In this example, clicking <span class='fa fa-fw fa-play-circle'></span> runs four steps in one go. Finished steps are shown in light green <span class='fa fa-fw fa-square' style='color:#88ff88;'></span>. Below, the third step fails, turns light coral <span class='fa fa-fw fa-square' style='color:#ff8888;'></span>, and execution stops.<br>
Click <span class='fa fa-fw fa-caret-right' style='color:#a0a0a0;'></span> to inspect the error.
After fixing the failing cell, execution can continue from where it left off.<br>
<br>
Frozen cells can be neither executed nor edited. To run them again, release the frozen state by clicking <span class='fa fa-fw fa-snowflake-o' style='color:gray;'></span> at the top of the window.<br>
End of explanation
! cat foo
Explanation: "echo" に修正して実行してみましょう。
End of explanation
%env lc_wrapper 8:8:10:10
# lc_wrapper s:h:e:f
#
# s : Summary starts when # of output lines exceed 's' (default s=1)
# h : Summary displays the first h lines and max 2 x h error lines.
# e : Max # of output lines in progress.
# f : Summary displays the last f lines (default f=1)
!!from time import sleep
with open("./resources/bootstrap.log", "r") as f: # "/var/log/bootstrap.log"
count = 0
limit = 100
for line in f:
count = count+1
if count > limit: break
print(line, end=''),
sleep(0.05)
print ("after", limit, "lines are ignored")
#
# Emulate large log output..
Explanation: Jupyter-LC_wrapper
Usage:
* Summarize massive output lines.<br>
// Summarize huge log output.
* At each cell's execution all original output lines are saved into an individual file with a time stamp.<br>
// Save each execution's log to its own file, so the whole output can be reviewed later and results can be compared.
<span class='fa fa-fw fa-external-link'></span> GitHub: Jupyter-LC_wrapper
<p lang="all"><div style="width:30%">
[ <span class='fa fa-fw fa-youtube'></span>LC_wrapper Kernel](https://www.youtube.com/watch?v=-28XG7aHYY8)</div>
</p>
You can review the whole output and compare it with previous results from different executions.<br>
When you install some packages, there can be massive log output, and the Jupyter web UI is inefficient at handling a large number of output lines.
For example, installing a set of packages produces a huge amount of log output, and handling it in Jupyter's web UI is inconvenient in many ways: searching or comparing log contents inside a Jupyter cell is tedious and time-consuming.<br>
With Jupyter-LC_wrapper:<br>
- The output cell shows a summarized result (e.g., the first 10 lines and the last 10 lines).<br>
- The entire original output is saved to a file.<br>
- Each execution's result is saved to its own file, so outputs can be compared.<br>
- Even while summarizing, lines that match particular patterns such as errors are displayed (customizable in several ways, e.g., via lc_wrapper_regex.txt).<br>
Prefixing a command with "!!" in a cell enables the LC_wrapper features.
Use "!!!" when invoking a shell from a cell.
End of explanation
import pandas
import matplotlib
import matplotlib.pyplot as plt
import random
%matplotlib inline
plot_df = pandas.DataFrame({
'col1': [12, 3, random.randint(1,10), 4],
'col2': [3, 12, 5, 2],
'col3': [random.randint(4,7), 10, 3, random.randint(0,2)],
'col4': [random.randint(0,11), 5, random.randint(6,12), random.randint(0,5)],
})
plot_df
plot_df.plot()
Explanation: Jupyter-multi_outputs
Uses:
* Save output results.
* Compare the current output with previously saved outputs (text only, for now).
<span class='fa fa-fw fa-external-link'></span> Jupyter-multi-outputs
<img src="./images/multi_outputs.png" align="right" width=40% />Click the <span class='fa fa-fw fa-thumb-tack'></span> shown to the left of a cell after execution, and the contents of the output cell are saved as a tab.
When a saved output tab is selected, click the <span class='fa fa-fw fa-exchange'></span> that appears to compare that output with the current one.<br><br>
When you run a cell many times, you can keep the result of every attempt.<br>
For example, when authoring model procedures or teaching materials as notebooks, you can store the expected results or correct answers this way.
End of explanation
plot_df
Explanation: In the example above, clicking a tab lets you view a previous output.
End of explanation
plot_df.transpose
Explanation: When a previous output is selected for display, the <span class='fa fa-fw fa-thumb-tack'></span> changes to <span class='fa fa-fw fa-exchange'></span>. Click this <span class='fa fa-fw fa-exchange'></span> to compare the selected output with the current output.
End of explanation |
8,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 6
v1.1, 2020.4.5, edited by David Yi
Key points of this session
Random numbers
Exercise: a number-guessing game, etc.
Random numbers
The concept of a random number means different things in different fields, and random numbers play a very important role in cryptography and communications.
Python's random-number module is random. Its main functions are listed below; let's look at them with examples.
random.choice() gets a random element from a sequence
random.sample() creates a given number of random values within a specified range
random.random() generates a random float between 0 and 1
random.uniform() generates a random float within a specified range
random.randint() generates an integer within a specified range
random.shuffle() shuffles the elements of a list
Step1: Exercise
A number-guessing program
Step2: More elaborate random-content generation.
See the random section of our Python utility package fishbase, https
Step3: Change the guessing program so that the machine guesses, adjusting its strategy based on the answer the human returns each round
import random
# random.choice(sequence): the sequence argument is an ordered collection.
# random.choice gets one random element from the sequence.
print(random.choice(range(1,100)))
# produce a random element from a list
list1 = ['a', 'b', 'c']
print(random.choice(list1))
# random.sample()
# create a given number of random integers within a specified range
print(random.sample(range(1,100), 10))
print(random.sample(range(1,10), 5))
# what happens if the requested number of samples exceeds the size of the range?
# print(random.sample(range(1,10), 15))
# random.randint(a, b) generates an integer within the specified range.
# a is the lower bound and b the upper bound; the result n satisfies a <= n <= b
print(random.randint(1,100))
# random.randrange([start], stop[, step]),
# picks a random number from the set that increases by the given step within the range.
print(random.randrange(1,10))
# run it a few times and notice which values ever come up
print(random.randrange(1,10,3))
# random.random() generates a random float between 0 and 1: 0 <= n < 1.0
print(random.random())
# random.uniform(a, b)
# generates a random float within the specified range; one argument is the upper bound, the other the lower bound.
# If a < b, the result n satisfies a <= n <= b. If a > b, then b <= n <= a.
print(random.uniform(1,100))
print(random.uniform(50,10))
print(random.uniform(50,10))
# random.shuffle(x[, random])
# shuffles the elements of a list in place
a = [12, 23, 1, 5, 87]
random.shuffle(a)
print(a)
# random.sample(sequence, k)
# takes a random slice of length k from the sequence; sample does not modify the original sequence.
print(random.sample(range(10),5))
print(random.sample(range(10),7))
Explanation: Lesson 6
v1.1, 2020.4.5, edited by David Yi
Key points of this session
Random numbers
Exercise: a number-guessing game, etc.
Random numbers
The concept of a random number means different things in different fields, and random numbers play a very important role in cryptography and communications.
Python's random-number module is random. Its main functions are listed below; let's look at them with examples.
random.choice() gets a random element from a sequence
random.sample() creates a given number of random values within a specified range
random.random() generates a random float between 0 and 1
random.uniform() generates a random float within a specified range
random.randint() generates an integer within a specified range
random.shuffle() shuffles the elements of a list
End of explanation
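As a quick sanity check of the functions listed above, seeding the generator makes every call reproducible. A minimal sketch — since the exact values depend on the generator's internal state, the assertions only check the general properties each function guarantees:

```python
import random

random.seed(42)  # fix the seed so the run is reproducible

pick = random.choice(range(1, 100))        # one element from 1..99
sample = random.sample(range(1, 100), 10)  # 10 distinct elements
f = random.random()                        # float in [0.0, 1.0)
u = random.uniform(1, 100)                 # float in [1.0, 100.0]
n = random.randint(1, 100)                 # int in [1, 100]
deck = [12, 23, 1, 5, 87]
random.shuffle(deck)                       # in-place permutation

print(pick, f, n)
```

Note that shuffle permutes the list in place and returns None, which is why the list itself is inspected afterwards.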
# number guessing: the human guesses
# simple version
import random
a = random.randint(1,1000)
print('Now you can guess...')
guess_mark = True
while guess_mark:
user_number =int(input('please input number:'))
if user_number > a:
print('too big')
if user_number < a:
print('too small')
if user_number == a:
print('bingo!')
guess_mark = False
# number guessing: the human guesses
# keep a record of the guessing process
import random
# record which numbers the human guessed
user_number_list = []
# record how many times the human guessed
user_guess_count = 0
a = random.randint(1,100)
print('Now you can guess...')
guess_mark = True
# main loop
while guess_mark:
user_number =int(input('please input number:'))
user_number_list.append(user_number)
user_guess_count += 1
if user_number > a:
print('too big')
if user_number < a:
print('too small')
if user_number == a:
print('bingo!')
print('your guess number list:', user_number_list)
print('you try times:', user_guess_count)
guess_mark = False
# number guessing: the human guesses
# count the attempts; after more than 4 guesses, show a different prompt
import random
# record which numbers the human guessed
user_number_list = []
# record how many times the human guessed
user_guess_count = 0
a = random.randint(1,100)
print('Now you can guess...')
guess_mark = True
# main loop
while guess_mark:
if 0 <= user_guess_count <= 4:
user_number =int(input('please input number:'))
if 4 < user_guess_count <= 100:
user_number =int(input('try harder, please input number:'))
user_number_list.append(user_number)
user_guess_count += 1
if user_number > a:
print('too big')
if user_number < a:
print('too small')
if user_number == a:
print('bingo!')
print('your guess number list:', user_number_list)
print('you try times:', user_guess_count)
guess_mark = False
Explanation: Exercise
A number-guessing program
End of explanation
from fishbase.fish_random import *
# These card numbers merely conform to the standard format and pass basic bank-card checks, but they do not actually exist
# randomly generate a bank card number
print(gen_random_bank_card())
# randomly generate a debit card number for 中国银行 (Bank of China)
print(gen_random_bank_card('中国银行', 'CC'))
# randomly generate a credit card number for 中国银行 (Bank of China)
print(gen_random_bank_card('中国银行', 'DC'))
from fishbase.fish_random import *
# generate fake ID-card numbers that follow the standard segment layout and check digit
# specify the region of the ID card
print(gen_random_id_card('310000'))
# additionally specify an age
print(gen_random_id_card('310000', age=70))
# add both age and gender
print(gen_random_id_card('310000', age=30, gender='00'))
# generate a list of them
print(gen_random_id_card(age=30, gender='01', result_type='LIST'))
Explanation: More elaborate random-content generation.
See the random section of our Python utility package fishbase: https://fishbase.readthedocs.io/en/latest/fish_random.html
fish_random.gen_random_address(zone): given a province administrative-division code, return a random address within that province
fish_random.get_random_areanote(zone): given a province administrative-division code, return the name of a random district under it
fish_random.gen_random_bank_card([…]): given a bank name, randomly generate a card number for that bank
fish_random.gen_random_company_name(): randomly generate a company name
fish_random.gen_random_float(minimum, maximum): given a float range, generate and return a random float in the closed interval; limited by the precision of random.random, at most 15 significant digits are supported
fish_random.gen_random_id_card([zone, …]): given a province code, gender, or age, randomly generate an ID-card number
fish_random.gen_random_mobile(): randomly generate a mobile phone number
fish_random.gen_random_name([family_name, …]): given an optional surname, gender, and length, return a random personal name
fish_random.gen_random_str(min_length, …): given a prefix/suffix, string length, and character types, return a random string of the specified length with the given prefix and suffix
End of explanation
# number guessing: the machine guesses
min = 0
max = 1000
guess_ok_mark = False
while not guess_ok_mark:
cur_guess = int((min + max) / 2)
print('I guess:', cur_guess)
human_answer = input('Please tell me big or small:')
if human_answer == 'big':
max = cur_guess
if human_answer == 'small':
min = cur_guess
if human_answer == 'ok':
print('HAHAHA')
guess_ok_mark = True
Explanation: Change the guessing program so that the machine guesses, adjusting its strategy based on the answer the human returns each round
End of explanation |
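The machine's strategy above is binary search, so it can never need more than ⌈log2(range)⌉ guesses. A self-contained simulation that replaces the human's input with automatic big/small answers — the function name and bounds are illustrative, not part of the lesson code:

```python
import math
import random

def machine_guess(target, low=0, high=1000):
    """Bisect the interval until the target is found; return the guess count."""
    count = 0
    while True:
        guess = (low + high) // 2
        count += 1
        if guess == target:
            return count
        elif guess > target:   # the human would answer "big"
            high = guess
        else:                  # the human would answer "small"
            low = guess

random.seed(0)
# worst case over 200 random targets in 1..999
worst = max(machine_guess(random.randint(1, 999)) for _ in range(200))
print(worst)
```

Because the interval roughly halves on every guess, the worst case for a range of 1000 numbers is 10 guesses.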
8,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    x = 1 / (np.exp((energy - mu) / kT) + 1)
    return x
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{align}
\frac{1}{e^{(\epsilon - \mu)/kT} + 1}
\end{align}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
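Two properties follow directly from the formula and make handy sanity checks: at $\epsilon = \mu$ the distribution is exactly $1/2$, and for $\epsilon \gg \mu$ it falls toward zero. A minimal sketch using only the standard math module (scalar-only, unlike the vectorized version above):

```python
import math

def fermi(energy, mu, kT):
    # F(eps) = 1 / (exp((eps - mu)/kT) + 1)
    return 1.0 / (math.exp((energy - mu) / kT) + 1.0)

half = fermi(1.0, 1.0, 10.0)    # eps == mu  ->  exactly 0.5
tail = fermi(100.0, 1.0, 1.0)   # eps >> mu  ->  approximately 0
print(half, tail)
```

The value at `fermi(0.5, 1.0, 10.0)` should reproduce the reference number used in the assertion above.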
def plot_fermidist(mu, kT):
    E = np.linspace(0, 10., 100)
    y = plt.plot(E, fermidist(E, mu, kT))
    plt.xlabel(r'$\epsilon$')
    plt.ylabel(r'$F(\epsilon)$')
    plt.title('Fermi distribution')
    return y
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu = (0.0,5.0), kT=(.1,10.0));
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
8,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS446/519 - Class Session 7 - Transitivity (Clustering Coefficients)
In this class session we are going to compute the local clustering coefficient of all vertices in the undirected human
protein-protein interaction network (PPI), in two ways -- first without using igraph, and the using igraph. We'll obtain the interaction data from the Pathway Commons SIF file (in the shared/ folder), we'll make an "adjacency forest" representation of the network, and we'll manually compute the local clustering coefficient of each vertex (protein) in the network using the "enumerating neighbor pairs" method described by Newman. Then we'll run the same algorithm using the transitivity_local_undirected function in igraph, and we'll compare the results in order to check our work. Grad students
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: Step 9
Step10: Step 10
Step11: So the built-in python dictionary type gave us fantastic performance. But is this coming at the cost of huge memory footprint? Let's check the size of our adjacency "list of hashtables", in MB | Python Code:
from igraph import Graph
from igraph import summary
import pandas
import numpy
import timeit
from pympler import asizeof
import bintrees
Explanation: CS446/519 - Class Session 7 - Transitivity (Clustering Coefficients)
In this class session we are going to compute the local clustering coefficient of all vertices in the undirected human
protein-protein interaction network (PPI), in two ways -- first without using igraph, and the using igraph. We'll obtain the interaction data from the Pathway Commons SIF file (in the shared/ folder), we'll make an "adjacency forest" representation of the network, and we'll manually compute the local clustering coefficient of each vertex (protein) in the network using the "enumerating neighbor pairs" method described by Newman. Then we'll run the same algorithm using the transitivity_local_undirected function in igraph, and we'll compare the results in order to check our work. Grad students: you should also group vertices by their "binned" vertex degree k (bin size 50, total number of bins = 25) and plot the average local clustering coefficient for the vertices within a bin, against the center k value for the bin, on log-log scale (compare to Newman Fig. 8.12)
End of explanation
sif_data = pandas.read_csv("shared/pathway_commons.sif",
sep="\t", names=["species1","interaction_type","species2"])
Explanation: Step 1: load in the SIF file (refer to Class 6 exercise) into a data frame sif_data, using the pandas.read_csv function, and name the columns species1, interaction_type, and species2.
End of explanation
interaction_types_ppi = set(["interacts-with",
"in-complex-with"])
interac_ppi = sif_data[sif_data.interaction_type.isin(interaction_types_ppi)]
Explanation: Step 2: restrict the interactions to protein-protein undirected ("in-complex-with", "interacts-with"), by using the isin function and then using [ to index rows into the data frame. Call the returned ata frame interac_ppi.
End of explanation
for i in range(0, interac_ppi.shape[0]):
    if interac_ppi.iat[i, 0] > interac_ppi.iat[i, 2]:
        # swap so the lexicographically smaller protein is always in species1;
        # use positional .iat assignment (set_value is deprecated and was
        # label-based, which does not match the positional index i here)
        temp_name = interac_ppi.iat[i, 0]
        interac_ppi.iat[i, 0] = interac_ppi.iat[i, 2]
        interac_ppi.iat[i, 2] = temp_name
interac_ppi_unique = interac_ppi[["species1","species2"]].drop_duplicates()
ppi_igraph = Graph.TupleList(interac_ppi_unique.values.tolist(), directed=False)
summary(ppi_igraph)
Explanation: Step 3: restrict the data frame to only the unique interaction pairs of proteins (ignoring the interaction type), and call that data frame interac_ppi_unique. Make an igraph Graph object from interac_ppi_unique using Graph.TupleList, values, and tolist. Call summary on the Graph object. Refer to the notebooks for the in-class exercises in Class sessions 3 and 6.
End of explanation
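The swap-then-drop_duplicates step above canonicalizes each undirected edge so that (A, B) and (B, A) collapse into a single row. The same idea in pure Python on a toy pair list — the protein names here are only illustrative:

```python
pairs = [("EGFR", "GRB2"), ("GRB2", "EGFR"), ("TP53", "MDM2")]

# sort each pair so the lexicographically smaller name comes first,
# then use a set comprehension to drop the resulting duplicates
unique_edges = sorted({tuple(sorted(p)) for p in pairs})
print(unique_edges)
```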
ppi_adj_list = ppi_igraph.get_adjlist()
Explanation: Step 4: Obtain an adjacency list representation of the graph (refer to Class 5 exercise), using get_adjlist.
End of explanation
def get_bst_forest(theadjlist):
g_adj_list = theadjlist
n = len(g_adj_list)
theforest = []
for i in range(0,n):
itree = bintrees.AVLTree()
for j in g_adj_list[i]:
itree.insert(j,1)
theforest.append(itree)
return theforest
def find_bst_forest(bst_forest, i, j):
return j in bst_forest[i]
ppi_adj_forest = get_bst_forest(ppi_adj_list)
Explanation: Step 5: Make an "adjacency forest" data structure as a list of AVLTree objects (refer to Class 5 exercise). Call this adjacency forest, ppi_adj_forest.
End of explanation
N = len(ppi_adj_list)
civals = numpy.zeros(100)
civals[:] = numpy.NaN
start_time = timeit.default_timer()
# enumerate neighbor pairs for each of the first 100 vertices and count
# how many pairs are themselves connected (looked up in the AVL forest)
for n in range(0, 100):
    neighbors = ppi_adj_list[n]
    nneighbors = len(neighbors)
    if nneighbors > 1:
        nctr = 0
        for i in range(0, nneighbors):
            for j in range(i + 1, nneighbors):
                if find_bst_forest(ppi_adj_forest, neighbors[i], neighbors[j]):
                    nctr += 1
        civals[n] = nctr / (nneighbors * (nneighbors - 1) / 2)
ci_elapsed = timeit.default_timer() - start_time
print(ci_elapsed)
Explanation: Step 6: Compute the local clustering coefficient (Ci) values of the first 100 vertices (do timing on this operation) as a numpy.array; for any vertex with degree=1, it's Ci value can be numpy NaN. You'll probably want to have an outer for loop for vertex ID n going from 0 to 99, and then an inner for loop iterating over neighbor vertices of vertex n. Store the clustering coefficients in a list, civals. Print out how many seconds it takes to perform this calculation.
End of explanation
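Before running the neighbor-pair enumeration on the full PPI network, it can be checked on a toy graph: in a triangle every vertex has Ci = 1, and attaching a pendant vertex lowers its neighbor's Ci. A pure-Python sketch using a dict-of-sets adjacency structure (not the AVL forest, but the same counting logic):

```python
# toy graph: triangle 0-1-2 plus pendant vertex 3 attached to vertex 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}

def local_cc(adj, n):
    """Local clustering coefficient of vertex n; NaN for degree < 2."""
    nbrs = list(adj[n])
    k = len(nbrs)
    if k < 2:
        return float("nan")
    # count connected neighbor pairs
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return links / (k * (k - 1) / 2)

print([local_cc(adj, v) for v in adj])
```

Vertex 0 has three neighbors but only one connected pair, giving Ci = 1/3, while the two triangle-only vertices get Ci = 1.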
start_time = timeit.default_timer()
civals_igraph = ppi_igraph.transitivity_local_undirected(vertices=list(range(0,100)))
ci_elapsed = timeit.default_timer() - start_time
print(ci_elapsed)
Explanation: Step 7: Calculate the local clustering coefficients for the first 100 vertices using
the method igraph.Graph.transitivity_local_undirected and save the results as a list civals_igraph. Do timing on the call to transitivity_local_undirected, using vertices= to specify the vertices for which you want to compute the local clustering coefficient.
End of explanation
import matplotlib.pyplot
matplotlib.pyplot.plot(civals, civals_igraph)
matplotlib.pyplot.xlabel("Ci (my code)")
matplotlib.pyplot.ylabel("Ci (igraph)")
matplotlib.pyplot.show()
Explanation: Step 8: Compare your Ci values to those that you got from igraph, using a scatter plot where civals is on the horizontal axis and civals_igraph is on the vertical axis.
End of explanation
civals_igraph = numpy.array(ppi_igraph.transitivity_local_undirected())
deg_igraph = ppi_igraph.degree()
deg_npa = numpy.array(deg_igraph)
deg_binids = numpy.rint(deg_npa/50)
binkvals = 50*numpy.array(range(0,25))
civals_avg = numpy.zeros(25)
for i in range(0,25):
civals_avg[i] = numpy.mean(civals_igraph[deg_binids == i])
matplotlib.pyplot.loglog(
binkvals,
civals_avg)
matplotlib.pyplot.ylabel("<Ci>")
matplotlib.pyplot.xlabel("k")
matplotlib.pyplot.show()
Explanation: Step 9: scatter plot the average log(Ci) vs. log(k) (i.e., local clustering coefficient vs. vertex degree) for 25 bins of vertex degree, with each bin size being 50 (so we are binning by k, and the bin centers are 50, 100, 150, 200, ...., 1250)
End of explanation
civals = numpy.zeros(len(ppi_adj_list))
civals[:] = numpy.NaN
ppi_adj_hash = []
for i in range(0, len(ppi_adj_list)):
newhash = {}
for j in ppi_adj_list[i]:
newhash[j] = True
ppi_adj_hash.append(newhash)
start_time = timeit.default_timer()
for n in range(0, len(ppi_adj_list)):
neighbors = ppi_adj_hash[n]
nneighbors = len(neighbors)
if nneighbors > 1:
nctr = 0
for i in neighbors:
for j in neighbors:
if (j > i) and (j in ppi_adj_hash[i]):
nctr += 1
civals[n] = nctr/(nneighbors*(nneighbors-1)/2)
ci_elapsed = timeit.default_timer() - start_time
print(ci_elapsed)
Explanation: Step 10: Now try computing the local clustering coefficient using a "list of hashtables" approach; compute the local clustering coefficients for all vertices, and compare to the timing for R. Which is faster, the python3 implementation or the R implementation?
End of explanation
asizeof.asizeof(ppi_adj_hash)/1000000
Explanation: So the built-in python dictionary type gave us fantastic performance. But is this coming at the cost of huge memory footprint? Let's check the size of our adjacency "list of hashtables", in MB:
End of explanation |
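A rough stdlib-only version of the same memory check uses sys.getsizeof; unlike pympler's asizeof it does not recurse into contained objects, so the inner dicts have to be summed explicitly. The structure below is a small synthetic stand-in for the adjacency list of hashtables:

```python
import sys

# synthetic stand-in: 1000 vertices, each with 50 neighbors
adj_hash = [{j: True for j in range(50)} for _ in range(1000)]

# shallow size of the outer list plus the size of each dict it holds, in MB
total_bytes = sys.getsizeof(adj_hash) + sum(sys.getsizeof(d) for d in adj_hash)
print(total_bytes / 1e6)
```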
8,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Histogrammar advanced tutorial
Histogrammar is a Python package that allows you to make histograms from numpy arrays, and pandas and spark dataframes. (There is also a scala backend for Histogrammar.)
This advanced tutorial shows how to
Step1: Data generation
Let's first load some data!
Step2: What about Spark DataFrames?
No problem! We can easily perform the same steps on a Spark DataFrame. One important thing to note there is that we need to include a jar file when we create our Spark session. This is used by spark to create the histograms using Histogrammar. The jar file will be automatically downloaded the first time you run this command.
Step3: Filling histograms with spark
Filling histograms with spark dataframes is just as simple as it is with pandas dataframes.
Step4: Let's make the same histogram but from a spark dataframe. There are just two differences
Step5: Apart from these two differences, all functionality is the same between pandas and spark histograms!
Like pandas, we can also do directly from the dataframe
Step6: All examples below also work with spark dataframes.
Making many histograms at once
Histogrammar has a nice method to make many histograms in one go. See here.
By default automagical binning is applied to make the histograms.
Step7: Working with timestamps
Step8: Histogrammar does not support pandas' timestamps natively, but converts timestamps into nanoseconds since 1970-1-1.
Step9: The datatype shows the datetime though
Step10: Setting binning specifications | Python Code:
%%capture
# install histogrammar (if not installed yet)
import sys
!"{sys.executable}" -m pip install histogrammar
import histogrammar as hg
import pandas as pd
import numpy as np
import matplotlib
Explanation: Histogrammar advanced tutorial
Histogrammar is a Python package that allows you to make histograms from numpy arrays, and pandas and spark dataframes. (There is also a scala backend for Histogrammar.)
This advanced tutorial shows how to:
- work with spark dataframes,
- make many histograms at ones, which is one of the nice features of histogrammar, and how to configure that. For example how to set bin specifications, or how to deal with a time-axis.
Enjoy!
End of explanation
# open a pandas dataframe for use below
from histogrammar import resources
df = pd.read_csv(resources.data("test.csv.gz"), parse_dates=["date"])
df.head()
Explanation: Data generation
Let's first load some data!
End of explanation
# download histogrammar jar files if not already installed, used for histogramming of spark dataframe
try:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark import __version__ as pyspark_version
pyspark_installed = True
except ImportError:
print("pyspark needs to be installed for this example")
pyspark_installed = False
# this is the jar file for spark 3.0
# for spark 2.X, in the jars string, for both jar files change "_2.12" into "_2.11".
if pyspark_installed:
scala = '2.12' if int(pyspark_version[0]) >= 3 else '2.11'
hist_jar = f'io.github.histogrammar:histogrammar_{scala}:1.0.20'
hist_spark_jar = f'io.github.histogrammar:histogrammar-sparksql_{scala}:1.0.20'
spark = SparkSession.builder.config(
"spark.jars.packages", f'{hist_spark_jar},{hist_jar}'
).getOrCreate()
sdf = spark.createDataFrame(df)
Explanation: What about Spark DataFrames?
No problem! We can easily perform the same steps on a Spark DataFrame. One important thing to note there is that we need to include a jar file when we create our Spark session. This is used by spark to create the histograms using Histogrammar. The jar file will be automatically downloaded the first time you run this command.
End of explanation
# example: filling from a pandas dataframe
hist = hg.SparselyHistogram(binWidth=100, quantity='transaction')
hist.fill.numpy(df)
hist.plot.matplotlib();
# for spark you will need this spark column function:
if pyspark_installed:
from pyspark.sql.functions import col
Explanation: Filling histograms with spark
Filling histograms with spark dataframes is just as simple as it is with pandas dataframes.
End of explanation
# example: filling from a spark dataframe
if pyspark_installed:
hist = hg.SparselyHistogram(binWidth=100, quantity=col('transaction'))
hist.fill.sparksql(sdf)
hist.plot.matplotlib();
Explanation: Let's make the same histogram but from a spark dataframe. There are just two differences:
- When declaring a histogram, always set quantity to col('columns_name') instead of 'columns_name'
- When filling the histogram from a dataframe, use the fill.sparksql() method instead of fill.numpy().
End of explanation
if pyspark_installed:
h2 = sdf.hg_SparselyProfileErr(25, col('longitude'), col('age'))
h2.plot.matplotlib();
if pyspark_installed:
h3 = sdf.hg_TwoDimensionallySparselyHistogram(25, col('longitude'), 10, col('latitude'))
h3.plot.matplotlib();
Explanation: Apart from these two differences, all functionality is the same between pandas and spark histograms!
Like pandas, we can also do directly from the dataframe:
End of explanation
hists = df.hg_make_histograms()
# histogrammar has made histograms of all features, using an automated binning.
hists.keys()
h = hists['transaction']
h.plot.matplotlib();
# you can select which features you want to histogram with features=:
hists = df.hg_make_histograms(features = ['longitude', 'age', 'eyeColor'])
# you can also make multi-dimensional histograms
# here longitude is the first axis of each histogram.
hists = df.hg_make_histograms(features = ['longitude:age', 'longitude:age:eyeColor'])
Explanation: All examples below also work with spark dataframes.
Making many histograms at once
Histogrammar has a nice method to make many histograms in one go. See here.
By default automagical binning is applied to make the histograms.
End of explanation
# Working with a dedicated time axis, make histograms of each feature over time.
hists = df.hg_make_histograms(time_axis="date")
hists.keys()
h2 = hists['date:age']
h2.plot.matplotlib();
Explanation: Working with timestamps
End of explanation
h2.bin_edges()
Explanation: Histogrammar does not support pandas' timestamps natively, but converts timestamps into nanoseconds since 1970-1-1.
End of explanation
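The nanoseconds-since-epoch convention is easy to reproduce without histogrammar: a timezone-naive datetime's offset from 1970-01-01, scaled to nanoseconds, matches what `pd.Timestamp(...).value` returns for a naive date. A stdlib-only sketch:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)

def to_ns(dt):
    # whole nanoseconds between a naive datetime and the Unix epoch
    delta = dt - EPOCH
    return (delta.days * 86_400 + delta.seconds) * 10**9 + delta.microseconds * 10**3

ns = to_ns(datetime(2015, 1, 1))
print(ns)
# and the conversion round-trips back to a datetime
print(EPOCH + timedelta(microseconds=ns // 1000))
```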
h2.datatype
# convert these back to timestamps with:
pd.Timestamp(h2.bin_edges()[0])
# For the time axis, you can set the binning specifications with time_width and time_offset:
hists = df.hg_make_histograms(time_axis="date", time_width='28d', time_offset='2014-1-4', features=['date:isActive', 'date:age'])
hists['date:isActive'].plot.matplotlib();
Explanation: The datatype shows the datetime though:
End of explanation
# histogram selections. Here 'date' is the first axis of each histogram.
features=[
'date', 'latitude', 'longitude', 'age', 'eyeColor', 'favoriteFruit', 'transaction'
]
# Specify your own binning specifications for individual features or combinations thereof.
# This bin specification uses open-ended ("sparse") histograms; unspecified features get
# auto-binned. The time-axis binning, when specified here, needs to be in nanoseconds.
bin_specs={
'longitude': {'binWidth': 10.0, 'origin': 0.0},
'latitude': {'edges': [-100, -75, -25, 0, 25, 75, 100]},
'age': {'num': 100, 'low': 0, 'high': 100},
'transaction': {'centers': [-1000, -500, 0, 500, 1000, 1500]},
'date': {'binWidth': pd.Timedelta('4w').value, 'origin': pd.Timestamp('2015-1-1').value}
}
# this binning specification is making:
# - a sparse histogram for: longitude
# - an irregular binned histogram for: latitude
# - a closed-range evenly spaced histogram for: age
# - a histogram centered around bin centers for: transaction
hists = df.hg_make_histograms(features=features, bin_specs=bin_specs)
hists.keys()
hists['transaction'].plot.matplotlib();
# all available bin specifications are (just examples):
bin_specs = {'x': {'bin_width': 1, 'bin_offset': 0}, # SparselyBin histogram
'y': {'num': 10, 'low': 0.0, 'high': 2.0}, # Bin histogram
'x:y': [{}, {'num': 5, 'low': 0.0, 'high': 1.0}], # SparselyBin vs Bin histograms
'a': {'edges': [0, 2, 10, 11, 21, 101]}, # IrregularlyBin histogram
'b': {'centers': [1, 6, 10.5, 16, 20, 100]}, # CentrallyBin histogram
'c': {'max': True}, # Maximize histogram
'd': {'min': True}, # Minimize histogram
'e': {'sum': True}, # Sum histogram
'z': {'deviate': True}, # Deviate histogram
'f': {'average': True}, # Average histogram
'a:f': [{'edges': [0, 10, 101]}, {'average': True}], # IrregularlyBin vs Average histograms
'g': {'thresholds': [0, 2, 10, 11, 21, 101]}, # Stack histogram
'h': {'bag': True}, # Bag histogram
}
# to set binning specs for a specific 2d histogram, you can do this:
# if these are not provide, the 1d binning specifications are picked up for 'a:f'
bin_specs = {'a:f': [{'edges': [0, 10, 101]}, {'average': True}]}
# For example
features = ['latitude:age', 'longitude:age', 'age', 'longitude']
bin_specs = {
'latitude': {'binWidth': 25},
'longitude:': {'edges': [-100, -75, -25, 0, 25, 75, 100]},
'age': {'deviate': True},
'longitude:age': [{'binWidth': 25}, {'average': True}],
}
hists = df.hg_make_histograms(features=features, bin_specs=bin_specs)
h = hists['latitude:age']
h.bins
hists['longitude:age'].plot.matplotlib();
Explanation: Setting binning specifications
End of explanation |
8,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 16 - The BART model of risk taking
16.1 The BART model
Balloon Analogue Risk Task (BART
Step1: 16.2 A hierarchical extension of the BART model
$$ \mu_{\gamma^{+}} \sim \text{Uniform}(0,10) $$
$$ \sigma_{\gamma^{+}} \sim \text{Uniform}(0,10) $$
$$ \mu_{\beta} \sim \text{Uniform}(0,10) $$
$$ \sigma_{\beta} \sim \text{Uniform}(0,10) $$
$$ \gamma^{+}_i \sim \text{Gaussian}(\mu_{\gamma^{+}}, 1/\sigma_{\gamma^{+}}^2) $$
$$ \beta_i \sim \text{Gaussian}(\mu_{\beta}, 1/\sigma_{\beta}^2) $$
$$ \omega_i = -\gamma^{+}_i \,/\,\text{log}(1-p) $$
$$ \theta_{ijk} = \frac{1}{1+e^{\beta_i(k-\omega_i)}} $$
$$ d_{ijk} \sim \text{Bernoulli}(\theta_{ijk}) $$
p = .15 # (Belief of) bursting probability
ntrials = 90 # Number of trials for the BART
Data = pd.read_csv('data/GeorgeSober.txt', sep='\t')
# Data.head()
cash = np.asarray(Data['cash']!=0, dtype=int)
npumps = np.asarray(Data['pumps'], dtype=int)
options = cash + npumps
d = np.full([ntrials,30], np.nan)
k = np.full([ntrials,30], np.nan)
# response vector
for j, ipumps in enumerate(npumps):
inds = np.arange(options[j],dtype=int)
k[j,inds] = inds+1
if ipumps > 0:
d[j,0:ipumps] = 0
if cash[j] == 1:
d[j,ipumps] = 1
indexmask = np.isfinite(d)
d = d[indexmask]
k = k[indexmask]
with pm.Model():
gammap = pm.Uniform('gammap', lower=0, upper=10, testval=1.2)
beta = pm.Uniform('beta', lower=0, upper=10, testval=.5)
omega = pm.Deterministic('omega', -gammap/np.log(1-p))
thetajk = 1 - pm.math.invlogit(- beta * (k - omega))
djk = pm.Bernoulli('djk', p=thetajk, observed=d)
trace = pm.sample(3e3, njobs=2)
pm.traceplot(trace, varnames=['gammap', 'beta']);
from scipy.stats.kde import gaussian_kde
burnin=2000
gammaplus = trace['gammap'][burnin:]
beta = trace['beta'][burnin:]
fig = plt.figure(figsize=(15, 5))
gs = gridspec.GridSpec(1, 3)
ax0 = plt.subplot(gs[0])
ax0.hist(npumps, bins=range(1, 9), rwidth=.8, align='left')
plt.xlabel('Number of Pumps', fontsize=12)
plt.ylabel('Frequency', fontsize=12)
ax1 = plt.subplot(gs[1])
my_pdf1 = gaussian_kde(gammaplus)
x1=np.linspace(.5, 1, 200)
ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function
plt.xlim((.5, 1))
plt.xlabel(r'$\gamma^+$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12)
ax2 = plt.subplot(gs[2])
my_pdf2 = gaussian_kde(beta)
x2=np.linspace(0.3, 1.3, 200)
ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6,) # distribution function
plt.xlim((0.3, 1.3))
plt.xlabel(r'$\beta$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12);
Explanation: Chapter 16 - The BART model of risk taking
16.1 The BART model
Balloon Analogue Risk Task (BART: Lejuez et al., 2002): Every trial in this task starts by showing a balloon representing a small monetary value. The subject can then either transfer the money to a virtual bank account, or choose to pump, which adds a small amount of air to the balloon, and increases its value. There is some probability, however, that pumping the balloon will cause it to burst, causing all the money to be lost. A trial finishes when either the subject has transferred the money, or the balloon has burst.
$$ \gamma^{+} \sim \text{Uniform}(0,10) $$
$$ \beta \sim \text{Uniform}(0,10) $$
$$ \omega = -\gamma^{+} \,/\,\text{log}(1-p) $$
$$ \theta_{jk} = \frac{1} {1+e^{\beta(k-\omega)}} $$
$$ d_{jk} \sim \text{Bernoulli}(\theta_{jk}) $$
End of explanation
p = .15 # (Belief of) bursting probability
ntrials = 90 # Number of trials for the BART
Ncond = 3
dall = np.full([Ncond,ntrials,30], np.nan)
options = np.zeros((Ncond,ntrials))
kall = np.full([Ncond,ntrials,30], np.nan)
npumps_ = np.zeros((Ncond,ntrials))
for icondi in range(Ncond):
if icondi == 0:
Data = pd.read_csv('data/GeorgeSober.txt',sep='\t')
elif icondi == 1:
Data = pd.read_csv('data/GeorgeTipsy.txt',sep='\t')
elif icondi == 2:
Data = pd.read_csv('data/GeorgeDrunk.txt',sep='\t')
# Data.head()
cash = np.asarray(Data['cash']!=0, dtype=int)
npumps = np.asarray(Data['pumps'], dtype=int)
npumps_[icondi,:] = npumps
options[icondi,:] = cash + npumps
# response vector
for j, ipumps in enumerate(npumps):
inds = np.arange(options[icondi,j],dtype=int)
kall[icondi,j,inds] = inds+1
if ipumps > 0:
dall[icondi,j,0:ipumps] = 0
if cash[j] == 1:
dall[icondi,j,ipumps] = 1
indexmask = np.isfinite(dall)
dij = dall[indexmask]
kij = kall[indexmask]
condall = np.tile(np.arange(Ncond,dtype=int),(30,ntrials,1))
condall = np.swapaxes(condall,0,2)
cij = condall[indexmask]
with pm.Model() as model2:
mu_g = pm.Uniform('mu_g', lower=0, upper=10)
sigma_g = pm.Uniform('sigma_g', lower=0, upper=10)
mu_b = pm.Uniform('mu_b', lower=0, upper=10)
sigma_b = pm.Uniform('sigma_b', lower=0, upper=10)
gammap = pm.Normal('gammap', mu=mu_g, sd=sigma_g, shape=Ncond)
beta = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=Ncond)
omega = -gammap[cij]/np.log(1-p)
thetajk = 1 - pm.math.invlogit(- beta[cij] * (kij - omega))
djk = pm.Bernoulli("djk", p=thetajk, observed=dij)
approx = pm.fit(n=100000, method='advi',
obj_optimizer=pm.adagrad_window
) # type: pm.MeanField
start = approx.sample(draws=2, include_transformed=True)
trace2 = pm.sample(3e3, njobs=2, init='adapt_diag', start=list(start))
pm.traceplot(trace2, varnames=['gammap', 'beta']);
burnin=1000
gammaplus = trace2['gammap'][burnin:]
beta = trace2['beta'][burnin:]
ylabels = ['Sober', 'Tipsy', 'Drunk']
fig = plt.figure(figsize=(15, 12))
gs = gridspec.GridSpec(3, 3)
for ic in range(Ncond):
ax0 = plt.subplot(gs[0+ic*3])
ax0.hist(npumps_[ic], bins=range(1, 10), rwidth=.8, align='left')
plt.xlabel('Number of Pumps', fontsize=12)
plt.ylabel(ylabels[ic], fontsize=12)
ax1 = plt.subplot(gs[1+ic*3])
my_pdf1 = gaussian_kde(gammaplus[:, ic])
x1=np.linspace(.5, 1.8, 200)
ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function
plt.xlim((.5, 1.8))
plt.xlabel(r'$\gamma^+$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12)
ax2 = plt.subplot(gs[2+ic*3])
my_pdf2 = gaussian_kde(beta[:, ic])
x2=np.linspace(0.1, 1.5, 200)
ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6) # distribution function
plt.xlim((0.1, 1.5))
plt.xlabel(r'$\beta$', fontsize=15)
plt.ylabel('Posterior Density', fontsize=12);
Explanation: 16.2 A hierarchical extension of the BART model
$$ \mu_{\gamma^{+}} \sim \text{Uniform}(0,10) $$
$$ \sigma_{\gamma^{+}} \sim \text{Uniform}(0,10) $$
$$ \mu_{\beta} \sim \text{Uniform}(0,10) $$
$$ \sigma_{\beta} \sim \text{Uniform}(0,10) $$
$$ \gamma^{+}_i \sim \text{Gaussian}(\mu_{\gamma^{+}}, 1/\sigma_{\gamma^{+}}^2) $$
$$ \beta_i \sim \text{Gaussian}(\mu_{\beta}, 1/\sigma_{\beta}^2) $$
$$ \omega_i = -\gamma^{+}_i \,/\,\text{log}(1-p) $$
$$ \theta_{ijk} = \frac{1} {1+e^{\beta_i(k-\omega_i)}} $$
$$ d_{ijk} \sim \text{Bernoulli}(\theta_{ijk}) $$
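The condition-specific part of the hierarchy can be sketched with plain NumPy; every numeric value here is an illustrative assumption, not a fitted estimate:

```python
import numpy as np

# Each condition i gets its own gamma+_i and beta_i, giving a
# condition-specific optimum omega_i (parameter values assumed).
p = 0.15
gamma_i = np.array([1.2, 1.0, 0.8])   # one gamma+ per condition
beta_i = np.array([0.9, 0.7, 0.5])    # one beta per condition

omega_i = -gamma_i / np.log(1 - p)
k = 5  # pump opportunity index
theta_ik = 1.0 / (1.0 + np.exp(beta_i * (k - omega_i)))
print(theta_ik.shape)  # (3,): one pump probability per condition
```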
End of explanation |
8,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$
f\left( v \right) = k_1 \cdot v^2 + k_2 \cdot v + k_3
$$
Step1: $$
\dot{v} = f\left( v \right) - u + I
$$
Step2: $$
\dot{u} = a \cdot \left( b \cdot v - u \right)
$$
Step3: $$
v \approx v_{thr}
\Longrightarrow
\begin{cases}
v \rightarrow c \\
u \rightarrow u + d
\end{cases}
$$
Isoclines
$$
\begin{cases}
0 = \dot{v} = f\left( v \right) - u + I \\
0 = \dot{u} = a \cdot \left( b \cdot v - u \right)
\end{cases}
\Longrightarrow
\begin{cases}
k_1 \cdot v^2 + k_2 \cdot v + k_3 - u + I = 0 \\
a \cdot \left( b \cdot v - u \right) = 0
\end{cases}
\Longrightarrow
\begin{cases}
k_1 \cdot v^2 + v \cdot \left( k_2 - b \right) + k_3 + I = 0 \\
b \cdot v = u
\end{cases}
$$
Discriminant for the first equation
$$
D = \left( k_2 - b \right)^2 - 4 \cdot k_1 \cdot \left( k_3 + I \right)
$$
Thus
$$
\begin{cases}
v_{1, 2} = \frac{b - k_2 \pm \sqrt{\left( k_2 - b \right)^2 - 4 \cdot k_1 \cdot \left( k_3 + I \right)}}{2 \cdot k_1} \\
u_{1, 2} = v_{1, 2} \cdot b
\end{cases}
$$
Critical value for $I$ can be fetched from the equation
$$
\left( k_2 - b \right)^2 - 4 \cdot k_1 \cdot \left( k_3 + I \right) = 0
$$
Result is
$$
I_{cr} = \frac{\left( k_2 - b \right)^2}{4 \cdot k_1} - k_3
$$
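As a quick numeric sanity check of this formula (the coefficients k = (0.04, 5, 140) and b = 0.2 are assumed for illustration and need not match the notebook's values):

```python
# Critical current I_cr = (k2 - b)^2 / (4 k1) - k3 for assumed parameters.
k = (0.04, 5.0, 140.0)
b = 0.2

I_cr = (k[1] - b) ** 2 / (4 * k[0]) - k[2]
print(I_cr)  # 4.0; for I above this the discriminant turns negative
```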
Step4: Critical values
$$
\begin{cases}
v_{cr} = \frac{b - k_2}{2 \cdot k_1} \\
u_{cr} = b \cdot v_{cr}
\end{cases}
$$
Linearization
$$
\begin{cases}
\dot{v} = k_1 \cdot v^2 + k_2 \cdot v + k_3 - u + I \\
\dot{u} = a \cdot \left( b \cdot v - u \right)
\end{cases}
$$
Jacobian
$$
J =
\begin{bmatrix}
2 \cdot k_1 \cdot v + k_2 & -1 \\
a \cdot b & -a
\end{bmatrix}
$$
Linearized Jacobian
$$
J_l =
\begin{bmatrix}
b & -1 \\
a \cdot b & -a
\end{bmatrix}
$$
Characteristics polynomial
$$
\begin{vmatrix}
b - \lambda & -1 \\
a \cdot b & -a - \lambda
\end{vmatrix}
= \left( - b + \lambda \right) \cdot \left( a + \lambda \right) + a \cdot b
= - b \cdot a + \lambda \cdot a - b \cdot \lambda + \lambda \cdot \lambda + a \cdot b
= \lambda^2 + \lambda \cdot \left( a - b \right) = 0
$$
Finally
$$
\lambda \cdot \left( \lambda + a - b \right) = 0
$$
Solutions
$$
\begin{cases}
\lambda_1 = 0 \\
\lambda_2 = b - a
\end{cases}
$$
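These roots can be cross-checked against a numeric eigendecomposition of the linearized Jacobian; a = 0.08 and b = 0.2 below are illustrative assumptions:

```python
import numpy as np

a, b = 0.08, 0.2  # assumed values for illustration
J = np.array([[b, -1.0],
              [a * b, -a]])

w = np.sort(np.real(np.linalg.eigvals(J)))
print(w)  # approximately [0.0, 0.12], i.e. [0, b - a]
```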
Step5: $\lambda_{1, 2} = 0.06 \mp 0.06$ — eigenvalues are real and of opposite signs — we have a saddle point which is always unstable equilibrium
Eigenvectors
By definition eigenvectors $e_{1, 2}$ can be found from the equation
$$
\begin{bmatrix}
b & -1 \\
a \cdot b & -a
\end{bmatrix}
\cdot e
= e \cdot \lambda
$$
In scalar form
$$
\begin{bmatrix}
e_v \cdot b - e_u \\
e_v \cdot a \cdot b - e_u \cdot a
\end{bmatrix}
= \begin{bmatrix} e_v \\ e_u \end{bmatrix}
\cdot \lambda
$$
Move right part to left
$$
\begin{bmatrix}
e_v \cdot \left( b - \lambda \right) - e_u \\
e_v \cdot a \cdot b - e_u \cdot \left( a + \lambda \right)
\end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}
$$
We can easily find equations for $e_u$
$$
\begin{cases}
e_u = e_v \cdot \left( b - \lambda \right) \\
e_u = e_v \cdot \frac{a \cdot b}{a + \lambda}
\end{cases}
$$
As the rank of matrix is $1$ because determinant is $0$ by condition of eigenvalues,
we can have eigenvectors with $e_v = 1$
$$
e_{1, 2}
= \begin{bmatrix}
1 \\
b - \lambda_{1, 2}
\end{bmatrix}
$$
Step6: Check
Following numbers should be almost equal to zero (almost, because of machine precision)
Step7: Following calculations are done with NumPy library and contain real eigenvalues and normed eigenvectors
Step8: These are the values of the right part of the solved equation
$$
\begin{bmatrix}
b & -1 \\
a \cdot b & -a
\end{bmatrix}
\cdot e_{1, 2}
$$
Step9: These are the right ones
$$
e \cdot \lambda
$$
Step10: If they're almost equal, we're done
Step11: Eigenvectors
Let's normalize our eigenvectors
$$
e := \frac{e}{\left\| e \right\|}
$$
Step12: $e_{1, 2} = \begin{bmatrix}
0.997 \mp 0.002 \\
-0.04 \pm 0.06
\end{bmatrix}$
Step13: Separatrix
Let's take a look at plots with
$$\left( v_0, u_0 \right) = \left( v_{crit} \pm e_v, u_{crit} \pm e_u \right)$$
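Under the hood, such trajectories come from a forward-Euler integration of the two equations plus the reset rule. A self-contained sketch of that scheme (every numeric value here, k, a, b, c, d, v_thr, I, dt and the initial state, is an illustrative assumption, not the notebook's actual parameters):

```python
# Forward-Euler integration of the model with the reset rule.
k1, k2, k3 = 0.04, 5.0, 140.0
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v_thr, I, dt = 30.0, 10.0, 0.1

v, u = -70.0, -20.0
spikes = 0
for _ in range(5000):
    dv = k1 * v**2 + k2 * v + k3 - u + I
    du = a * (b * v - u)
    v, u = v + dv * dt, u + du * dt
    if v >= v_thr:          # reset: v -> c, u -> u + d
        v, u = c, u + d
        spikes += 1
print(spikes > 0)  # True: this injected current makes the system spike
```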
Step14: Phase portrait
Now we can use
$$\left( v_0, u_0 \right) = \left( -70, -20 \right)$$ | Python Code:
def f(v):
return k[0] * (v**2) + k[1] * v + k[2]
Explanation: $$
f\left( v \right) = k_1 \cdot v^2 + k_2 \cdot v + k_3
$$
End of explanation
def Vt(v, u, I):
return f(v) - u + I
Explanation: $$
\dot{v} = f\left( v \right) - u + I
$$
End of explanation
def Ut(v, u):
return a * (b * v - u)
Explanation: $$
\dot{u} = a \cdot \left( b \cdot v - u \right)
$$
End of explanation
I_critical = ((k[1] - b)**2) / (4 * k[0]) - k[2]; I_critical
v_critical = (b - k[1]) / (2 * k[0]); v_critical
u_critical = b * v_critical; u_critical
Explanation: $$
v \approx v_{thr}
\Longrightarrow
\begin{cases}
v \rightarrow c \\
u \rightarrow u + d
\end{cases}
$$
Isoclines
$$
\begin{cases}
0 = \dot{v} = f\left( v \right) - u + I \\
0 = \dot{u} = a \cdot \left( b \cdot v - u \right)
\end{cases}
\Longrightarrow
\begin{cases}
k_1 \cdot v^2 + k_2 \cdot v + k_3 - u + I = 0 \\
a \cdot \left( b \cdot v - u \right) = 0
\end{cases}
\Longrightarrow
\begin{cases}
k_1 \cdot v^2 + v \cdot \left( k_2 - b \right) + k_3 + I = 0 \\
b \cdot v = u
\end{cases}
$$
Discriminant for the first equation
$$
D = \left( k_2 - b \right)^2 - 4 \cdot k_1 \cdot \left( k_3 + I \right)
$$
Thus
$$
\begin{cases}
v_{1, 2} = \frac{b - k_2 \pm \sqrt{\left( k_2 - b \right)^2 - 4 \cdot k_1 \cdot \left( k_3 + I \right)}}{2 \cdot k_1} \\
u_{1, 2} = v_{1, 2} \cdot b
\end{cases}
$$
Critical value for $I$ can be fetched from the equation
$$
\left( k_2 - b \right)^2 - 4 \cdot k_1 \cdot \left( k_3 + I \right) = 0
$$
Result is
$$
I_{cr} = \frac{\left( k_2 - b \right)^2}{4 \cdot k_1} - k_3
$$
End of explanation
lambdas = 0, b - a; lambdas
(lambdas[0] + lambdas[1]) / 2, lambdas[0] - (lambdas[0] + lambdas[1]) / 2
Explanation: Critical values
$$
\begin{cases}
v_{cr} = \frac{b - k_2}{2 \cdot k_1} \\
u_{cr} = b \cdot v_{cr}
\end{cases}
$$
Linearization
$$
\begin{cases}
\dot{v} = k_1 \cdot v^2 + k_2 \cdot v + k_3 - u + I \\
\dot{u} = a \cdot \left( b \cdot v - u \right)
\end{cases}
$$
Jacobian
$$
J =
\begin{bmatrix}
2 \cdot k_1 \cdot v + k_2 & -1 \\
a \cdot b & -a
\end{bmatrix}
$$
Linearized Jacobian
$$
J_l =
\begin{bmatrix}
b & -1 \\
a \cdot b & -a
\end{bmatrix}
$$
Characteristics polynomial
$$
\begin{vmatrix}
b - \lambda & -1 \\
a \cdot b & -a - \lambda
\end{vmatrix}
= \left( - b + \lambda \right) \cdot \left( a + \lambda \right) + a \cdot b
= - b \cdot a + \lambda \cdot a - b \cdot \lambda + \lambda \cdot \lambda + a \cdot b
= \lambda^2 + \lambda \cdot \left( a - b \right) = 0
$$
Finally
$$
\lambda \cdot \left( \lambda + a - b \right) = 0
$$
Solutions
$$
\begin{cases}
\lambda_1 = 0 \\
\lambda_2 = b - a
\end{cases}
$$
End of explanation
e_y = b - lambdas[0], b - lambdas[1]; e_y
e_x = 1
Explanation: $\lambda_{1, 2} = 0.06 \mp 0.06$ — eigenvalues are real and of opposite signs — we have a saddle point which is always unstable equilibrium
Eigenvectors
By definition eigenvectors $e_{1, 2}$ can be found from the equation
$$
\begin{bmatrix}
b & -1 \\
a \cdot b & -a
\end{bmatrix}
\cdot e
= e \cdot \lambda
$$
In scalar form
$$
\begin{bmatrix}
e_v \cdot b - e_u \\
e_v \cdot a \cdot b - e_u \cdot a
\end{bmatrix}
= \begin{bmatrix} e_v \\ e_u \end{bmatrix}
\cdot \lambda
$$
Move right part to left
$$
\begin{bmatrix}
e_v \cdot \left( b - \lambda \right) - e_u \\
e_v \cdot a \cdot b - e_u \cdot \left( a + \lambda \right)
\end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}
$$
We can easily find equations for $e_u$
$$
\begin{cases}
e_u = e_v \cdot \left( b - \lambda \right) \\
e_u = e_v \cdot \frac{a \cdot b}{a + \lambda}
\end{cases}
$$
As the rank of matrix is $1$ because determinant is $0$ by condition of eigenvalues,
we can have eigenvectors with $e_v = 1$
$$
e_{1, 2}
= \begin{bmatrix}
1 \\
b - \lambda_{1, 2}
\end{bmatrix}
$$
End of explanation
results = ((- b + lambdas[1]) * (a + lambdas[1]) + a*b,
(- b + lambdas[0]) * (a + lambdas[0]) + a*b)
assert np.allclose(results, (0., 0.))
results
Explanation: Check
Following numbers should be almost equal to zero (almost, because of machine precision)
End of explanation
np.linalg.eig([[b, -1], [a*b, -a]])
Explanation: Following calculations are done with NumPy library and contain real eigenvalues and normed eigenvectors
End of explanation
left = (np.array([[b, -1], [a*b, -a]]).dot([e_x, e_y[0]]),
np.array([[b, -1], [a*b, -a]]).dot([e_x, e_y[1]]))
left
Explanation: These are the values of the right part of the solved equation
$$
\begin{bmatrix}
b & -1 \\
a \cdot b & -a
\end{bmatrix}
\cdot e_{1, 2}
$$
End of explanation
right = (np.array([e_x, e_y[0]]) * lambdas[0],
np.array([e_x, e_y[1]]) * lambdas[1])
right
Explanation: These are the right ones
$$
e \cdot \lambda
$$
End of explanation
assert np.allclose(left[0], right[0]) and np.allclose(left[1], right[1])
Explanation: If they're almost equal, we're done
End of explanation
eigennorm = (e_x**2 + e_y[0]**2)**.5, (e_x**2 + e_y[1]**2)**.5; eigennorm
e_x_ = (1./eigennorm[0], 1./eigennorm[1])
e_y_ = (e_y[0] / eigennorm[0], e_y[1] / eigennorm[1])
e_x_, e_y_
(e_x_[0]**2 + e_y_[0]**2)**.5, (e_x_[1]**2 + e_y_[1]**2)**.5
(e_y_[0] + e_y_[1])/2, (e_y_[0] + e_y_[1])/2 - e_y_[0], (e_y_[0] + e_y_[1])/2 - e_y_[1]
(e_x_[0] + e_x_[1])/2, (e_x_[0] + e_x_[1])/2 - e_x_[0], (e_x_[0] + e_x_[1])/2 - e_x_[1]
Explanation: Eigenvectors
Let's normalize our eigenvectors
$$
e := \frac{e}{\left\| e \right\|}
$$
End of explanation
def get_curve(I, init=(v0, u0)):
vs = np.empty_like(TIME)
us = np.empty_like(TIME)
# vs[0], us[0] = v_critical, u_critical
vs[0], us[0] = init
count = 0
for i in range(len(TIME)- 1):
dv, du = Vt(vs[i], us[i], I), Ut(vs[i], us[i])
vs[i + 1] = vs[i] + dv * dt
us[i + 1] = us[i] + du * dt
# if vs[i] <= v_thr:
if vs[i] >= v_thr:
count += 1
vs[i + 1] = c
us[i + 1] = us[i] + d
return (vs, us, count)
def get_plot(I, init=(v0, u0)):
vs, us, count = get_curve(I, init)
fig = plt.figure(1)
fig.suptitle('$I = %4.2f,\; v_0 = %4.2f,\; u_0 = %4.2f$'%(I, init[0], init[1]), fontsize=24)
plt.subplot(221)
plt.plot(TIME, vs)
plt.subplot(222)
plt.plot(TIME, us)
plt.show()
plt.subplot(212)
plt.plot(vs, us)
plt.show()
Explanation: $e_{1, 2} = \begin{bmatrix}
0.997 \mp 0.002 \\
-0.04 \pm 0.06
\end{bmatrix}$
End of explanation
multiplier = 10.
delta_v = multiplier * e_x_[1]
delta_u_unstable = multiplier * e_y_[1]
delta_v, delta_u_unstable
get_plot(I_critical, (v_critical + delta_v, u_critical + delta_u_unstable))
get_plot(I_critical, (v_critical - delta_v, u_critical - delta_u_unstable))
Explanation: Separatrix
Let's take a look at plots with
$$\left( v_0, u_0 \right) = \left( v_{crit} \pm e_v, u_{crit} \pm e_u \right)$$
End of explanation
get_plot(5)
get_plot(I_critical)
get_plot(50)
delta_I = 100 - 0
PEAK_COUNT = 20
result = [(I_critical + delta_I/(PEAK_COUNT-i), get_curve(I_critical + delta_I/(PEAK_COUNT-i))[2]) for i in range(PEAK_COUNT)]; result
counts = [r[1]/T for r in result]
Is = [r[0] for r in result]
plt.plot(Is, counts)
plt.show()
Explanation: Phase portrait
Now we can use
$$\left( v_0, u_0 \right) = \left( -70, -20 \right)$$
End of explanation |
8,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model No. 2
Input
Data
Step1: Loading the word2vec model
Step2: Data preparation
Step3: Training the model
Step4: Results
Results on the test set (20% of the original data)
the target variables were not normalized! | Python Code:
reviews_test = pd.read_csv('data/reviews_test.csv', header=0, encoding='utf-8')
reviews_train = pd.read_csv('data/reviews_train.csv', header=0, encoding='utf-8')
X_train_raw = reviews_train.comment
y_train_raw = reviews_train.reting
X_test_raw = reviews_test.comment
y_test_raw = reviews_test.reting
Explanation: Model No. 2
Input
Data: the training set combined with reviews of internet providers scraped from moskvaonline.ru. Reviews are limited to 100 words
Representation: word2vec based on a trained gensim model, embedding dimensionality: 500
Model description: 4 parallel 1D convolutions with different window sizes over the word2vec component, with L2 regularization, max pooling after each convolution, then concatenation of the results and one fully connected layer with dropout, then a regression output.
Training parameters: optimizer: stochastic gradient descent; the learning rate schedule was: 2 epochs at 0.01, then ~5 epochs at 0.001, then ~5 epochs at 0.0001, then ~5 epochs at 0.00005
Target metric: MSE
Result (the target variables were not normalized!): MSE(train)=0.6199, MSE(test)=0.9198
Loading the training and test sets
End of explanation
DIR = 'data/w2v_models/'
MODEL_NAME = 'tenth.norm-sz500-w7-cb0-it5-min5.w2v'
VECTOR_SIZE = 500
SENTENCE_LENGTH = 100 #words
w2v_path = DIR + MODEL_NAME
sentence_processor = SentenceProcessor(w2v_path)
# words with very high frequency in comments
# garbage_list = ['я', 'большой', 'по', 'купить', 'этот', 'на', 'один', 'так', 'только', 'из', 'хороший', 'как', \
# 'отличный', 'что', 'это', 'и', 'за', 'у', 'в', 'если', 'с', 'очень', 'нет', 'же', 'он', 'при', \
# 'для', 'пользоваться', 'быть', 'а', 'просто', 'раз', 'работать', 'но', 'качество', 'к', 'весь',\
# 'можно', 'есть', 'цена', 'от', 'уже', 'такой', 'она', 'год', 'то']
sentence_processor.stop_list = []
Explanation: Loading the word2vec model
End of explanation
X_train = []
y_train = []
for i in tqdm(range(len(X_train_raw))):
sent = sentence_processor.process(X_train_raw[i])
matrix = sentence_processor.convert2matrix(sent, sample_len=SENTENCE_LENGTH)
if matrix.shape == (SENTENCE_LENGTH, VECTOR_SIZE):
X_train.append(matrix)
y_train.append(y_train_raw[i])
X_test = []
y_test = []
for i in tqdm(range(len(X_test_raw))):
sent = sentence_processor.process(X_test_raw[i])
matrix = sentence_processor.convert2matrix(sent, sample_len=SENTENCE_LENGTH)
if matrix.shape == (SENTENCE_LENGTH, VECTOR_SIZE):
X_test.append(matrix)
y_test.append(y_test_raw[i])
X_train = np.array(X_train, dtype=np.float32)
X_test = np.array(X_test, dtype=np.float32)
y_train = np.array(y_train, dtype=np.float32)
y_test = np.array(y_test, dtype=np.float32)
reviews_internet = pd.read_csv('data/internet_reviews.csv', header=0, encoding='utf-8')
X_reviews_internet = reviews_internet.comment
y_reviews_internet = reviews_internet.rating
X_reviews_internet_ = []
y_reviews_internet_ = []
for i in tqdm(range(len(X_reviews_internet))):
sent = sentence_processor.process(X_reviews_internet[i])
matrix = sentence_processor.convert2matrix(sent, sample_len=SENTENCE_LENGTH)
if matrix.shape == (SENTENCE_LENGTH, VECTOR_SIZE):
X_reviews_internet_.append(matrix)
y_reviews_internet_.append(y_reviews_internet[i])
X_reviews_internet_ = np.array(X_reviews_internet_, dtype=np.float32)
y_reviews_internet_ = np.array(y_reviews_internet_, dtype=np.float32)
X_train_final = np.concatenate((X_train, X_reviews_internet_), axis=0)
y_train_final = np.concatenate((y_train, y_reviews_internet_), axis=0)
Explanation: Data preparation: conversion into a matrix of shape "number of reviews" × "number of words per review (truncated or padded to 100)" × "word2vec dimensionality"
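The conversion itself is done by SentenceProcessor.convert2matrix, whose source is not shown here; one plausible sketch of producing the fixed 100 × 500 input (the function name and the padding behavior are assumptions for illustration) is:

```python
import numpy as np

def to_matrix(vectors, sample_len=100, dim=500):
    """Truncate or zero-pad an (n_words, dim) array to (sample_len, dim)."""
    out = np.zeros((sample_len, dim), dtype=np.float32)
    n = min(len(vectors), sample_len)
    if n:
        out[:n] = vectors[:n]
    return out

short = to_matrix(np.ones((7, 500)))    # short review, zero-padded
long = to_matrix(np.ones((250, 500)))   # long review, truncated
print(short.shape, long.shape)  # (100, 500) (100, 500)
```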
End of explanation
from keras.models import Sequential
import keras
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LocallyConnected1D, Conv1D, Dropout
from keras.layers import MaxPooling1D, GlobalMaxPooling1D
from keras.layers.recurrent import LSTM
from keras.preprocessing import sequence
from keras.optimizers import Adam, SGD
from keras.models import Model
from keras.layers.merge import concatenate
from keras import regularizers
from keras.layers import Input, Dense
input_1 = Input(shape=(100,500))
conv_1 = Conv1D(filters=256, kernel_size=3, activation='relu', kernel_regularizer=regularizers.l2(0.02))(input_1)
pool_1 = GlobalMaxPooling1D()(conv_1)
conv_2 = Conv1D(filters=256, kernel_size=5, activation='relu', kernel_regularizer=regularizers.l2(0.02))(input_1)
pool_2 = GlobalMaxPooling1D()(conv_2)
conv_3 = Conv1D(filters=512, kernel_size=7, activation='relu', kernel_regularizer=regularizers.l2(0.02))(input_1)
pool_3 = GlobalMaxPooling1D()(conv_3)
conv_4 = Conv1D(filters=512, kernel_size=9, activation='relu', kernel_regularizer=regularizers.l2(0.02))(input_1)
pool_4 = GlobalMaxPooling1D()(conv_4)
concat_1 = concatenate([pool_1, pool_2, pool_3, pool_4], axis=1)
dense_1 = Dense(300, activation='relu')(concat_1)
drop_1 = Dropout(0.5)(dense_1)
dense_4 = Dense(1, activation=None)(drop_1)
model = Model(inputs=input_1, outputs=dense_4)
model.summary()
sgd = SGD(lr=0.00005)
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mse'])
model.fit(X_train_final, y_train_final, batch_size=10, epochs=15, validation_data=(X_test, y_test), shuffle=True,
verbose=True)
model.save('trained_model_2(keras==2.0.8)')
import sklearn
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import median_absolute_error
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
def get_score(model, x, y, plot=True, sparse=50):
y_pred = model.predict(x)
y_pred = np.clip(y_pred, 1.0, 5.0)
mse = mean_squared_error(y, y_pred)
mae = mean_absolute_error(y, y_pred)
medae = median_absolute_error(y, y_pred)
r2 = r2_score(y, y_pred)
print ('{:.4} \nMSE: {:.4}\nMAE: {:.4}\nMedianAE: {:.4}\nR2 score: {:.4}'.format(model.name, mse, mae, medae, r2))
if plot:
plt.figure(figsize=(20,5))
plt.title(model.name)
plt.ylabel('Score')
plt.plot(y_pred[::sparse])
plt.plot(y[::sparse])
plt.legend(('y_pred', 'y_test'))
plt.show()
return {'mean squared error':mse, 'mean absolute error':mae, 'median absolute error':medae, 'r2 score':r2}
Explanation: Training the model
End of explanation
get_score(model, X_test, y_test, sparse=50)
Explanation: Results
Results on the test set (20% of the original data)
the target variables were not normalized!
End of explanation |
8,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence
Step1: Review
CSPs are a special kind of search problem. Here we don't treat the state space as a black box: the state has a particular form, and we use that to our advantage to tweak our algorithms to be better suited to the problems. A CSP state is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
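Before diving into the class, the bare idea can be sketched in a few lines of plain Python; the variables, domains, and neighbors below are made up for illustration:

```python
# Minimal sketch of the CSP idea: variables, domains, and a goal test
# that checks every pairwise constraint between assigned neighbors.
variables = ['A', 'B', 'C']
domains = {v: ['red', 'green'] for v in variables}
neighbors = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}

def satisfies_all(assignment):
    return all(assignment[x] != assignment[y]
               for x in assignment for y in neighbors[x] if y in assignment)

print(satisfies_all({'A': 'red', 'B': 'green', 'C': 'red'}))   # True
print(satisfies_all({'A': 'red', 'B': 'red'}))                 # False
```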
Step2: The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict whose keys specify the variables and whose values specify the domains. If the variables are passed as an empty list, they are extracted from the keys of the domain dictionary. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value the list of variables that are constrained along with it. The constraint parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassigns, which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class doc string. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of the map coloring problem is that adjacent nodes (those connected by edges) must not have the same color anywhere in the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes, we use a custom dict defined by the UniversalDict class. The UniversalDict class takes in a parameter which it returns as the value for all the keys of the dict. It is very similar to defaultdict in Python, except that it does not support item assignment.
Step3: For our CSP we also need to define a constraint function f(A, a, B, b). What we need here is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
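In essence the constraint just compares the two values; a sketch (the module's actual implementation may differ in details):

```python
def different_values_constraint(A, a, B, b):
    """Neighbors A and B satisfy the constraint when their colors differ."""
    return a != b

print(different_values_constraint('WA', 'red', 'NT', 'green'))  # True
print(different_values_constraint('WA', 'red', 'NT', 'red'))    # False
```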
Step4: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows taking input in the form of strings and returns a Dict of the form compatible with the CSP class.
Step5: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
Step6: NQueens
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit the particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
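That row-and-diagonals check boils down to three difference tests. A sketch of such a constraint, written to match the described behavior rather than copied from the module:

```python
def queen_constraint(A, a, B, b):
    """True iff queens in columns A, B with rows a, b do not attack each other."""
    return A == B or (a != b and A + a != B + b and A - a != B - b)

print(queen_constraint(0, 0, 1, 2))  # True  (different row and diagonals)
print(queen_constraint(0, 0, 2, 2))  # False (same down diagonal)
```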
Step7: The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the techniques for solving CSPs. Because min_conflicts hill climbs on the number of conflicts to solve the CSP, assign and unassign are modified to record conflicts. More details about the structures rows, downs, ups which help in recording conflicts are explained in the docstring.
Step8: The _ init _ method takes only one parameter, n, the size of the problem. To create an instance we just pass the required n into the constructor.
Step9: Helper Functions
We will now implement a few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing classes and functions for additional bookkeeping. To begin with, we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assignment_history. We call this new class InstruCSP. This would allow us to see how the assignment evolves over time.
Step10: Next, we define make_instru which takes an instance of CSP and returns an InstruCSP instance.
Step11: We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
Step12: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
Step13: Backtracking Search
For solving a CSP the main issue with naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
Step14: Let us also check the number of assignments made.
Step15: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
Step16: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function num_legal_values to sort out the variables by the number of legal values left in their domains. This function, in turn, calls the nconflicts method of the CSP to return such values.
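Stripped of the class machinery, the heuristic itself is tiny; the following stand-in (the name and the plain-dict domain representation are assumptions for illustration) picks the unassigned variable with the fewest remaining values:

```python
def mrv_sketch(assignment, variables, curr_domains):
    """Pick the unassigned variable with the fewest remaining legal values."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(curr_domains[v]))

domains = {'A': ['red'], 'B': ['red', 'green'], 'C': ['red', 'green', 'blue']}
print(mrv_sketch({}, ['A', 'B', 'C'], domains))  # 'A' (most constrained first)
```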
Step17: Another ordering related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. The idea behind selecting the mrv and lcv makes sense because we need to assign all variables eventually, but for values we might as well try the ones that are most likely to work. So for variables we face the hard ones first, while for values we try the easy ones first.
Step18: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in Section 6.3.2 of the book. In short, the idea of inference is to detect possible failure before it occurs and to look ahead so as not to make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can learn more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
Step19: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotlib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
Step20: The ipython widgets we will be using require the plots in the form of a step function, such that there is a graph corresponding to each value. We define make_update_step_function, which returns such a function. It takes as inputs the neighbors/graph along with an instance of the InstruCSP. This will be clearer with the example below. If this sounds confusing, do not worry: this is not part of the core material and our only goal is to help you visualize how the process works.
Step21: Finally let us plot our problem. We first use the function above to obtain a step function.
Step22: Next we set the canvas size.
Step23: Finally our plot using the ipywidget slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds, up to one second, for each time step.
Step24: NQueens Visualization
Just like the Graph Coloring Problem we will start with defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts which modifies the grid, placing a 3 at positions where there is a conflict.
Step25: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
Step26: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds of up to one second for each time step.
Step27: Now let us finally repeat the above steps for the min_conflicts solution.
Step28: The visualization has the same features as the one above, but here it also highlights the conflicts by labeling the conflicted queens with a red background.
from csp import *
Explanation: Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6, Constraint Satisfaction Problems, of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in the csp.py module. Even though this notebook includes a brief summary of the main topics, familiarity with the material present in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.
End of explanation
%psource CSP
Explanation: Review
CSPs are a special kind of search problem. Here we don't treat the state space as a black box; the state has a particular form, and we use that to our advantage to tweak our algorithms to be better suited to the problem. A CSP state is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
End of explanation
s = UniversalDict(['R','G','B'])
s[5]
Explanation: The _ init _ method parameters specify the CSP. Variable can be passed as a list of strings or integers. Domains are passed as dict where key specify the variables and value specify the domains. The variables are passed as an empty list. Variables are extracted from the keys of the domain dictionary. Neighbor is a dict of variables that essentially describes the constraint graph. Here each variable key has a list its value which are the variables that are constraint along with it. The constraint parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassings which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class doc string. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of the map coloring problem is that adjacent nodes (those connected by edges) should not have the same color anywhere in the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes, we use a custom dict defined by the UniversalDict class. The UniversalDict class takes in a parameter which it returns as the value for all keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.
End of explanation
%psource different_values_constraint
Explanation: For our CSP we also need to define a constraint function f(A, a, B, b). In this what we need is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
End of explanation
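As a minimal illustration of the f(A, a, B, b) signature (a sketch of the idea, not necessarily the module's exact source), the coloring constraint only needs to compare the two assigned values:

```python
def different_values_constraint_sketch(A, a, B, b):
    # Neighbors A and B satisfy the coloring constraint
    # exactly when their assigned colors differ.
    return a != b
```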
%pdoc parse_neighbors
Explanation: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows to take input in the form of strings and return a Dict of the form compatible with the CSP Class.
End of explanation
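To make the expected input format concrete, here is a rough re-implementation of the idea (assumed behavior for illustration, not the module's exact code): a string like "X: Y Z; Y: Z" should produce a symmetric adjacency dict.

```python
from collections import defaultdict

def parse_neighbors_sketch(spec):
    # "X: Y Z; Y: Z" -> {'X': ['Y', 'Z'], 'Y': ['X', 'Z'], 'Z': ['X', 'Y']}
    links = defaultdict(list)
    for part in spec.split(';'):
        head, _, tail = part.partition(':')
        head = head.strip()
        for other in tail.split():
            # record the edge in both directions
            links[head].append(other)
            links[other].append(head)
    return dict(links)
```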
%psource MapColoringCSP
australia, usa, france
Explanation: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
End of explanation
%psource queen_constraint
Explanation: NQueens
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit the particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
End of explanation
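The row/diagonal conditions above can be sketched directly, with the column index as the variable and the row index as the value (a paraphrase for illustration, not necessarily the module's source):

```python
def queen_constraint_sketch(A, a, B, b):
    # Columns A and B hold queens in rows a and b.  The pair is fine
    # when the variables are the same, or when the queens share no row,
    # no up-diagonal (A + a) and no down-diagonal (A - a).
    return A == B or (a != b and A + a != B + b and A - a != B - b)
```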
%psource NQueensCSP
Explanation: The NQueensCSP method implements methods that support solving the problem via min_conflicts which is one of the techniques for solving CSPs. Because min_conflicts hill climbs the number of conflicts to solve the CSP assign and unassign are modified to record conflicts. More details about the structures rows, downs, ups which help in recording conflicts are explained in the docstring.
End of explanation
eight_queens = NQueensCSP(8)
Explanation: The __init__ method takes only one parameter, n, the size of the problem. To create an instance we just pass the required n into the constructor.
End of explanation
import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assingment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assingment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assingment_history.append(copy.deepcopy(assignment))
Explanation: Helper Functions
We will now implement a few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing classes and functions for additional bookkeeping. To begin with, we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assingment_history. We call this new class InstruCSP. This allows us to see how the assignment evolves over time.
End of explanation
def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors,
csp.constraints)
Explanation: Next, we define make_instru which takes an instance of CSP and returns a InstruCSP instance.
End of explanation
neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
}
Explanation: We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
End of explanation
coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem)
Explanation: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
End of explanation
result = backtracking_search(coloring_problem1)
result # A dictionary of assignments.
Explanation: Backtracking Search
When solving a CSP, the main issue with naive search algorithms is that they can keep expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
End of explanation
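To make the idea concrete, here is a stripped-down sketch of backtracking (no heuristics or inference; the names are illustrative, not the module's API):

```python
def backtrack_sketch(assignment, variables, domains, consistent):
    # Assign one variable at a time, checking constraints as we go;
    # on failure, undo the assignment and try the next value.
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment):
            assignment[var] = val
            result = backtrack_sketch(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]  # backtrack
    return None
```

For a triangle graph with three colors this finds a proper coloring immediately; with only two colors it correctly returns None.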
coloring_problem1.nassigns
Explanation: Let us also check the number of assignments made.
End of explanation
len(coloring_problem1.assingment_history)
Explanation: Now let us check the total number of assignments and unassignments, which is the length of our assingment_history.
End of explanation
%psource mrv
%psource num_legal_values
%psource CSP.nconflicts
Explanation: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out to methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable, which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv, or most constrained variable, is that it allows us to encounter failure quickly, before going too deep into the tree after a wrong step. The mrv implementation makes use of another function, num_legal_values, to sort the variables by the number of legal values left in their domains. This function, in turn, calls the nconflicts method of the CSP to return such values.
End of explanation
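The heuristic can be demonstrated on plain domain dicts (a toy sketch; the real mrv also accounts for pruning via nconflicts):

```python
def mrv_demo(domains, assignment):
    # Most Restricted Variable: among the unassigned variables, pick
    # the one with the fewest remaining legal values.
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

mrv_demo({'A': ['R', 'G', 'B'], 'B': ['R'], 'C': ['R', 'G']}, {})  # -> 'B'
```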
%psource lcv
Explanation: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. Combining mrv and lcv makes sense because we need to assign all variables, but for values we might do better to try the ones that are likely to work. So for variables, we face the hard ones first.
End of explanation
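A toy version of the same ordering over plain dicts (illustrative only; the real lcv counts conflicts via csp.nconflicts):

```python
def lcv_demo(var, domains, neighbors):
    # Least Constraining Value: order var's values so that the value
    # ruling out the fewest choices in neighboring domains comes first.
    def ruled_out(val):
        return sum(val in domains[n] for n in neighbors[var])
    return sorted(domains[var], key=ruled_out)
```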
solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac )
solve_simple.nassigns
solve_parameters.nassigns
Explanation: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can know more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
End of explanation
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time
Explanation: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assingment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
End of explanation
def make_update_step_function(graph, instru_csp):
def draw_graph(graph):
# create networkx graph
G=nx.Graph(graph)
# draw graph
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assingment_history we want to visualize.
current = instru_csp.assingment_history[iteration]
# We convert the particular assingment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
colors = [current[node] for node in G.node.keys()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
labels = {label:label for label in G.node}
# Labels shifted by offset so as to not overlap nodes.
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# show graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes a slider as input and returns a
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
Explanation: The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which returns such a function. It takes as inputs the neighbors/graph along with an instance of the InstruCSP. This will become clearer with the example below. If this sounds confusing, do not worry: this is not part of the core material, and our only goal is to help you visualize how the process works.
End of explanation
step_func = make_update_step_function(neighbors, coloring_problem1)
Explanation: Finally let us plot our problem. We first use the function above to obtain a step function.
End of explanation
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
Explanation: Next we set the canvas size.
End of explanation
import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assingment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: Finally our plot using the ipywidget slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds of up to one second for each time step.
End of explanation
def label_queen_conflicts(assingment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assingment.items(): # check each queen for conflict
row_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row == row and temp_col != col}
up_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row+temp_col == row+col and temp_col != col}
down_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row-temp_col == row-col and temp_col != col}
# Now marking the grid.
for col, row in row_conflicts.items():
grid[col][row] = 3
for col, row in up_conflicts.items():
grid[col][row] = 3
for col, row in down_conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
'''ipywidgets interactive functions support a
single parameter as input. This function
creates and returns such a function by taking
the other parameters as input.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assingment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='Dejavu Sans', fontsize=32)
plt.show()
return plot_board_step
Explanation: NQueens Visualization
Just like the Graph Coloring Problem, we will start by defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts, which modifies the grid, placing a 3 in positions where there is a conflict.
End of explanation
twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
Explanation: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
End of explanation
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'Dejavu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assingment_history)-1, step=1, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds of up to one second for each time step.
End of explanation
conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
Explanation: Now let us finally repeat the above steps for the min_conflicts solution.
End of explanation
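For reference, the local-search idea behind min_conflicts can be sketched as follows (a simplification for illustration, not the actual implementation):

```python
import random

def min_conflicts_sketch(variables, domains, conflicts, max_steps=10000):
    # Start from a random complete assignment, then repeatedly pick a
    # conflicted variable and move it to its least-conflicted value.
    current = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, current[v], current) > 0]
        if not conflicted:
            return current
        var = random.choice(conflicted)
        current[var] = min(domains[var], key=lambda val: conflicts(var, val, current))
    return None
```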
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assingment_history)-1, step=0, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: The visualization has the same features as the one above, but here it also highlights the conflicts by labeling the conflicted queens with a red background.
End of explanation |
8,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step2: Then we need to create a function
Step3: In order to register a magic it has to have a few properties
Step4: This does produce quite a lot of values, let's filter it out
Step5: OK, we can see that it is registered. Now let's try to call it
Step6: And check out it's help message
Step7: Here you can see the results from the docstring being used to generate the help for the magic.
Now use the call magic
Step8: And set the arguments
Step9: And finally we can use the exposed function | Python Code:
#@title Only execute if you are connecting to a hosted kernel
!pip install picatrix
from picatrix.lib import framework
from picatrix.lib import utils
# This should not be included in the magic definition file, only used
# in this notebook since we are comparing all magic registration.
from picatrix import notebook_init
notebook_init.init()
Explanation: <a href="https://colab.research.google.com/github/google/picatrix/blob/main/notebooks/adding_magic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Adding A Magic
This notebook describes how to add a magic or register a function into the picatrix set of magics.
Import
The first thing to do is install the picatrix framework and then import the libraries
(only need to install if you are running a colab hosted kernel)
End of explanation
from typing import Optional
from typing import Text
@framework.picatrix_magic
def my_silly_magic(data: Text, magnitude: Optional[int] = 100) -> Text:
  """Return a silly string with no meaningful value.

  Args:
    data (str): This is a string that will be printed back.
    magnitude (int): A number that will be displayed in the string.

  Returns:
    A string that basically combines the two options.
  """
  return f'This magical magic produced {magnitude} magics of {data.strip()}'
Explanation: Then we need to create a function:
End of explanation
%picatrixmagics
Explanation: In order to register a magic it has to have a few properties:
Be a regular Python function that accepts parameters (optional if it returns a value)
The first argument it must accept is data (this is due to how magics work). If you don't need an argument, set the default value of data to an empty string.
Use typing to denote the type of the argument values.
The function must include a docstring, where the first line describes the function.
The docstring also must have an argument section, where each argument is further described (this is used to generate the helpstring for the magic/function).
If the function returns a value it must define a Returns section.
Once these requirements are fulfilled, a simple decorator is all that is required to register the magic and make sure it is available.
Test the Magic
Now once the magic has been registered we can first test to see if it is registered:
End of explanation
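To see why the typed signature and docstring matter, here is an illustrative sketch of the mechanism a decorator like this can use (hypothetical, not picatrix's actual internals): the signature is turned into an argparse parser, and the first docstring line becomes the help text. Plain annotations are assumed; Optional[...] types would need unwrapping first.

```python
import argparse
import inspect

def magic_like(func):
    # Build an argparse parser from the function's signature: the first
    # parameter becomes a positional argument, the rest become --options
    # typed via their annotations, with help from the first docstring line.
    sig = inspect.signature(func)
    doc = (func.__doc__ or '').strip().splitlines()
    parser = argparse.ArgumentParser(prog=func.__name__,
                                     description=doc[0] if doc else '')
    params = list(sig.parameters.values())
    parser.add_argument(params[0].name)  # the mandatory 'data' argument
    for p in params[1:]:
        parser.add_argument(f'--{p.name}', type=p.annotation, default=p.default)
    def run(line):
        args = parser.parse_args(line.split())
        return func(**vars(args))
    return run
```

For example, decorating `def greet(data: str, magnitude: int = 2)` this way lets you call `greet('hi --magnitude 3')`.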
magics = %picatrixmagics
magics[magics.name.str.contains('silly_magic')]
Explanation: This does produce quite a lot of values, let's filter it out:
End of explanation
%my_silly_magic foobar
Explanation: OK, we can see that it is registered. Now let's try to call it:
End of explanation
%my_silly_magic --help
Explanation: And check out it's help message:
End of explanation
%%my_silly_magic
this is some text
and some more text
and yet even more
Explanation: Here you can see the results from the docstring being used to generate the help for the magic.
Now use the call magic:
End of explanation
%%my_silly_magic --magnitude 234 store_here
and here is the text
store_here
Explanation: And set the arguments:
End of explanation
my_silly_magic_func?
my_silly_magic_func('some random string', magnitude=234)
Explanation: And finally we can use the exposed function:
End of explanation |
8,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reusable workflows
Nipype doesn't just allow you to create your own workflows. It also already comes with predefined workflows, developed by the community, for the community. For a full list of all workflows, look under the Workflows section of the main homepage.
But to give you a short overview, there are workflows about
Step1: Once a workflow is created, we need to make sure that the mandatory inputs are specified. To see which inputs we have to define, we can use the command
Step2: Now, we're ready to finish up our smooth workflow.
Step3: Before we run it, let's visualize the graph
Step4: And we're ready to go
Step5: Once it's finished, we can look at the results
Step6: How to change node parameters from existing workflows
What if we want to change certain parameters of a loaded or already existing workflow? Let's first get the names of all the nodes in the workflow
Step7: Ok. Hmm, what if we want to change the 'median' node, from 50% to 99%? For this, we first need to get the node.
Step8: Now that we have the node, we can change its value as we want
Step9: And we can run the workflow again...
Step10: And now the output is | Python Code:
from nipype.workflows.fmri.fsl.preprocess import create_susan_smooth
smoothwf = create_susan_smooth()
Explanation: Reusable workflows
Nipype doesn't just allow you to create your own workflows. It also already comes with predefined workflows, developed by the community, for the community. For a full list of all workflows, look under the Workflows section of the main homepage.
But to give you a short overview, there are workflows about:
Functional MRI workflows:
- from fsl about resting state, fixed_effects, modelfit, featreg, susan_smooth and many more
- from spm about DARTEL and VBM
Structural MRI workflows
- from ants about ANTSBuildTemplate and antsRegistrationBuildTemplate
- from freesurfer about bem, recon and tessellation
Diffusion workflows:
- from camino about connectivity_mapping, diffusion and group_connectivity
- from dipy about denoise
- from fsl about artifacts, dti, epi, tbss and many more
- from mrtrix about connectivity_mapping, diffusion and group_connectivity
How to load a workflow from Nipype
Let's consider the example of a functional MRI workflow, that uses FSL's Susan algorithm to smooth some data. To load such a workflow, we only need the following command:
End of explanation
!fslmaths /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz \
-Tmean -thrP 50 /output/sub-01_ses-test_task-fingerfootlips_mask.nii.gz
Explanation: Once a workflow is created, we need to make sure that the mandatory inputs are specified. To see which inputs we have to define, we can use the command:
create_susan_smooth?
Which gives us the output:
```
Create a SUSAN smoothing workflow
Parameters
Inputs:
inputnode.in_files : functional runs (filename or list of filenames)
inputnode.fwhm : fwhm for smoothing with SUSAN
inputnode.mask_file : mask used for estimating SUSAN thresholds (but not for smoothing)
Outputs:
outputnode.smoothed_files : functional runs (filename or list of filenames)
```
As we can see, we also need a mask file. For the sake of convenience, let's take the temporal mean of a functional image and threshold it at the 50th percentile:
End of explanation
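For intuition, the same operation can be sketched in plain numpy on an in-memory 4D array (a rough analogue only; fslmaths' -thrP actually thresholds on the robust range of nonzero voxels):

```python
import numpy as np

def mean_percentile_mask(data4d, pct=50):
    # Temporal mean over the last axis, then binarize at the given
    # percentile of the nonzero mean intensities.
    mean_img = data4d.mean(axis=-1)
    thresh = np.percentile(mean_img[mean_img > 0], pct)
    return (mean_img > thresh).astype(np.uint8)
```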
smoothwf.inputs.inputnode.in_files = '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'
smoothwf.inputs.inputnode.mask_file = '/output/sub-01_ses-test_task-fingerfootlips_mask.nii.gz'
smoothwf.inputs.inputnode.fwhm = 4
smoothwf.base_dir = '/output'
Explanation: Now, we're ready to finish up our smooth workflow.
End of explanation
%pylab inline
from IPython.display import Image
smoothwf.write_graph(graph2use='colored', format='png', simple_form=True)
Image(filename='/output/susan_smooth/graph.dot.png')
Explanation: Before we run it, let's visualize the graph:
End of explanation
smoothwf.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: And we're ready to go:
End of explanation
!fslmaths /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz -Tmean fmean.nii.gz
!fslmaths /output/susan_smooth/smooth/mapflow/_smooth0/sub-01_ses-test_task-fingerfootlips_bold_smooth.nii.gz \
-Tmean smean.nii.gz
from nilearn import image, plotting
plotting.plot_epi(
'fmean.nii.gz', title="mean (no smoothing)", display_mode='z',
cmap='gray', cut_coords=(-45, -30, -15, 0, 15))
plotting.plot_epi(
'smean.nii.gz', title="mean (susan smoothed)", display_mode='z',
cmap='gray', cut_coords=(-45, -30, -15, 0, 15))
Explanation: Once it's finished, we can look at the results:
End of explanation
print(smoothwf.list_node_names())
Explanation: How to change node parameters from existing workflows
What if we want to change certain parameters of a loaded or already existing workflow? Let's first get the names of all the nodes in the workflow:
End of explanation
median = smoothwf.get_node('median')
Explanation: Ok. Hmm, what if we want to change the 'median' node, from 50% to 99%? For this, we first need to get the node.
End of explanation
median.inputs.op_string = '-k %s -p 99'
Explanation: Now that we have the node, we can change it's value as we want:
End of explanation
smoothwf.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: And we can run the workflow again...
End of explanation
!fslmaths /output/susan_smooth/smooth/mapflow/_smooth0/sub-01_ses-test_task-fingerfootlips_bold_smooth.nii.gz \
-Tmean mmean.nii.gz
from nilearn import image, plotting
plotting.plot_epi(
'smean.nii.gz', title="mean (susan smooth)", display_mode='z',
cmap='gray', cut_coords=(-45, -30, -15, 0, 15))
plotting.plot_epi(
'mmean.nii.gz', title="mean (smoothed, median=99%)", display_mode='z',
cmap='gray', cut_coords=(-45, -30, -15, 0, 15))
Explanation: And now the output is:
End of explanation |
8,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network Fundamentals
Unlike classification models such as the perceptron, support vector machine, and logistic regression, the neural network model is an adaptive basis function model whose basis functions can themselves change according to user parameters; structurally it is a stack of perceptrons, so it is also called an MLP (multi-layer perceptron).
Perceptron Review
Assume a simple perceptron model whose input vector is three-dimensional, as in the figure below.
Step1: Input $x$
$$ x_1,x_2,x_3 $$
Weights $w$
$$ w_1, w_2, w_3 $$
Activation including the bias term $b$
$$ a = \sum_{j=1}^3 w_j x_j + b $$
Nonlinear activation function $h$
$$ z = h(a) = h \left( \sum_{j=1}^3 w_j x_j + b \right) $$
Output $y$
$$
y =
\begin{cases}
0 & \text{if } z \leq 0, \
1 & \text{if } z > 0
\end{cases}
$$
If a basis function $\phi(x)$ applied to $x$ is used in place of $x$ in such a perceptron, nonlinear problems such as XOR can be solved. However, a fixed basis function must be used, so a basis function suited to the problem has to be found, which is a drawback.
If the shape of the basis function $\phi(x)$ can be adjusted with additional parameters $w^{(1)}$, $b^{(1)}$, that is, if the basis function $\phi(x;w^{(1)}, b^{(1)})$ is used, then a variety of basis functions can be tried simply by changing the value of $w^{(1)}$.
$$ z = h \left( \sum_{j=1} w_j^{(2)} \phi_j(x ; w^{(1)}_j, b^{(1)}_j) + b^{(2)} \right) $$
A neural network is a model that uses adaptive basis functions of the same form as the original perceptron, as follows.
$$ \phi_j(x ; w^{(1)}_j, b^{(1)}_j) = h \left( \sum_{i=1} w_{ji}^{(1)} x_i + b_j^{(1)} \right) $$
That is, the full model is:
$$ z = h \left( \sum_{j=1} w_j^{(2)} h \left( \sum_{i=1} w_{ji}^{(1)} x_i + b_j^{(1)} \right) + b^{(2)} \right) $$
The activation function $h$ is typically the following sigmoid function $\sigma$:
$$
\begin{eqnarray}
z = \sigma(a) \equiv \frac{1}{1+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{1}{1+\exp(-\sum_j w_j x_j-b)}
\end{eqnarray}
$$
A notable property of the sigmoid function is its derivative:
$$ \sigma' = \sigma(1-\sigma) $$
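This identity can be checked numerically. The following is a small illustrative sketch (numpy and the grid of test points are my own choices, not part of the text):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

a = np.linspace(-5, 5, 11)
s = sigmoid(a)

# central-difference numerical derivative vs. the identity s * (1 - s)
eps = 1e-6
numeric = (sigmoid(a + eps) - sigmoid(a - eps)) / (2 * eps)
analytic = s * (1 - s)

print(np.allclose(numeric, analytic, atol=1e-8))  # True
```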
Solving the XOR Problem with Perceptrons
Connecting perceptrons in sequence to solve nonlinear problems is a method that was already used in digital circuit design.
By adjusting a perceptron's weights appropriately, digital gates such as AND / OR can be built.
For example, a perceptron with $w_1 = -2$, $w_2 = -2$, $b = 3$ implements a NAND gate.
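As a quick sketch (a toy implementation of my own, not from the text), a perceptron with these weights reproduces the NAND truth table:

```python
def perceptron(x1, x2, w1=-2, w2=-2, b=3):
    # output 1 if the activation w1*x1 + w2*x2 + b is positive, else 0
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron(x1, x2))
# 0 0 1
# 0 1 1
# 1 0 1
# 1 1 0
```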
Step2: <table style="display
Step3: Feedforward propagation
The computation process of a neural network resembles the way signals are transmitted in a biological neural network, so it is called feedforward propagation.
The output $z^l_j$ of the $j$-th neuron in the $l$-th layer is defined as follows:
$$
\begin{eqnarray}
z^{l}_j = h \left( \sum_k w^{l}_{jk} z^{l-1}_k + b^l_j \right) = h \left( w^{l}_{j} \cdot z^{l-1} + b^l_j \right)
\end{eqnarray}
$$
The output of the entire $l$-th layer can be written as:
$$
\begin{eqnarray}
z^{l} = h \left( \sum_k w^{l}_{k} z^{l-1}_k + b^l \right) = h \left( w^{l} \cdot z^{l-1} + b^l \right)
\end{eqnarray}
$$
$$
a^l \equiv w^l \cdot z^{l-1}+b^l
$$
$$
\begin{eqnarray}
z^{l} = h \left( a^l \right)
\end{eqnarray}
$$
An example of feedforward propagation is shown below.
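The same forward computation can be sketched in numpy. The layer sizes here follow the diagrams (2 inputs, two hidden layers of 3 neurons, 2 outputs), and the weight values are random placeholders of my own choosing:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5])

# weight matrices and biases: 2 inputs -> 3 -> 3 -> 2 outputs
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)
W3, b3 = rng.normal(size=(2, 3)), np.zeros(2)

z1 = sigmoid(W1 @ x + b1)    # z^1 = h(a^1)
z2 = sigmoid(W2 @ z1 + b2)   # z^2 = h(a^2)
y = sigmoid(W3 @ z2 + b3)    # y = z^3 = h(a^3)
print(y.shape)  # (2,)
```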
Step4: $$ z^{1} = h \left( w^{1} \cdot x + b^1 \right) = h \left( a^1 \right)$$
Step5: $$ z^{2} = h \left( w^{2} \cdot z^{1} + b^2 \right) = h \left( a^2 \right)$$
Step6: $$ y = z^{3} = h \left( w^{3} \cdot z^{2} + b^3 \right) = h \left( a^3 \right)$$
The Cost Function
Because a neural network must output real-valued conditional probabilities, it uses a sum-of-squares cost function, unlike the perceptron.
$$
\begin{eqnarray} C(w,b) =
\frac{1}{2n} \sum_i \| y_i - \hat{y}(x_i; w, b)\|^2 = \frac{1}{2n} \sum_i \| y_i - z_i \|^2
\end{eqnarray}
$$
Weight Optimization
To find the optimal weights that minimize the cost function, the steepest gradient descent method based on the gradient is applied as follows.
$$
\begin{eqnarray}
\Delta w = -\eta \nabla C,
\end{eqnarray}
$$
Here $\eta$ is the learning rate.
$$
\begin{eqnarray}
\nabla C \equiv \left(\frac{\partial C}{\partial w_1}, \ldots,
\frac{\partial C}{\partial w_m}\right)^T
\end{eqnarray}
$$
The weight update rule is:
$$
\begin{eqnarray}
w_k & \rightarrow & w_k' = w_k-\eta \frac{\partial C}{\partial w_k} \
b_l & \rightarrow & b_l' = b_l-\eta \frac{\partial C}{\partial b_l}
\end{eqnarray}
$$
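As a minimal illustration of this update rule, here is a toy one-dimensional cost of my own choosing (not from the text):

```python
def C(w):
    return 0.5 * (w - 3.0) ** 2   # toy cost with its minimum at w = 3

def grad_C(w):
    return w - 3.0

w, eta = 0.0, 0.1                 # initial weight and learning rate
for _ in range(100):
    w = w - eta * grad_C(w)       # w <- w - eta * dC/dw

print(w)  # converges toward 3.0
```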
Stochastic Gradient Descent
In practice, SGD (Stochastic Gradient Descent) is used more often than plain steepest gradient descent; SGD computes the gradient from only a subset of $m$ data samples.
Step7: $$
\delta^3_j = z_j - y_j
$$
Step8: $$ \frac{\partial C}{\partial w^3_{jk}} = z^2_k \delta^3_j $$
$$ \delta^2 = h'(a^2) \odot ((w^{3})^T \delta^{3}) $$
Step9: $$ \frac{\partial C}{\partial w^2_{jk}} = z^1_k \delta^2_j $$
$$ \delta^1 = h'(a^1) \odot ((w^{2})^T \delta^{2}) $$ | Python Code:
%%tikz
\tikzstyle{neuron}=[circle, draw, minimum size=23pt,inner sep=0pt]
\tikzstyle{bias}=[text centered]
\node[neuron] (node) at (2,0) {$z$};
\node[neuron] (x1) at (0, 1) {$x_1$};
\node[neuron] (x2) at (0, 0) {$x_2$};
\node[neuron] (x3) at (0,-1) {$x_3$};
\node[neuron] (b) at (0,-2) {$1$};
\node[neuron] (output) at (4,0) {$y$};
\draw[->] (x1) -- node[above] {$w_1$} (node);
\draw[->] (x2) -- node[above] {$w_2$} (node);
\draw[->] (x3) -- node[above] {$w_3$} (node);
\draw[->] (b) -- node[above] {$b$} (node);
\draw[->] (node) -- (output);
Explanation: Neural Network Fundamentals
Unlike classification models such as the perceptron, support vector machine, and logistic regression, the neural network model is an adaptive basis function model whose basis functions can themselves change according to user parameters; structurally it is a stack of perceptrons, so it is also called an MLP (multi-layer perceptron).
Perceptron Review
Assume a simple perceptron model whose input vector is three-dimensional, as in the figure below.
End of explanation
%%tikz
\tikzstyle{neuron}=[circle, draw, minimum size=23pt,inner sep=0pt, node distance=2cm]
\node[neuron] (node) {$z$};
\node[neuron] (x2) [left of=node] {$x_2$};
\node[neuron] (x1) [above of=x2] {$x_1$};
\node[neuron] (b) [below of=x2] {$1$};
\node[neuron] (output) [right of=node] {$y$};
\draw[->] (x1) -- node[above=0.1] {$-2$} (node);
\draw[->] (x2) -- node[above] {$-2$} (node);
\draw[->] (b) -- node[above=0.1] {$3$} (node);
\draw[->] (node) -- (output);
Explanation: Input $x$
$$ x_1,x_2,x_3 $$
Weights $w$
$$ w_1, w_2, w_3 $$
Activation including the bias term $b$
$$ a = \sum_{j=1}^3 w_j x_j + b $$
Nonlinear activation function $h$
$$ z = h(a) = h \left( \sum_{j=1}^3 w_j x_j + b \right) $$
Output $y$
$$
y =
\begin{cases}
0 & \text{if } z \leq 0, \
1 & \text{if } z > 0
\end{cases}
$$
If a basis function $\phi(x)$ applied to $x$ is used in place of $x$ in such a perceptron, nonlinear problems such as XOR can be solved. However, a fixed basis function must be used, so a basis function suited to the problem has to be found, which is a drawback.
If the shape of the basis function $\phi(x)$ can be adjusted with additional parameters $w^{(1)}$, $b^{(1)}$, that is, if the basis function $\phi(x;w^{(1)}, b^{(1)})$ is used, then a variety of basis functions can be tried simply by changing the value of $w^{(1)}$.
$$ z = h \left( \sum_{j=1} w_j^{(2)} \phi_j(x ; w^{(1)}_j, b^{(1)}_j) + b^{(2)} \right) $$
A neural network is a model that uses adaptive basis functions of the same form as the original perceptron, as follows.
$$ \phi_j(x ; w^{(1)}_j, b^{(1)}_j) = h \left( \sum_{i=1} w_{ji}^{(1)} x_i + b_j^{(1)} \right) $$
That is, the full model is:
$$ z = h \left( \sum_{j=1} w_j^{(2)} h \left( \sum_{i=1} w_{ji}^{(1)} x_i + b_j^{(1)} \right) + b^{(2)} \right) $$
The activation function $h$ is typically the following sigmoid function $\sigma$:
$$
\begin{eqnarray}
z = \sigma(a) \equiv \frac{1}{1+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{1}{1+\exp(-\sum_j w_j x_j-b)}
\end{eqnarray}
$$
A notable property of the sigmoid function is its derivative:
$$ \sigma' = \sigma(1-\sigma) $$
Solving the XOR Problem with Perceptrons
Connecting perceptrons in sequence to solve nonlinear problems is a method that was already used in digital circuit design.
By adjusting a perceptron's weights appropriately, digital gates such as AND / OR can be built.
For example, a perceptron with $w_1 = -2$, $w_2 = -2$, $b = 3$ implements a NAND gate.
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm]
\node () at (0, 4.5) {$l-1$th layer};
\node () at (4, 4.5) {$l$th layer};
\node[neuron] (i1) at (0, 2) {l-1, k};
\node[neuron] (i2) at (0, -2) {l-1, k+1};
\node[neuron] (h11) at (4, 3) {l, j-1};
\node[neuron] (h12) at (4, 0) {l, j};
\node[neuron] (h13) at (4, -3) {l, j+1};
\draw[->] (i1) -- (h11);
\draw[->] (i2) -- (h11);
\draw[->, line width=0.9mm] (i1) -- node[above=0.2] {$w^{l}_{j,k}$ } (h12);
\draw[->] (i2) -- (h12);
\draw[->] (i1) -- (h13);
\draw[->] (i2) -- (h13);
Explanation: <table style="display: inline-table; margin-right: 30pt;">
<tbody><tr style="background:#def; text-align:center;">
<td colspan="2" style="text-align:center;"><b>INPUT</b></td>
<td colspan="3" style="text-align:center;"><b>OUTPUT</b></td>
</tr>
<tr style="background:#def; text-align:center;">
<td>A</td>
<td>B</td>
<td>A AND B</td>
<td>A NAND B</td>
<td>A XOR B</td>
</tr>
<tr style="background:#dfd; text-align:center;">
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr style="background:#dfd; text-align:center;">
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr style="background:#dfd; text-align:center;">
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr style="background:#dfd; text-align:center;">
<td>1</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody></table>
$x_1 = 0$, $x_2 = 0$
$ (−2)\times 0+(−2)\times 0+3=3 > 0 \rightarrow 1$
$x_1 = 0$, $x_2 = 1$
$ (−2)\times 0+(−2)\times 1+3=1 > 0 \rightarrow 1$
$x_1 = 1$, $x_2 = 0$
$ (−2)\times 1+(−2)\times 0+3=1 > 0 \rightarrow 1$
$x_1 = 1$, $x_2 = 1$
$ (−2)\times 1+(−2)\times 1+3=-1 < 0 \rightarrow 0$
In digital circuits, any digital logic can be implemented by combining multiple NAND gates. For example, the following circuit is a half adder, which returns the sum and the carry digit of two input signals.
<img src="https://datascienceschool.net/upfiles/3002b65c9f034818a318ad7f6b09671f.png">
Looking at this combination of perceptrons, we can see that XOR logic was implemented by connecting four perceptrons.
Multi-Layer Perceptrons (MLP)
A neural network consists of several connected perceptrons and is also called a multi-layer perceptron (MLP). A perceptron belonging to a neural network is called a neuron or node.
Each layer acts as an adaptive basis function for the next layer. The first layer is called the input layer, the last layer the output layer, and the layers in between are hidden layers.
<img src="https://datascienceschool.net/upfiles/4dcef7b75de64023900c7f7edb7cbb2f.png">
Another feature of the MLP is that it can also solve multi-class problems by having multiple output neurons in the output layer, designed so that each neuron's value returns the conditional probability of an output class.
The following is an example of an MLP that takes image data of handwritten digits and outputs conditional probabilities for the digits 0 through 9. If the input image has 28 x 28 resolution, the input layer has $28 \times 28 = 784$ neurons. The output layer has $10$ neurons returning the conditional probabilities for the digits 0 through 9.
The model in the figure has $1$ hidden layer with $15$ neurons.
<img src="https://datascienceschool.net/upfiles/90f2752671424cef846839b89ddcf6aa.png">
Neural Network Weight Notation
A neural network weight is written as $w^{l}_{j,k}$. It denotes the weight connecting the $k$-th neuron of layer $l-1$ to the $j$-th neuron of layer $l$. Note the order of the subscripts.
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm]
\node[neuron, fill=gray!10] (i1) at (0, 2) {$x_1$};
\node[neuron, fill=gray!10] (i2) at (0, -2) {$x_2$};
\node[neuron] (h11) at (4, 3) {hidden 11};
\node[neuron] (h12) at (4, 0) {hidden 12};
\node[neuron] (h13) at (4, -3) {hidden 13};
\draw[->] (i1) -- (h11);
\draw[->] (i2) -- (h11);
\draw[->] (i1) -- (h12);
\draw[->] (i2) -- (h12);
\draw[->] (i1) -- (h13);
\draw[->] (i2) -- (h13);
\node[neuron] (h21) at (8, 3) {hidden 21};
\node[neuron] (h22) at (8, 0) {hidden 22};
\node[neuron] (h23) at (8, -3) {hidden 23};
\draw[->] (h11) -- (h21);
\draw[->] (h11) -- (h22);
\draw[->] (h11) -- (h23);
\draw[->] (h12) -- (h21);
\draw[->] (h12) -- (h22);
\draw[->] (h12) -- (h23);
\draw[->] (h13) -- (h21);
\draw[->] (h13) -- (h22);
\draw[->] (h13) -- (h23);
\node[neuron] (o1) at (12, 2) {output 1};
\node[neuron] (o2) at (12, -2) {output 2};
\draw[->] (h21) -- (o1);
\draw[->] (h21) -- (o2);
\draw[->] (h22) -- (o1);
\draw[->] (h22) -- (o2);
\draw[->] (h23) -- (o1);
\draw[->] (h23) -- (o2);
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm]
\node[neuron, fill=gray!10] (i1) at (0, 2) {$x_1$};
\node[neuron, fill=gray!10] (i2) at (0, -2) {$x_2$};
\node[neuron, fill=gray!10] (h11) at (4, 3) {$a^1_1$, $z^1_1$};
\node[neuron, fill=gray!10] (h12) at (4, 0) {$a^1_2$, $z^1_2$};
\node[neuron, fill=gray!10] (h13) at (4, -3) {$a^1_3$, $z^1_3$};
\draw[->, line width=1mm] (i1) -- (h11);
\draw[->, line width=1mm] (i2) -- (h11);
\draw[->, line width=1mm] (i1) -- (h12);
\draw[->, line width=1mm] (i2) -- (h12);
\draw[->, line width=1mm] (i1) -- (h13);
\draw[->, line width=1mm] (i2) -- (h13);
\node[neuron] (h21) at (8, 3) {hidden 21};
\node[neuron] (h22) at (8, 0) {hidden 22};
\node[neuron] (h23) at (8, -3) {hidden 23};
\draw[->] (h11) -- (h21);
\draw[->] (h11) -- (h22);
\draw[->] (h11) -- (h23);
\draw[->] (h12) -- (h21);
\draw[->] (h12) -- (h22);
\draw[->] (h12) -- (h23);
\draw[->] (h13) -- (h21);
\draw[->] (h13) -- (h22);
\draw[->] (h13) -- (h23);
\node[neuron] (o1) at (12, 2) {output 1};
\node[neuron] (o2) at (12, -2) {output 2};
\draw[->] (h21) -- (o1);
\draw[->] (h21) -- (o2);
\draw[->] (h22) -- (o1);
\draw[->] (h22) -- (o2);
\draw[->] (h23) -- (o1);
\draw[->] (h23) -- (o2);
Explanation: Feedforward propagation
The computation process of a neural network resembles the way signals are transmitted in a biological neural network, so it is called feedforward propagation.
The output $z^l_j$ of the $j$-th neuron in the $l$-th layer is defined as follows:
$$
\begin{eqnarray}
z^{l}_j = h \left( \sum_k w^{l}_{jk} z^{l-1}_k + b^l_j \right) = h \left( w^{l}_{j} \cdot z^{l-1} + b^l_j \right)
\end{eqnarray}
$$
The output of the entire $l$-th layer can be written as:
$$
\begin{eqnarray}
z^{l} = h \left( \sum_k w^{l}_{k} z^{l-1}_k + b^l \right) = h \left( w^{l} \cdot z^{l-1} + b^l \right)
\end{eqnarray}
$$
$$
a^l \equiv w^l \cdot z^{l-1}+b^l
$$
$$
\begin{eqnarray}
z^{l} = h \left( a^l \right)
\end{eqnarray}
$$
An example of feedforward propagation is shown below.
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm]
\node[neuron, fill=gray!10] (i1) at (0, 2) {$x_1$};
\node[neuron, fill=gray!10] (i2) at (0, -2) {$x_2$};
\node[neuron, fill=gray!10] (h11) at (4, 3) {$a^1_1$, $z^1_1$};
\node[neuron, fill=gray!10] (h12) at (4, 0) {$a^1_2$, $z^1_2$};
\node[neuron, fill=gray!10] (h13) at (4, -3) {$a^1_3$, $z^1_3$};
\draw[-] (i1) -- (h11);
\draw[-] (i2) -- (h11);
\draw[-] (i1) -- (h12);
\draw[-] (i2) -- (h12);
\draw[-] (i1) -- (h13);
\draw[-] (i2) -- (h13);
\node[neuron, fill=gray!10] (h21) at (8, 3) {$a^2_1$, $z^2_1$};
\node[neuron, fill=gray!10] (h22) at (8, 0) {$a^2_2$, $z^2_2$};
\node[neuron, fill=gray!10] (h23) at (8, -3) {$a^2_3$, $z^2_3$};
\draw[->, line width=1mm] (h11) -- (h21);
\draw[->, line width=1mm] (h11) -- (h22);
\draw[->, line width=1mm] (h11) -- (h23);
\draw[->, line width=1mm] (h12) -- (h21);
\draw[->, line width=1mm] (h12) -- (h22);
\draw[->, line width=1mm] (h12) -- (h23);
\draw[->, line width=1mm] (h13) -- (h21);
\draw[->, line width=1mm] (h13) -- (h22);
\draw[->, line width=1mm] (h13) -- (h23);
\node[neuron] (o1) at (12, 2) {output 1};
\node[neuron] (o2) at (12, -2) {output 2};
\draw[-] (h21) -- (o1);
\draw[-] (h21) -- (o2);
\draw[-] (h22) -- (o1);
\draw[-] (h22) -- (o2);
\draw[-] (h23) -- (o1);
\draw[-] (h23) -- (o2);
Explanation: $$ z^{1} = h \left( w^{1} \cdot x + b^1 \right) = h \left( a^1 \right)$$
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm]
\node[neuron, fill=gray!10] (i1) at (0, 2) {$x_1$};
\node[neuron, fill=gray!10] (i2) at (0, -2) {$x_2$};
\node[neuron, fill=gray!10] (h11) at (4, 3) {$a^1_1$, $z^1_1$};
\node[neuron, fill=gray!10] (h12) at (4, 0) {$a^1_2$, $z^1_2$};
\node[neuron, fill=gray!10] (h13) at (4, -3) {$a^1_3$, $z^1_3$};
\draw[-] (i1) -- (h11);
\draw[-] (i2) -- (h11);
\draw[-] (i1) -- (h12);
\draw[-] (i2) -- (h12);
\draw[-] (i1) -- (h13);
\draw[-] (i2) -- (h13);
\node[neuron, fill=gray!10] (h21) at (8, 3) {$a^2_1$, $z^2_1$};
\node[neuron, fill=gray!10] (h22) at (8, 0) {$a^2_2$, $z^2_2$};
\node[neuron, fill=gray!10] (h23) at (8, -3) {$a^2_3$, $z^2_3$};
\draw[-] (h11) -- (h21);
\draw[-] (h11) -- (h22);
\draw[-] (h11) -- (h23);
\draw[-] (h12) -- (h21);
\draw[-] (h12) -- (h22);
\draw[-] (h12) -- (h23);
\draw[-] (h13) -- (h21);
\draw[-] (h13) -- (h22);
\draw[-] (h13) -- (h23);
\node[neuron, fill=gray!10] (o1) at (12, 2) {$a^3_1$, $z^3_1=y_1$};
\node[neuron, fill=gray!10] (o2) at (12, -2) {$a^3_2$, $z^3_2=y_2$};
\draw[->, line width=1mm] (h21) -- (o1);
\draw[->, line width=1mm] (h21) -- (o2);
\draw[->, line width=1mm] (h22) -- (o1);
\draw[->, line width=1mm] (h22) -- (o2);
\draw[->, line width=1mm] (h23) -- (o1);
\draw[->, line width=1mm] (h23) -- (o2);
Explanation: $$ z^{2} = h \left( w^{2} \cdot z^{1} + b^2 \right) = h \left( a^2 \right)$$
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm, align=center]
\node[neuron] (i1) at (0, 2) {$x_1$};
\node[neuron] (i2) at (0, -2) {$x_2$};
\node[neuron] (h11) at (4, 3) {$a^1_1$, $z^1_1$};
\node[neuron] (h12) at (4, 0) {$a^1_2$, $z^1_2$};
\node[neuron] (h13) at (4, -3) {$a^1_3$, $z^1_3$};
\draw[-] (i1) -- (h11);
\draw[-] (i2) -- (h11);
\draw[-] (i1) -- (h12);
\draw[-] (i2) -- (h12);
\draw[-] (i1) -- (h13);
\draw[-] (i2) -- (h13);
\node[neuron] (h21) at (8, 3) {$a^2_1$, $z^2_1$};
\node[neuron] (h22) at (8, 0) {$a^2_2$, $z^2_2$};
\node[neuron] (h23) at (8, -3) {$a^2_3$, $z^2_3$};
\draw[-] (h11) -- (h21);
\draw[-] (h11) -- (h22);
\draw[-] (h11) -- (h23);
\draw[-] (h12) -- (h21);
\draw[-] (h12) -- (h22);
\draw[-] (h12) -- (h23);
\draw[-] (h13) -- (h21);
\draw[-] (h13) -- (h22);
\draw[-] (h13) -- (h23);
\node[neuron, fill=gray!10] (o1) at (12, 2) {$a^3_1$, $z^3_1=y_1$ \\ $\delta^3_1 = z_1 - y_1$};
\node[neuron, fill=gray!10] (o2) at (12, -2) {$a^3_2$, $z^3_2=y_2$ \\ $\delta^3_2 = z_2 - y_2$};
\draw[-] (h21) -- (o1);
\draw[-] (h21) -- (o2);
\draw[-] (h22) -- (o1);
\draw[-] (h22) -- (o2);
\draw[-] (h23) -- (o1);
\draw[-] (h23) -- (o2);
Explanation: $$ y = z^{3} = h \left( w^{3} \cdot z^{2} + b^3 \right) = h \left( a^3 \right)$$
The Cost Function
Because a neural network must output real-valued conditional probabilities, it uses a sum-of-squares cost function, unlike the perceptron.
$$
\begin{eqnarray} C(w,b) =
\frac{1}{2n} \sum_i \| y_i - \hat{y}(x_i; w, b)\|^2 = \frac{1}{2n} \sum_i \| y_i - z_i \|^2
\end{eqnarray}
$$
Weight Optimization
To find the optimal weights that minimize the cost function, the steepest gradient descent method based on the gradient is applied as follows.
$$
\begin{eqnarray}
\Delta w = -\eta \nabla C,
\end{eqnarray}
$$
Here $\eta$ is the learning rate.
$$
\begin{eqnarray}
\nabla C \equiv \left(\frac{\partial C}{\partial w_1}, \ldots,
\frac{\partial C}{\partial w_m}\right)^T
\end{eqnarray}
$$
The weight update rule is:
$$
\begin{eqnarray}
w_k & \rightarrow & w_k' = w_k-\eta \frac{\partial C}{\partial w_k} \
b_l & \rightarrow & b_l' = b_l-\eta \frac{\partial C}{\partial b_l}
\end{eqnarray}
$$
Stochastic Gradient Descent
In practice, SGD (Stochastic Gradient Descent) is used more often than plain steepest gradient descent. SGD computes the gradient using only $m$ of the data samples instead of the entire dataset.
$$
\begin{eqnarray}
\frac{\sum_{j=1}^m \nabla C_{X_{j}}}{m} \approx \frac{\sum_x \nabla C_x}{n} = \nabla C
\end{eqnarray}
$$
In this case the weight update rule becomes:
$$
\begin{eqnarray}
w_k & \rightarrow & w_k' = w_k-\frac{\eta}{m}
\sum_j \frac{\partial C_{X_j}}{\partial w_k} \
b_l & \rightarrow & b_l' = b_l-\frac{\eta}{m}
\sum_j \frac{\partial C_{X_j}}{\partial b_l},
\end{eqnarray}
$$
Back Propagation
If the derivatives were computed numerically in a naive way, the derivative would have to be computed separately for every weight. Using the back propagation method, however, the derivatives with respect to all weights can be computed at once.
Expressed in equations, back propagation works as follows.
First, $\delta$ is propagated from back to front, where $\delta$ is the value defined as:
$$
\delta_j = \dfrac{\partial C}{\partial a_j}
$$
$$
\begin{eqnarray}
\delta^{l-1}_j = h'(a^{l-1}_j) \sum_k w^l_{kj} \delta^l_k
\end{eqnarray}
$$
Written in vector-matrix form:
$$
\delta^{l-1} = h'(a^{l-1}) \odot ((w^{l})^T \delta^{l})
$$
Here the $\odot$ operator is the Hadamard product (also called the Schur product), defined as:
$$
\left(\begin{array}{ccc} \mathrm{a}_{11} & \mathrm{a}_{12} & \mathrm{a}_{13}\\ \mathrm{a}_{21} & \mathrm{a}_{22} & \mathrm{a}_{23}\\ \mathrm{a}_{31} & \mathrm{a}_{32} & \mathrm{a}_{33} \end{array}\right) \odot \left(\begin{array}{ccc} \mathrm{b}_{11} & \mathrm{b}_{12} & \mathrm{b}_{13}\\ \mathrm{b}_{21} & \mathrm{b}_{22} & \mathrm{b}_{23}\\ \mathrm{b}_{31} & \mathrm{b}_{32} & \mathrm{b}_{33} \end{array}\right) = \left(\begin{array}{ccc} \mathrm{a}_{11}\, \mathrm{b}_{11} & \mathrm{a}_{12}\, \mathrm{b}_{12} & \mathrm{a}_{13}\, \mathrm{b}_{13}\\ \mathrm{a}_{21}\, \mathrm{b}_{21} & \mathrm{a}_{22}\, \mathrm{b}_{22} & \mathrm{a}_{23}\, \mathrm{b}_{23}\\ \mathrm{a}_{31}\, \mathrm{b}_{31} & \mathrm{a}_{32}\, \mathrm{b}_{32} & \mathrm{a}_{33}\, \mathrm{b}_{33} \end{array}\right)
$$
The derivative of the cost with respect to a weight is obtained as:
$$
\frac{\partial C}{\partial w^l_{jk}} = \delta^l_j z^{l-1}_k
$$
Also, $\delta$ at the final layer is simply the prediction error itself:
$$
\delta^L_j = z_j - y_j
$$
Therefore, by propagating the error back to the front according to the equations above, the derivatives with respect to all the weights can be obtained.
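The backward pass can be sketched end-to-end in numpy. This is an illustrative toy of my own (random weights, sigmoid everywhere); note that it keeps the $h'(a^3)$ factor in $\delta^3$, which the simplified final-layer formula omits. One gradient entry is checked against a numerical derivative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
x = np.array([1.0, 0.5])
t = np.array([1.0, 0.0])   # target vector (the y in the cost function)

W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(3, 3))
W3 = rng.normal(size=(2, 3))

# forward pass
a1 = W1 @ x;  z1 = sigmoid(a1)
a2 = W2 @ z1; z2 = sigmoid(a2)
a3 = W3 @ z2; z3 = sigmoid(a3)

# backward pass: propagate delta from the output layer to the front
d3 = (z3 - t) * z3 * (1 - z3)        # delta^3, including h'(a^3)
d2 = z2 * (1 - z2) * (W3.T @ d3)     # delta^2 = h'(a^2) elementwise (W3^T delta^3)
d1 = z1 * (1 - z1) * (W2.T @ d2)     # delta^1 = h'(a^1) elementwise (W2^T delta^2)

# gradients: dC/dW^l = outer(delta^l, z^{l-1})
gW3, gW2, gW1 = np.outer(d3, z2), np.outer(d2, z1), np.outer(d1, x)

# numerical check of one entry of gW3 for C = 0.5 * ||sigmoid(W3 z2) - t||^2
eps = 1e-6
Wp, Wm = W3.copy(), W3.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
Cp = 0.5 * np.sum((sigmoid(Wp @ z2) - t) ** 2)
Cm = 0.5 * np.sum((sigmoid(Wm @ z2) - t) ** 2)
print(abs(gW3[0, 0] - (Cp - Cm) / (2 * eps)) < 1e-6)  # True
```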
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm, align=center]
\node[neuron] (i1) at (0, 2) {$x_1$};
\node[neuron] (i2) at (0, -2) {$x_2$};
\node[neuron] (h11) at (4, 3) {$a^1_1$, $z^1_1$};
\node[neuron] (h12) at (4, 0) {$a^1_2$, $z^1_2$};
\node[neuron] (h13) at (4, -3) {$a^1_3$, $z^1_3$};
\draw[-] (i1) -- (h11);
\draw[-] (i2) -- (h11);
\draw[-] (i1) -- (h12);
\draw[-] (i2) -- (h12);
\draw[-] (i1) -- (h13);
\draw[-] (i2) -- (h13);
\node[neuron, fill=gray!10] (h21) at (8, 3) {$a^2_1$, $z^2_1$ \\ $\delta^2_1$};
\node[neuron, fill=gray!10] (h22) at (8, 0) {$a^2_2$, $z^2_2$ \\ $\delta^2_2$};
\node[neuron, fill=gray!10] (h23) at (8, -3) {$a^2_3$, $z^2_3$ \\ $\delta^2_3$};
\draw[-] (h11) -- (h21);
\draw[-] (h11) -- (h22);
\draw[-] (h11) -- (h23);
\draw[-] (h12) -- (h21);
\draw[-] (h12) -- (h22);
\draw[-] (h12) -- (h23);
\draw[-] (h13) -- (h21);
\draw[-] (h13) -- (h22);
\draw[-] (h13) -- (h23);
\node[neuron, fill=gray!10] (o1) at (12, 2) {$a^3_1$, $z^3_1=y_1$ \\ $\delta^3_1 = z_1 - y_1$};
\node[neuron, fill=gray!10] (o2) at (12, -2) {$a^3_2$, $z^3_2=y_2$ \\ $\delta^3_2 = z_2 - y_2$};
\draw[<-, line width=0.5mm] (h21) -- (o1);
\draw[<-, line width=0.5mm] (h21) -- (o2);
\draw[<-, line width=0.5mm] (h22) -- (o1);
\draw[<-, line width=0.5mm] (h22) -- (o2);
\draw[<-, line width=0.5mm] (h23) -- (o1);
\draw[<-, line width=0.5mm] (h23) -- (o2);
Explanation: $$
\delta^3_j = z_j - y_j
$$
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm, align=center]
\node[neuron] (i1) at (0, 2) {$x_1$};
\node[neuron] (i2) at (0, -2) {$x_2$};
\node[neuron, fill=gray!10] (h11) at (4, 3) {$a^1_1$, $z^1_1$ \\ $\delta^1_1$};
\node[neuron, fill=gray!10] (h12) at (4, 0) {$a^1_2$, $z^1_2$ \\ $\delta^1_2$};
\node[neuron, fill=gray!10] (h13) at (4, -3) {$a^1_3$, $z^1_3$ \\ $\delta^1_3$};
\draw[-] (i1) -- (h11);
\draw[-] (i2) -- (h11);
\draw[-] (i1) -- (h12);
\draw[-] (i2) -- (h12);
\draw[-] (i1) -- (h13);
\draw[-] (i2) -- (h13);
\node[neuron, fill=gray!10] (h21) at (8, 3) {$a^2_1$, $z^2_1$ \\ $\delta^2_1$};
\node[neuron, fill=gray!10] (h22) at (8, 0) {$a^2_2$, $z^2_2$ \\ $\delta^2_2$};
\node[neuron, fill=gray!10] (h23) at (8, -3) {$a^2_3$, $z^2_3$ \\ $\delta^2_3$};
\draw[<-, line width=0.5mm] (h11) -- (h21);
\draw[<-, line width=0.5mm] (h11) -- (h22);
\draw[<-, line width=0.5mm] (h11) -- (h23);
\draw[<-, line width=0.5mm] (h12) -- (h21);
\draw[<-, line width=0.5mm] (h12) -- (h22);
\draw[<-, line width=0.5mm] (h12) -- (h23);
\draw[<-, line width=0.5mm] (h13) -- (h21);
\draw[<-, line width=0.5mm] (h13) -- (h22);
\draw[<-, line width=0.5mm] (h13) -- (h23);
\node[neuron, fill=gray!10] (o1) at (12, 2) {$a^3_1$, $z^3_1=y_1$ \\ $\delta^3_1 = z_1 - y_1$};
\node[neuron, fill=gray!10] (o2) at (12, -2) {$a^3_2$, $z^3_2=y_2$ \\ $\delta^3_2 = z_2 - y_2$};
\draw[-] (h21) -- (o1);
\draw[-] (h21) -- (o2);
\draw[-] (h22) -- (o1);
\draw[-] (h22) -- (o2);
\draw[-] (h23) -- (o1);
\draw[-] (h23) -- (o2);
Explanation: $$ \frac{\partial C}{\partial w^3_{jk}} = z^2_k \delta^3_j $$
$$ \delta^2 = h'(a^2) \odot ((w^{3})^T \delta^{3}) $$
End of explanation
%%tikz --size 600,400
\tikzstyle{neuron}=[circle, draw, minimum size=2 cm,inner sep=5pt, node distance=2cm, align=center]
\node[neuron, fill=gray!10] (i1) at (0, 2) {$x_1$};
\node[neuron, fill=gray!10] (i2) at (0, -2) {$x_2$};
\node[neuron, fill=gray!10] (h11) at (4, 3) {$a^1_1$, $z^1_1$ \\ $\delta^1_1$};
\node[neuron, fill=gray!10] (h12) at (4, 0) {$a^1_2$, $z^1_2$ \\ $\delta^1_2$};
\node[neuron, fill=gray!10] (h13) at (4, -3) {$a^1_3$, $z^1_3$ \\ $\delta^1_3$};
\draw[<-, line width=0.5mm] (i1) -- (h11);
\draw[<-, line width=0.5mm] (i2) -- (h11);
\draw[<-, line width=0.5mm] (i1) -- (h12);
\draw[<-, line width=0.5mm] (i2) -- (h12);
\draw[<-, line width=0.5mm] (i1) -- (h13);
\draw[<-, line width=0.5mm] (i2) -- (h13);
\node[neuron, fill=gray!10] (h21) at (8, 3) {$a^2_1$, $z^2_1$ \\ $\delta^2_1$};
\node[neuron, fill=gray!10] (h22) at (8, 0) {$a^2_2$, $z^2_2$ \\ $\delta^2_2$};
\node[neuron, fill=gray!10] (h23) at (8, -3) {$a^2_3$, $z^2_3$ \\ $\delta^2_3$};
\draw[-] (h11) -- (h21);
\draw[-] (h11) -- (h22);
\draw[-] (h11) -- (h23);
\draw[-] (h12) -- (h21);
\draw[-] (h12) -- (h22);
\draw[-] (h12) -- (h23);
\draw[-] (h13) -- (h21);
\draw[-] (h13) -- (h22);
\draw[-] (h13) -- (h23);
\node[neuron, fill=gray!10] (o1) at (12, 2) {$a^3_1$, $z^3_1=y_1$ \\ $\delta^3_1 = z_1 - y_1$};
\node[neuron, fill=gray!10] (o2) at (12, -2) {$a^3_2$, $z^3_2=y_2$ \\ $\delta^3_2 = z_2 - y_2$};
\draw[-] (h21) -- (o1);
\draw[-] (h21) -- (o2);
\draw[-] (h22) -- (o1);
\draw[-] (h22) -- (o2);
\draw[-] (h23) -- (o1);
\draw[-] (h23) -- (o2);
Explanation: $$ \frac{\partial C}{\partial w^2_{jk}} = z^1_k \delta^2_j $$
$$ \delta^1 = h'(a^1) \odot ((w^{2})^T \delta^{2}) $$
End of explanation |
8,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object-Oriented Programming
Step1: When you access mycircle.radius, the value is actually looked up through the dictionary mycircle.__dict__. But going up to the class level, attributes are likewise stored in the class's dictionary
Step2: In other words, when looking up an attribute or method, the contents of the mro are searched recursively until a match is found.
In traditional Python code, we can break object-oriented encapsulation in the following way:
Step3: As you can see, although the statement mycircle.radius = 3 tries to change the instance attribute, the change does not take effect
Step4: So when we access an attribute on an object in this way, obj.foo, we get the simple rules shown below:
1. If an attribute with that name exists, its value obj.foo is accessed directly
2. obj.__dict__ is consulted
3. or type(obj).__dict__ is queried
4. continuing until a suitable match is found in the mro
5. An assignment always creates a key-value pair in obj.__dict__
6. unless a setter property has been defined
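These rules can be sketched with a tiny example (the class names reuse the ones from the code below; the demo itself is my own):

```python
class Widget(object):
    copyright = 'witrett, inc.'   # class attribute, found through the mro

class Circle(Widget):
    PI = 3.14

c = Circle()
print(c.copyright)        # 'witrett, inc.' -- not in c.__dict__, found on Widget
c.copyright = 'mine'      # assignment creates a key in c.__dict__
print(c.copyright)        # 'mine' -- the instance dict now shadows the class
print(Widget.copyright)   # 'witrett, inc.' -- the class itself is unchanged
```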
MRO and the C3 Algorithm
MRO is mainly used under multiple inheritance to determine the path along which an accessed attribute is resolved.
Local precedence order: the order of parent classes at declaration; for example with C(A,B), when accessing an attribute of a C object, class A should be searched first, then class B, following the declaration order.
Monotonicity: if A comes before B in C's resolution order, then this order must also hold in all subclasses of C.
Step5: In fact, you only need to know the following rules:
mro(G) = [G] + merge(mro[E], mro[F], [E,F])
= [G] + merge([E,A,B,O], [F,B,C,O], [E,F])
= [G,E] + merge([A,B,O], [F,B,C,O], [F])
= [G,E,A] + merge([B,O], [F,B,C,O], [F])
= [G,E,A,F] + merge([B,O], [B,C,O])
= [G,E,A,F,B] + merge([O], [C,O])
= [G,E,A,F,B,C] + merge([O], [O])
= [G,E,A,F,B,C,O]
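The hand computation above can be checked against Python's own linearization (a small sketch using the same class hierarchy):

```python
class A: pass
class B: pass
class C: pass
class E(A, B): pass
class F(B, C): pass
class G(E, F): pass

print([k.__name__ for k in G.__mro__])
# ['G', 'E', 'A', 'F', 'B', 'C', 'object']
```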
Delegation
Step6: This example has two classes, cook and material:
Step7: Originally, the only relationship between the material and cook classes is that the cook class is instantiated in material's 'constructor'; as soon as a needed method is not found on a material instance, the method from the cook class is loaded automatically.
Step8: Design Patterns
If you want a deeper understanding of software development, design patterns are an unavoidable topic. This training introduces only two common examples:
One advantage of the abstract factory, usually invisible from the user's perspective when factory methods are used, is that it can change application behavior dynamically (at runtime) by switching the active factory method. A classic example is letting users change the application's look and feel (for example, Apple style vs. Windows style) while using it, without having to stop and restart the application.
Step9: In this example, the input age determines whether FrogWorld or WizardWorld is instantiated, and then the play method is called directly.
The strategy pattern encourages using multiple algorithms to solve one problem; its killer feature is the ability to switch algorithms transparently at runtime (client code does not perceive the change). So if you have two algorithms and know that one works better for small inputs and the other for large inputs, you can use the strategy pattern to decide at runtime, based on the input data, which algorithm to use.
Step11: 可以看到,这两个例子都是非常简单的,设计模式就是这样:完全都是经验的总结。使用python实现设计模式尤其简单,很多时候会发现自己组织实现了几个类,就是用到了好几种模式。如果小伙伴没有计划从事软件开发,不掌握也没有关系,反之,则必须要有所了解。
GIL
Global Interpreter lock
Step12: For multithreading, this is basically as far as we will go, because it is really not recommended: there is far too much you would need to understand, and a small mistake can actually make performance worse.
Decorators
A decorator is a special kind of function. The implementation details are quite formulaic, but used well, decorators can greatly simplify program design, remove a lot of duplicated code, and enhance functionality
Step13: Note that the example above uses the wraps function from the standard library functools as a decorator to decorate the wrapper function; this preserves the metadata of the decorated function
Step14: A decorator is a function that accepts a function as an argument and returns a new function.
Step15: which is effectively the same as writing it like this:
Step16: Defining a decorator as a class
Step17: Using the wrap method
Step19: Decorators that take arguments
Step20: At first glance this implementation looks complicated, but the core idea is simple. The outermost function logged() accepts the parameters and applies them to the inner decorator function. The inner function decorate() accepts a function as an argument and places a wrapper around it. The key point is that the wrapper can use the parameters passed to logged().
Step21: The decorator processing is equivalent to the following call:
Step22: Decorators are fairly advanced Python knowledge. If you don't plan to work as a Python developer, you don't have to master them, but you should still understand what this syntax does.
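As a compact sketch of the points above, here is a parametrized `logged` decorator of the kind the text describes, using functools.wraps to preserve metadata (the exact names follow the text's description; the rest is my reconstruction):

```python
import logging
from functools import wraps

def logged(level, name=None, message=None):
    def decorate(func):
        logname = name if name else func.__module__
        log = logging.getLogger(logname)
        logmsg = message if message else func.__name__

        @wraps(func)                    # keep func's __name__, __doc__, etc.
        def wrapper(*args, **kwargs):
            log.log(level, logmsg)
            return func(*args, **kwargs)
        return wrapper
    return decorate

@logged(logging.DEBUG)
def add(x, y):
    return x + y

print(add(2, 3))      # 5
print(add.__name__)   # 'add' -- preserved by @wraps
```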
classmethod, staticmethod
Step23: @classmethod and @staticmethod do not actually create directly callable objects; instead, they create special descriptor objects
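A quick sketch of that point (an example of my own): looking in the class `__dict__` shows the raw descriptor objects, while normal attribute access triggers their binding behavior.

```python
class Spam:
    @classmethod
    def cmeth(cls):
        return cls.__name__

    @staticmethod
    def smeth():
        return 'static'

# stored in the class dict as descriptor objects, not plain functions
print(type(Spam.__dict__['cmeth']))   # <class 'classmethod'>
print(type(Spam.__dict__['smeth']))   # <class 'staticmethod'>

# attribute access goes through the descriptor protocol, so these are callable
print(Spam.cmeth())   # 'Spam'
print(Spam.smeth())   # 'static'
```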
class Circle(object):
PI = 3.14 # class variable
def __init__(self,radius):
self.radius = radius # instance variable
def get_areas(self):
return self.PI * self.radius * self.radius # note: must be self.PI (or Circle.PI); a bare PI is not defined here
mycircle = Circle(2) # instantiation
print(mycircle.radius) # instance variable
print(mycircle.PI) # class variable
print(Circle.PI) # the class variable can also be accessed through the class name
Explanation: Object-Oriented Programming
End of explanation
Circle.__dict__
# what if we add inheritance?
class Widget(object):
copyright = 'witrett, inc.'
class Circle(Widget):
PI = 3.14
# copyright = 'circle copyright'
def __init__(self, radius):
self.radius = radius
mycircle = Circle(2)
print(type(mycircle).mro())
# mro = method resolution order; under multiple inheritance it determines which class an attribute is resolved from. More on this later.
print(mycircle.copyright) # clearly, this uses the copyright variable from Widget
Circle.__dict__
Explanation: When you access mycircle.radius, the value is actually looked up through the dictionary mycircle.__dict__. But going up to the class level, attributes are likewise stored in the class's dictionary:
End of explanation
class Widget(object):
copyright = 'witrett, inc.'
class Circle(Widget):
PI = 3.14
def __init__(self, radius):
self.radius = radius
self.circumference = 2 * self.radius * self.PI
mycircle = Circle(2)
mycircle.radius = 3
mycircle.circumference # note: the change did not take effect (circumference was computed once in __init__)
Explanation: In other words, when looking up an attribute or method, the contents of the mro are searched recursively until a match is found.
In traditional Python code, we can break object-oriented encapsulation in the following way:
End of explanation
# let's try to change all that:
class Circle(Widget):
PI = 3.14
def __init__(self, radius):
self.radius = radius
@property
def circumference(self):
return 2 * self.radius * self.PI # now ok
mycircle = Circle(2)
mycircle.radius = 3
mycircle.circumference # the attribute change now takes effect
Explanation: As you can see, although the statement mycircle.radius = 3 tries to change the instance attribute, the change does not take effect
End of explanation
def c3_lineration(kls):
if len(kls.__bases__) == 1:
return [kls, kls.__base__]
else:
l = [c3_lineration(base) for base in kls.__bases__]
l.append([base for base in kls.__bases__])
return [kls] + merge(l)
def merge(args):
if args:
for mro_list in args:
for class_type in mro_list:
for comp_list in args:
if class_type in comp_list[1:]:
break
else:
next_merge_list = []
for arg in args:
if class_type in arg:
arg.remove(class_type)
if arg:
next_merge_list.append(arg)
else:
next_merge_list.append(arg)
return [class_type] + merge(next_merge_list)
else:
raise Exception
else:
return []
class A(object):pass
class B(object):pass
class C(object):pass
class E(A,B):pass
class F(B,C):pass
class G(E,F):pass
print(c3_lineration(G))
Explanation: So when we access an attribute on an object in this way, obj.foo, we get the simple rules shown below:
1. If an attribute with that name exists, its value obj.foo is accessed directly
2. obj.__dict__ is consulted
3. or type(obj).__dict__ is queried
4. continuing until a suitable match is found in the mro
5. An assignment always creates a key-value pair in obj.__dict__
6. unless a setter property has been defined
MRO and the C3 Algorithm
MRO is mainly used under multiple inheritance to determine the path along which an accessed attribute is resolved.
Local precedence order: the order of parent classes at declaration; for example with C(A,B), when accessing an attribute of a C object, class A should be searched first, then class B, following the declaration order.
Monotonicity: if A comes before B in C's resolution order, then this order must also hold in all subclasses of C.
End of explanation
# This example tries to explain the use of getattr and __getattr__: although the wrapper class does not define an append method, list's append method can still be called through getattr.
class wrapper:
def __init__(self, object):
self.wrapped = object
def __getattr__(self, attrname):
print('Trace:', attrname)
return getattr(self.wrapped, attrname)
x = wrapper([1,2,3])
x.append(4)
x
x.wrapped
Explanation: In fact, you only need to know the following rules:
mro(G) = [G] + merge(mro[E], mro[F], [E,F])
= [G] + merge([E,A,B,O], [F,B,C,O], [E,F])
= [G,E] + merge([A,B,O], [F,B,C,O], [F])
= [G,E,A] + merge([B,O], [F,B,C,O], [F])
= [G,E,A,F] + merge([B,O], [B,C,O])
= [G,E,A,F,B] + merge([O], [C,O])
= [G,E,A,F,B,C] + merge([O], [O])
= [G,E,A,F,B,C,O]
Delegation
End of explanation
class cook:
def __init__(self, material='rice'):
self.material = material
def boil(self):
print('had boiled ', self.material)
class material:
def __init__(self, rice):
self.rice = rice
self.meth = cook()
def clean(self):
print('clean up first')
def __getattr__(self, attr):# this way, once a needed instance variable or method is not found on the material instance, it is looked up on the cook class
return getattr(self.meth, attr)
m = material('rice')
m.boil()# note: m is an instance of material, but boil belongs to cook; they are connected via the getattr call above
Explanation: This example has two classes, cook and material:
End of explanation
m.meth.__dict__
Explanation: Originally, the only relationship between the material and cook classes is that the cook class is instantiated in material's 'constructor'; as soon as a needed method is not found on a material instance, the method from the cook class is loaded automatically.
End of explanation
class Frog:
def __init__(self, name):
self.name = name
def __str__(self):
return self.name
def interact_with(self, obstacle):
print('{} the Frog encounters {} and {}!'.format(self,
obstacle, obstacle.action()))
class Bug:
def __str__(self):
return 'a bug'
def action(self):
return 'eats it'
class FrogWorld:
def __init__(self, name):
print(self)
self.player_name = name
def __str__(self):
return '\n\n\t------ Frog World -------'
def make_character(self):
return Frog(self.player_name)
def make_obstacle(self):
return Bug()
######################### dividing line: two different families of classes #############################
class Wizard:
def __init__(self, name):
self.name = name
def __str__(self):
return self.name
def interact_with(self, obstacle):
print(
'{} the Wizard battles against {} and {}!'.format(
self,
obstacle,
obstacle.action()))
class Ork:
def __str__(self):
return 'an evil ork'
def action(self):
return 'kills it'
class WizardWorld:
def __init__(self, name):
print(self)
self.player_name = name
def __str__(self):
return '\n\n\t------ Wizard World -------'
def make_character(self):
return Wizard(self.player_name)
def make_obstacle(self):
return Ork()
class GameEnvironment:
def __init__(self, factory):
self.hero = factory.make_character()
self.obstacle = factory.make_obstacle()
def play(self):
self.hero.interact_with(self.obstacle)
######################### above: the different implementations of the two class families ###############################
def validate_age(name):
try:
age = input('Welcome {}. How old are you? '.format(name))
age = int(age)
except ValueError as err:
print("Age {} is invalid, please try again...".format(age))
return (False, age)
return (True, age)
def main():
name = input("Hello. What's your name? ")
valid_input = False
while not valid_input:
        valid_input, age = validate_age(name)  # check that the input is valid and get the age
    game = FrogWorld if age < 18 else WizardWorld  # pick FrogWorld or WizardWorld based on the age
environment = GameEnvironment(game(name))
environment.play()
main()
# Hello. What's your name? Nick
# Welcome Nick. How old are you? 17
# ------ Frog World -------
# Nick the Frog encounters a bug and eats it!
Explanation: Design patterns
If you want to dig deeper into software development, design patterns are a topic you cannot avoid. This training only introduces two common examples:
The abstract factory has one advantage that is usually invisible from the user's point of view when using factory methods: it can change application behavior dynamically (at run time) by switching the active factory. A classic example is letting users change the application's look and feel (e.g. Apple style vs. Windows style) while it is running, without terminating and restarting it.
End of explanation
import types
class StrategyExample:
def __init__(self, func=None):
self.name = 'Strategy Example 0'
if func is not None:
self.execute = types.MethodType(func, self)# MethodType: The type of methods of user-defined class instances.
print(self.execute)
def execute(self):
print(self.name)
def execute_replacement1(self):
print(self.name + ' from execute 1')
def execute_replacement2(self):
print(self.name + ' from execute 2')
if __name__ == '__main__':
strat0 = StrategyExample()
strat1 = StrategyExample(execute_replacement1)
strat1.name = 'Strategy Example 1'
strat2 = StrategyExample(execute_replacement2)
strat2.name = 'Strategy Example 2'
strat0.execute()
strat1.execute()
strat2.execute()
Explanation: In this example, the input age decides whether FrogWorld or WizardWorld gets instantiated, after which play is called directly.
The strategy pattern encourages using multiple algorithms to solve one problem; its killer feature is switching algorithms transparently at run time (client code is unaware of the change). So if you have two algorithms and know that one works better on small inputs and the other on large inputs, the strategy pattern lets you decide at run time, based on the input data, which algorithm to use.
End of explanation
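A hedged sketch of that run-time switching idea; the size threshold and both strategy functions below are invented for illustration and are not part of the original example:

```python
def scan_small(items, target):
    # linear scan is fine for short lists
    return target in items

def scan_large(items, target):
    # building a set first pays off on big inputs
    return target in set(items)

def contains(items, target):
    # pick the strategy at run time based on the input size
    strategy = scan_small if len(items) < 100 else scan_large
    return strategy(items, target)

print(contains([1, 2, 3], 2))             # True
print(contains(list(range(10000)), -1))   # False
```

The caller only ever sees `contains`; which algorithm ran is invisible to it, which is exactly the transparency the pattern promises.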
import threading
def worker(num):
thread worker function
print('Worker: %s' % num)
threads = []
for i in range(5):
t = threading.Thread(target=worker, args=(i,))
threads.append(t)
t.start()
Explanation: As you can see, both examples are very simple; that is exactly what design patterns are: distilled experience. Implementing design patterns in Python is especially easy, and often you will organize a few classes and find you have already used several patterns. If you do not plan to work in software development, it is fine not to master them; otherwise, you need to be familiar with them.
GIL
Global Interpreter Lock: the Python interpreter can only run in one thread at any given moment. For more on this topic, see David Beazley's talks: http://www.dabeaz.com/talks.html
End of explanation
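A small, machine-dependent sketch of the GIL's effect on CPU-bound work: two threads doing pure-Python counting typically finish in roughly the time of running both counts sequentially, because only one thread holds the interpreter at a time. The function and variable names here are made up for the demonstration, and the exact timings will vary by machine:

```python
import threading
import time

N = 2_000_000

def busy_count(n):
    # pure-Python CPU-bound loop; never releases the GIL for long
    while n > 0:
        n -= 1

start = time.time()
busy_count(N)
busy_count(N)
sequential = time.time() - start

start = time.time()
workers = [threading.Thread(target=busy_count, args=(N,)) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
threaded = time.time() - start

print('sequential: %.2fs  threaded: %.2fs' % (sequential, threaded))
```

For I/O-bound work (sockets, disk), threads do help, because the GIL is released while waiting.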
import time
from functools import wraps
def timethis(func):
'''
Decorator that reports the execution time.
'''
#@wraps(func)
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
end = time.time()
print(func.__name__, end-start)
return result
return wrapper
Explanation: For multithreading we can basically stop here, because it is really not recommended: there is simply too much to understand, and one careless step makes performance even worse.
Decorators
A decorator is a special kind of function. The implementation details are quite formulaic, but used well, decorators can greatly simplify program design, cut a lot of duplicated code, and extend functionality.
End of explanation
@timethis
def countdown(n):
'''
Counts down
'''
while n > 0:
n -= 1
countdown(100000)
countdown.__name__
countdown.__doc__
countdown.__annotations__
Explanation: Note that the example above uses the wraps method from the standard library's functools as a decorator on the wrapper function; doing so preserves the metadata of the decorated function.
End of explanation
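A minimal side-by-side sketch (the helper and function names here are invented) of what functools.wraps actually preserves: without it, the wrapper's own name and docstring shadow those of the decorated function.

```python
import functools

def plain(func):
    # decorator WITHOUT wraps: metadata is lost
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def wrapped(func):
    # decorator WITH wraps: metadata is copied onto the wrapper
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@plain
def first():
    """first's docstring"""

@wrapped
def second():
    """second's docstring"""

print(first.__name__, first.__doc__)    # wrapper None
print(second.__name__, second.__doc__)  # second second's docstring
```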
@timethis
def countdown(n):
pass
Explanation: A decorator is simply a function that accepts a function as an argument and returns a new function.
End of explanation
def countdown(n):
pass
countdown = timethis(countdown)
Explanation: Writing it out like the following has exactly the same effect:
End of explanation
import types
import time
from functools import wraps
class thistime:
def __init__(self, func):
wraps(func)(self)
self.ncalls = 0
    def __call__(self, *args, **kwargs):  # the decorator behavior is implemented here
self.ncalls += 1
start = time.time()
f = self.__wrapped__(*args, **kwargs)
end = time.time()
        print(self.__wrapped__.__name__, end-start)
return f
def __get__(self, instance, cls):
if instance is None:
return self
else:
return types.MethodType(self, instance)
# The key to implementing a class-based decorator is the __call__ method; if you find this style hard to follow, you do not have to write it this way, since Python leaves a fair amount of flexibility
@thistime
def countdown(n):
while n > 0:
n -= 1
countdown(100)
Explanation: Defining a decorator as a class
End of explanation
import time
from functools import wraps
def timethis(func):
'''
Decorator that reports the execution time.
'''
    @wraps(func)  # best practice: use wraps by default
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
end = time.time()
print(func.__name__, end-start)
return result
return wrapper
@timethis
def countdown(n):
'''
Counts down
'''
while n > 0:
n -= 1
countdown(100000)
countdown.__name__
countdown.__doc__
countdown.__annotations__
Explanation: Using the wraps method
End of explanation
from functools import wraps
import logging
def logged(level, name=None, message=None):
Add logging to a function. level is the logging
level, name is the logger name, and message is the
log message. If name and message aren't specified,
they default to the function's module and name.
def decorate(func):
logname = name if name else func.__module__
log = logging.getLogger(logname)
logmsg = message if message else func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
log.log(level, logmsg)
return func(*args, **kwargs)
return wrapper
return decorate
# Example use
@logged(logging.CRITICAL)
def add(x, y):
return x + y
@logged(logging.CRITICAL, 'example', 'this is very important')
def spam_func():
print('Spam!')
Explanation: Decorators that take arguments
End of explanation
add(3,4)
spam_func()
@decorator(x, y, z)
def func(a, b):
pass
Explanation: At first glance this implementation looks complex, but the core idea is simple. The outermost function logged() accepts the arguments and applies them to the inner decorator function. The inner function decorate() accepts a function and places a wrapper around it. The key point is that the wrapper can use the arguments passed to logged().
End of explanation
def func(a, b):
pass
func = decorator(x, y, z)(func)
Explanation: The decorator processing is equivalent to the following call:
End of explanation
import time
from functools import wraps
# A simple decorator
def timethis(func):
@wraps(func)
def wrapper(*args, **kwargs):
start = time.time()
r = func(*args, **kwargs)
end = time.time()
print(end-start)
return r
return wrapper
# Class illustrating application of the decorator to different kinds of methods
class Spam:
@timethis
def instance_method(self, n):
print(self, n)
while n > 0:
n -= 1
@classmethod
@timethis
def class_method(cls, n):
print(cls, n)
while n > 0:
n -= 1
@staticmethod
@timethis
def static_method(n):
print(n)
while n > 0:
n -= 1
Explanation: Decorators count as fairly advanced Python knowledge. If you are not aiming for a Python development job you do not need to master them, but you should still understand what this syntax does.
classmethod, staticmethod
s = Spam()
s.instance_method(10)
Spam.class_method(1000)  # a classmethod can be called directly on the class name
Spam.static_method(1000)  # strictly speaking, a staticmethod has nothing to do with its class; it is placed there purely for code organization
s.class_method(11)
Explanation: @classmethod and @staticmethod do not actually create directly callable objects; instead they create special descriptor objects.
End of explanation |
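To make the descriptor point concrete, here is a small sketch (the class and method names are made up) showing that the raw objects stored in the class namespace are descriptors whose __get__ method does the binding:

```python
class Demo:
    @classmethod
    def which(cls):
        return cls.__name__

    @staticmethod
    def tag():
        return 'static'

# the raw object in the class __dict__ is a classmethod descriptor, not a function
raw = Demo.__dict__['which']
print(type(raw).__name__)        # classmethod

# the descriptor protocol (__get__) produces the bound, callable method
bound = raw.__get__(None, Demo)
print(bound())                   # Demo

# staticmethod's __get__ just hands back the plain underlying function
raw_static = Demo.__dict__['tag']
print(raw_static.__get__(None, Demo)())  # static
```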
8,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step34: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step36: Neural Network Training
Hyperparameters
Tune the following parameters
Step38: Build the Graph
Build the graph using the neural network you implemented.
Step40: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step42: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step44: Checkpoint
Step47: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step50: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step52: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True) # descending order
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {
'.':"||Period||",
',':"||Comma||",
'"':"||Quotation_Mark||",
';':"||Semicolon||",
'!':"||Exclamation_mark||",
'?':"||Question_mark||",
'(':"||Left_Parentheses||",
')':"||Right_Parentheses||",
'--':"||Dash||",
'\n':"||Return||"
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input_ = tf.placeholder(shape=[None,None],name='input',dtype=tf.int32) # input shape = [batch_size, seq_size]
targets = tf.placeholder(shape=[None,None],name='targets',dtype=tf.int32)
learning_rate = tf.placeholder(dtype=tf.float32)
return input_, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following the tuple (Input, Targets, LearingRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
lstm_layers = 5
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layers)])  # one fresh cell per layer; [lstm] * lstm_layers reuses a single cell and errors on newer TF 1.x
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, tf.identity(initial_state,name='initial_state')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,dtype=tf.float32)
return outputs, tf.identity(final_state,name="final_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed_dim = 300;
embed = get_embed(input_data,vocab_size,embed_dim)
outputs, final_state = build_rnn(cell,embed)
# print(outputs) # Tensor("rnn/transpose:0", shape=(128, 5, 256), dtype=float32)
# print(final_state) # Tensor("final_state:0", shape=(2, 2, ?, 256), dtype=float32)
    # !!! a good weight initialization really matters here
logits = tf.contrib.layers.fully_connected(outputs,vocab_size,activation_fn=None, #tf.nn.relu
weights_initializer = tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
# print(logits) # Tensor("fully_connected/Relu:0", shape=(128, 5, 27), dtype=float32)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
# Method revised following reviewer feedback; much nicer!
def get_batches(int_text, batch_size, seq_length):
n_batches = int(len(int_text) / (batch_size * seq_length))
x_data = np.array(int_text[: n_batches * batch_size * seq_length])
y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
    x = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
    y = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x, y)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# def get_batches(int_text, batch_size, seq_length):
#
# Return batches of input and target
# :param int_text: Text with the words replaced by their ids
# :param batch_size: The size of batch
# :param seq_length: The length of sequence
# :return: Batches as a Numpy array
#
# # TODO: Implement Function
# batches = []
# n_batchs = (len(int_text)-1) // (batch_size * seq_length)
# # int_text = int_text[:n_batchs*batch_size * seq_length+1]
# for i in range(0,n_batchs*seq_length,seq_length):
# x = []
# y = []
# for j in range(i,i+batch_size * seq_length,seq_length):
# x.append(int_text[j:j+seq_length])
# y.append(int_text[j+1:j+1+seq_length])
# batches.append([x,y])
# return np.array(batches)
# #print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3))
#
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
#
# tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# corpus: about 4257 lines, averaging 11 words per line
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 200
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 10  # !!! raising seq_length from 5 to 10 really helped; what would happen if it were increased further?
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 40
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
# input_data_shape[0] batch size
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
import random
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
r = random.uniform(0,1)
#store prediction char
s = 0
#since length > indices starting at 0
char_id = len(probabilities) - 1
#for each char prediction probabilty
for i in range(len(probabilities)):
#assign it to S
s += probabilities[i]
#check if probability greater than our randomly generated one
if s >= r:
#if it is, thats the likely next char
char_id = i
break
return int_to_vocab[char_id]
    # An alternative, simpler method; for the reasoning behind this choice, see:
    # http://yanyiwu.com/work/2014/01/30/simhash-shi-xian-xiang-jie.html
    # rand = np.sum(probabilities) * np.random.rand(1)
    # pred_word = int_to_vocab[int(np.searchsorted(np.cumsum(probabilities), rand))]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
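As a hedged aside (the variable names below are invented for the example, not the notebook's own), numpy can draw the next word id in a single call, using the probabilities as sampling weights:

```python
import numpy as np

rng = np.random.default_rng(0)
word_probs = np.array([0.1, 0.6, 0.3])   # toy distribution over 3 word ids
id_to_word = {0: 'the', 1: 'cat', 2: 'sat'}

# sample one id proportionally to its probability
word_id = int(rng.choice(len(word_probs), p=word_probs))
next_word = id_to_word[word_id]
print(next_word)
```

This does the same job as the cumulative-sum loop above, just delegated to numpy.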
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
8,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time series in Pastas
R.A. Collenteur, University of Graz, 2020
Time series are at the heart of time series analysis, and therefore need to be considered carefully when dealing with time series models. In this notebook more background information is provided on important characteristics of time series and how these may influence your modeling results. In general, Pastas depends heavily on Pandas for dealing with time series, but adds capabilities to deal with irregular time series and missing data.
All time series should be provided to Pastas as pandas.Series with a pandas.DatetimeIndex. Internally these time series are stored in a pastas.TimeSeries object. The goal of this object is to validate the user-provided time series and enable resampling (changing frequencies) of the independent time series. The TimeSeries object also has capabilities to deal with missing data in the user-provided time series. As much of these operations occur internally, this notebook is meant to explain users what is happening and how to check for this.
<div class="alert alert-info">
<b>Note</b>
* The standard Pastas data type for a date is the `pandas.Timestamp`.
* The standard Pastas data type for a sequence of dates is the `pandas.DatetimeIndex` with `pandas.Timestamp`.
* The standard Pastas data type for a time series is a `pandas.Series` with a `pandas.DatetimeIndex`
</div>
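As a minimal illustration of these standard data types (the values and dates here are made up), a pandas Series with a DatetimeIndex of Timestamps can be constructed like this:

```python
import pandas as pd

# a regular daily index of pandas Timestamps
idx = pd.date_range(start="2020-01-01", periods=5, freq="D")

# a time series in the Pastas sense: float values on a DatetimeIndex
head = pd.Series([1.0, 1.2, 1.1, 1.3, 1.25], index=idx)

print(type(idx[0]).__name__)       # Timestamp
print(type(head.index).__name__)   # DatetimeIndex
```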
Step1: Different types of time series
Time series data may generally be defined as a set of data values observed at certain times, ordered in a way that the time indices are increasing. Many time series analysis method assume that the time step between the observations is regular, the time series has evenly-spaced observations. These evenly spaced time series may have missing data, but it will still be possible to lay the values on a time-grid with constant time steps.
This is generally also assumed to be the case for the independent time series in hydrological studies. For example, the precipitation records may have some missing data but the precipitation is reported as the total rainfall over one day. In the case of missing data, we may impute a zero (no rain) or the rainfall amount from a nearby measurement station.
Groundwater level time series generally do not share these characteristics with other hydrological time series, and are measured at irregular time intervals. This is especially true for historic time series that were measured by hand. The result is that the measurements cannot be laid on a regular time grid. The figure below graphically shows the difference between the three types of time series.
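The imputation mentioned above for evenly-spaced series with missing data can be sketched with plain pandas; filling gaps with zero is an assumption that is only sensible for rainfall-like data (the values below are made up):

```python
import numpy as np
import pandas as pd

rain = pd.Series([1.2, np.nan, 0.0, 3.4],
                 index=pd.date_range("2000-01-01", periods=4, freq="D"))

# the series already lies on a daily grid; impute zero (no rain) for the missing day
rain_filled = rain.fillna(0.0)
print(rain_filled)
```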
Step2: Independent and dependent time series
We can differentiate between two types of input time series for Pastas models
Step3: Pastas correctly reports the frequency and we can continue with this time series. Note that the input time series thus agrees with all the checks for the time series validation. Let's now introduce a nan-value and see what happens.
Step4: This also works fine. The frequency was inferred (stored as freq_original) and one nan-value was filled up with 0.0. Now we take the same time series, but drop the nan-value.
Step5: The above result is probably not what we want. Pastas could not infer the frequency and therefore resorts to the timestep_weighted_resample method. Documentation for this method is available in utils.py.
If we know the original frequency of the time series, we can tell this to Pastas through the freq_original argument. As we can see below, the user-provided frequency is used.
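To give an idea of what time-step-weighted resampling does, here is a simplified numpy sketch of the concept, where each value is interpreted as the average over the interval ending at its timestamp; this is an illustration only, not the actual Pastas implementation:

```python
import numpy as np

def weighted_resample(t, v, edges):
    """Average interval data (v[i] valid over (t[i], t[i+1]]) onto new bins.

    t     : increasing array of interval boundaries (e.g., days), length n + 1
    v     : values for the n intervals between consecutive boundaries
    edges : increasing array of new bin edges, within t[0]..t[-1]
    """
    out = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        lo, hi = edges[k], edges[k + 1]
        w_sum, acc = 0.0, 0.0
        for i in range(len(v)):
            # overlap of source interval (t[i], t[i+1]] with target bin (lo, hi]
            w = max(0.0, min(t[i + 1], hi) - max(t[i], lo))
            acc += w * v[i]
            w_sum += w
        out[k] = acc / w_sum
    return out

# two irregular intervals of lengths 1 and 3 days, averaged onto two 2-day bins
t = np.array([0.0, 1.0, 4.0])
v = np.array([4.0, 2.0])
print(weighted_resample(t, v, np.array([0.0, 2.0, 4.0])))  # [3., 2.]
```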
Step6: The above example shows how to obtain the same or different result with four different methods. Some of these methods require good knowledge about the TimeSeries object and how it processes your time series. It is often preferred to provide Pastas with a better initial time series by resampling it yourself. This has the additional benefit that you are interacting more closely with the data. Most of the examples also follow this pattern.
<div class="alert alert-info">
<b>Best practice</b>
Try and modify your original time series such that Pastas returns a message that it was able to infer the frequency from the time series itself
Step7: Each column name is a valid option for the settings argument. The rows show the settings that may be chosen for changing the original time series. Once a TimeSeries is created, we can access the existing settings as follows
Step8: This settings dictionary now includes the settings used to resample (sample_up, sample_down), extend (fill_before, fill_after), normalize (norm), and fill nans in the time series, as well as dynamic settings such as the start and end date (tmin, tmax), the frequency (freq), and the time offset.
To update these settings, the update_series method is available. For example, if we want to resample the above time series to a 7-day frequency and sum up the values, we can use
Step9: Because the original series are stored in the TimeSeries object as well, it is also possible to go back again. The changes made to the time series always start from the original, validated time series again. For more information on the possible settings, see the API docs for the TimeSeries and update_series method on the documentation website.
An example with a Pastas Model
By now you may be wondering why all these settings exist in the first place. The main reason (apart from validating the user-provided time series) is to change the time step of the simulation of the independent time series. It may also be used to extend the time series in time.
Below we load some time series, visualize them and create a Pastas model with precipitation and evaporation to explain the groundwater level fluctuations. It is generally recommended to plot your time series for a quick visual check of the input data.
Step10: What is the model freq?
The output below shows that the time series have frequencies of freq=D. The fit report also shows a frequency of freq=D. The frequency reported in the fit_report is the time step of the simulation for the independent time series, and is internally passed on to the stressmodels. The user-provided independent time series (the stresses) are stored in the stressmodel object and can be accessed as follows.
Step11: If we want to change the resample method, for example to sum the precipitation and evaporation when sampling down (e.g., daily to weekly), we may do the following
Step12: After changing the methods for sampling down, we now solve the model with a simulation time step of 14 days. The precipitation and evaporation are then summed up over 14 day intervals, before being translated to a groundwater fluctuation using a respons function.
Step13: Another method to obtain the settings of the time series used in a stressmodel is as follows | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pastas as ps
ps.show_versions()
Explanation: Time series in Pastas
R.A. Collenteur, University of Graz, 2020
Time series are at the heart of time series analysis, and therefore need to be considered carefully when dealing with time series models. In this notebook more background information is provided on important characteristics of time series and how these may influence your modeling results. In general, Pastas depends heavily on Pandas for dealing with time series, but adds capabilities to deal with irregular time series and missing data.
All time series should be provided to Pastas as pandas.Series with a pandas.DatetimeIndex. Internally these time series are stored in a pastas.TimeSeries object. The goal of this object is to validate the user-provided time series and enable resampling (changing frequencies) of the independent time series. The TimeSeries object also has capabilities to deal with missing data in the user-provided time series. As many of these operations occur internally, this notebook is meant to explain to users what is happening and how to check for this.
<div class="alert alert-info">
<b>Note</b>
* The standard Pastas data type for a date is the `pandas.Timestamp`.
* The standard Pastas data type for a sequence of dates is the `pandas.DatetimeIndex` with `pandas.Timestamp`.
* The standard Pastas data type for a time series is a `pandas.Series` with a `pandas.DatetimeIndex`
</div>
End of explanation
regular = pd.Series(index=pd.date_range("2000-01-01", "2000-01-10", freq="D"),
data=np.ones(10))
missing_data = regular.copy()
missing_data.loc[["2000-01-03", "2000-01-08"]] = np.nan
index = [t + pd.Timedelta(np.random.rand()*24, unit="H") for t in missing_data.index]
irregular = missing_data.copy()
irregular.index = index
fig, axes = plt.subplots(3,1, figsize=(6, 5), sharex=True, sharey=True)
regular.plot(ax=axes[0], linestyle=" ", marker="o", x_compat=True)
missing_data.plot(ax=axes[1], linestyle=" ", marker="o", x_compat=True)
irregular.plot(ax=axes[2], linestyle=" ", marker="o", x_compat=True)
for i, name in enumerate(["(a) Regular time steps", "(b) Missing Data", "(c) Irregular time steps"]):
axes[i].grid()
axes[i].set_title(name)
plt.tight_layout()
Explanation: Different types of time series
Time series data may generally be defined as a set of data values observed at certain times, ordered in a way that the time indices are increasing. Many time series analysis method assume that the time step between the observations is regular, the time series has evenly-spaced observations. These evenly spaced time series may have missing data, but it will still be possible to lay the values on a time-grid with constant time steps.
This is generally also assumed to be the case for the independent time series in hydrological studies. For example, the precipitation records may have some missing data but the precipitation is reported as the total rainfall over one day. In the case of missing data, we may impute a zero (no rain) or the rainfall amount from a nearby measurement station.
Groundwater level time series generally do not share these characteristics with other hydrological time series, and are measured at irregular time intervals. This is especially true for historic time series that were measured by hand. The result is that the measurements cannot be laid on a regular time grid. The figure below graphically shows the difference between the three types of time series.
End of explanation
rain = pd.read_csv('../examples/data/rain_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
ps.TimeSeries(rain, settings="prec")
Explanation: Independent and dependent time series
We can differentiate between two types of input time series for Pastas models: the dependent and independent time series. The dependent time series are those that we want to explain (e.g., the groundwater levels) and the independent time series are those that we use to explain the dependent time series (e.g., precipitation or evaporation). The requirements for these time series are different:
The dependent time series may be of any kind: regular, missing data or irregular.
The independent time series has to have regular time steps.
In practice, this means that the time series provided to pastas.Model may be of any kind, and that the time series used by the stressmodels (e.g., pastas.RerchargeModel) need to have regular time steps. The regular time steps are required to simulate contributions to the groundwater level fluctuations. As there are virtually no restrictions on the dependent time series, the remainder of this notebook will discuss primarily the independent time series.
How does the TimeSeries object validate a time series?
To ensure that a time series can be used for simulation a number of things are checked and changed:
Make sure the values are floats. Values are changed to dtype=float if not.
Make sure the index is a pandas.DatetimeIndex. Index is changed if not.
Make sure the timestamps in the index are increasing. Index is sorted if not.
Make sure there are no nan-values at the start and end of a time series.
Determine the frequency of the time series.
Make sure there are no duplicate indices. Values are averaged if this is the case.
Remove or fill up nan-values, depending on the settings.
For each of these steps an INFO message will be returned by Pastas to inform the user if a change is made. The first four steps generally do not have a large impact and are there to prevent some basic issues. Preferably, no changes are reported.
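A rough, pandas-only sketch of checks 1-6 above (an illustration of the idea, not Pastas' actual code):

```python
import numpy as np
import pandas as pd

def validate_series(series):
    """Illustrative version of the validation checks 1-6 listed above."""
    s = series.astype(float)                  # 1. make sure the values are floats
    s.index = pd.DatetimeIndex(s.index)       # 2. make sure the index is a DatetimeIndex
    s = s.sort_index()                        # 3. make sure the timestamps are increasing
    s = s.groupby(level=0).mean()             # 6. average values on duplicate indices
    s = s.loc[s.first_valid_index():s.last_valid_index()]  # 4. no nans at start/end
    freq = pd.infer_freq(s.index)             # 5. try to determine the frequency
    return s, freq

raw = pd.Series([1.0, 2.0, 3.0, np.nan],
                index=["2000-01-02", "2000-01-01", "2000-01-03", "2000-01-04"])
clean, freq = validate_series(raw)
print(freq)  # 'D'
```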
Frequency of the input data
Pastas tries to determine the frequency in step 5, and will always report the result. It is generally good practice to double-check if the reported frequency agrees with what you know about the time series. Pastas will also report if no frequency can be inferred. If no frequency is reported, there is probably something wrong and the user should either fix the input time series or provide Pastas with more information.
Below we consider a time series with precipitation data, measured every day. We will use settings="prec" as a shortcut for the settings to fill nans and resample. We will come back to those settings later.
End of explanation
rain["1989-01-01"] = np.nan
ps.TimeSeries(rain, settings="prec")
Explanation: Pastas correctly reports the frequency and we can continue with this time series. Note that the input time series thus agrees with all the checks for the time series validation. Let's now introduce a nan-value and see what happens.
End of explanation
ps.TimeSeries(rain.dropna(), settings="prec")
Explanation: This also works fine. The frequency was inferred (stored as freq_original) and one nan-value was filled up with 0.0. Now we take the same time series, but drop the nan-value.
End of explanation
rain = pd.read_csv('../examples/data/rain_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
rain["1989-01-01"] = np.nan
ps.TimeSeries(rain.dropna(), settings="prec", freq_original="D")
Explanation: The above result is probably not what we want. Pastas could not infer the frequency and therefore resorts to the timestep_weighted_resample method. Documentation for this method is available in utils.py.
If we know the original frequency of the time series, we can tell this to Pastas through the freq_original argument. As we can see below, the user-provided frequency is used.
End of explanation
pd.DataFrame.from_dict(ps.rcParams["timeseries"])
Explanation: The above example shows how to obtain the same or different result with four different methods. Some of these methods require good knowledge about the TimeSeries object and how it processes your time series. It is often preferred to provide Pastas with a better initial time series by resampling it yourself. This has the additional benefit that you are interacting more closely with the data. Most of the examples also follow this pattern.
<div class="alert alert-info">
<b>Best practice</b>
Try and modify your original time series such that Pastas returns a message that it was able to infer the frequency from the time series itself: **INFO: Inferred frequency for time series rain: freq=D**
</div>
Time series settings
In the examples above we used the settings keyword when creating the TimeSeries. This is a shortcut method to select a number of settings from a predefined set of options. These predefined options can be accessed through ps.rcParams["timeseries"]:
End of explanation
ts = ps.TimeSeries(rain, settings="prec")
ts.settings
Explanation: Each column name is a valid option for the settings argument. The rows show the settings that may be chosen for changing the original time series. Once a TimeSeries is created, we can access the existing settings as follows:
End of explanation
ts.update_series(freq="7D", sample_down="sum")
Explanation: This settings dictionary now includes the settings used to resample (sample_up, sample_down), extend (fill_before, fill_after), normalize (norm), and fill nans in the time series, as well as dynamic settings such as the start and end date (tmin, tmax), the frequency (freq), and the time offset.
To update these settings, the update_series method is available. For example, if we want to resample the above time series to a 7-day frequency and sum up the values, we can use:
End of explanation
head = pd.read_csv("../examples/data/B32C0639001.csv", parse_dates=['date'],
index_col='date', squeeze=True)
rain = pd.read_csv('../examples/data/rain_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
evap = pd.read_csv('../examples/data/evap_nb1.csv', parse_dates=['date'],
index_col='date', squeeze=True)
fig, axes = plt.subplots(3,1, figsize=(10,6), sharex=True)
head.plot(ax=axes[0], x_compat=True, linestyle=" ", marker=".")
evap.plot(ax=axes[1], x_compat=True)
rain.plot(ax=axes[2], x_compat=True)
axes[0].set_ylabel("Head [m]")
axes[1].set_ylabel("Evap [mm/d]")
axes[2].set_ylabel("Rain [mm/d]")
plt.xlim("1985", "2005");
ml = ps.Model(head)
rch = ps.rch.Linear()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Gamma, name="rch")
ml.add_stressmodel(rm)
ml.solve(noise=True, tmin="1990", report="basic")
Explanation: Because the original series are stored in the TimeSeries object as well, it is also possible to go back again. The changes made to the time series always start from the original, validated time series again. For more information on the possible settings, see the API docs for the TimeSeries and update_series method on the documentation website.
An example with a Pastas Model
By now you may be wondering why all these settings exist in the first place. The main reason (apart from validating the user-provided time series) is to change the time step of the simulation of the independent time series. It may also be used to extend the time series in time.
Below we load some time series, visualize them and create a Pastas model with precipitation and evaporation to explain the groundwater level fluctuations. It is generally recommended to plot your time series for a quick visual check of the input data.
End of explanation
ml.stressmodels["rch"].stress
Explanation: What is the model freq?
The output below shows that the time series have frequencies of freq=D. The fit report also shows a frequency of freq=D. The frequency reported in the fit_report is the time step of the simulation for the independent time series, and is internally passed on to the stressmodels. The user-provided independent time series (the stresses) are stored in the stressmodel object and can be accessed as follows.
End of explanation
for stress in ml.stressmodels["rch"].stress:
stress.update_series(sample_down="sum")
Explanation: If we want to change the resample method, for example to sum the precipitation and evaporation when sampling down (e.g., daily to weekly), we may do the following:
End of explanation
ml.settings
ml.solve(freq="14D", tmin="1980", report="basic")
ml.plots.results(figsize=(10,6), tmin="1970");
ml.stressmodels["rch"].stress[1].update_series(tmin="1960")
ml.stressmodels["rch"].stress[1].settings
Explanation: After changing the methods for sampling down, we now solve the model with a simulation time step of 14 days. The precipitation and evaporation are then summed up over 14 day intervals, before being translated to a groundwater fluctuation using a respons function.
End of explanation
ml.get_stressmodel_settings("rch")
Explanation: Another method to obtain the settings of the time series used in a stressmodel is as follows:
End of explanation |
8,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: OIS Data & Discounting
We start by importing OIS term structure data (source
Step2: Next we replace the year fraction index by a DatetimeIndex.
Step3: Let us have a look at the most current data, i.e. the term structure, of the data set.
Step4: This data is used to instantiate a deterministic_short_rate model for risk-neutral discounting purposes.
Step5: Libor Market Data
We want to model a 3 month Libor-based interest rate swap. To this end, we need Libor term structure data, i.e. forward rates in this case (source
Step6: And the short end of the Libor term structure visualized.
Step7: Model Calibration
Next, equipped with the Libor data, we calibrate the square-root diffusion short rate model. A bit of data preparation
Step8: A mean-squared error (MSE) function to be minimized during calibration.
Step9: And the calibration itself.
Step10: The optimal parameters (kappa, theta, sigma) are
Step11: The model fit is not too bad in this case.
Step12: Floating Rate Modeling
The optimal parameters from the calibration are used to model the floating rate (3m Libor rate).
Step13: Let us have a look at some simulated rate paths.
Step14: Interest Rate Swap
Finally, we can model the interest rate swap itself.
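Before valuing the swap by simulation below, it may help to recall the textbook present value of a plain-vanilla payer swap, PV = sum over periods of N * tau * (F_i - K) * D(t_i). A deterministic sketch with made-up forward rates and discount factors (not the simulated Libor paths used by DX Analytics):

```python
def payer_swap_pv(notional, fixed_rate, fwd_rates, discounts, tau=0.5):
    """PV of a payer IRS (receive float, pay fixed) with accrual fraction tau."""
    return sum(notional * tau * (f - fixed_rate) * d
               for f, d in zip(fwd_rates, discounts))

# two semi-annual periods with assumed forwards and discount factors
pv = payer_swap_pv(1_000_000, 0.01,
                   fwd_rates=[0.012, 0.015],
                   discounts=[0.995, 0.989])
print(round(pv, 2))  # 3467.5
```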
Modeling
First, the market environment with all the parameters needed.
Step15: The instantiation of the valuation object.
Step16: Valuation
The present value of the interest rate swap given the assumptions made, in particular the fixed rate.
Step17: You can also generate a full output of all present values per simulation path. | Python Code:
import dx
import datetime as dt
import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Interest Rate Swaps
Very nascent.
Interest rate swaps are a first step towards including rate-sensitive instruments in the modeling and valuation spectrum of DX Analytics. The model used in the following is the square-root diffusion process by Cox-Ingersoll-Ross (1985). Data used are UK/London OIS and Libor rates.
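The square-root diffusion follows dr_t = kappa * (theta - r_t) dt + sigma * sqrt(r_t) dW_t. A minimal full-truncation Euler simulation sketch, independent of the dx implementation (parameter values are made up):

```python
import numpy as np

def simulate_cir(r0, kappa, theta, sigma, T, steps, n_paths, seed=42):
    """Full-truncation Euler scheme for the CIR square-root diffusion."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.full((steps + 1, n_paths), float(r0))
    for t in range(1, steps + 1):
        rp = np.maximum(r[t - 1], 0.0)       # truncate so the square root stays real
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        r[t] = r[t - 1] + kappa * (theta - rp) * dt + sigma * np.sqrt(rp) * dw
    return r

sims = simulate_cir(r0=0.02, kappa=1.0, theta=0.05, sigma=0.1, T=5.0,
                    steps=250, n_paths=10000)
print(sims[-1].mean())  # should be close to theta for large kappa * T
```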
End of explanation
# UK OIS Spot Rates Yield Curve
oiss = pd.read_excel('data/ukois09.xls', 'oiss')
# use years as index
oiss = oiss.set_index('years')
# del oiss['years']
# only date information for columns, no time
oiss.columns = [d.date() for d in oiss.columns]
oiss.tail()
Explanation: OIS Data & Discounting
We start by importing OIS term structure data (source: http://www.bankofengland.co.uk) for risk-free discounting. We also adjust the data structure somewhat for our purposes.
End of explanation
# generate time index given input data
# starting date + 59 months
date = oiss.columns[-1]
index = pd.date_range(date, periods=60, freq='M') # , tz='GMT')
index = [d.replace(day=date.day) for d in index]
index = pd.DatetimeIndex(index)
oiss.index = index
Explanation: Next we replace the year fraction index by a DatetimeIndex.
End of explanation
oiss.iloc[:, -1].plot(figsize=(10, 6))
Explanation: Let us have a look at the most current data, i.e. the term structure, of the data set.
End of explanation
# generate deterministic short rate model based on UK OIS curve
ois = dx.deterministic_short_rate('ois', zip(oiss.index, oiss.iloc[:, -1].values / 100))
# example dates and corresponding discount factors
dr = pd.date_range('2015-1', periods=4, freq='6m').to_pydatetime()
ois.get_discount_factors(dr)[::-1]
Explanation: This data is used to instantiate a deterministic_short_rate model for risk-neutral discounting purposes.
End of explanation
# UK Libor foward rates
libf = pd.read_excel('data/ukblc05.xls', 'fwds')
# use years as index
libf = libf.set_index('years')
# only date information for columns, no time
libf.columns = [d.date() for d in libf.columns]
libf.tail()
# generate time index given input data
# starting date + 59 months
date = libf.columns[-1]
index = pd.date_range(date, periods=60, freq='M') # , tz='GMT')
index = [d.replace(day=date.day) for d in index]
index = pd.DatetimeIndex(index)
libf.index = index
Explanation: Libor Market Data
We want to model a 3 month Libor-based interest rate swap. To this end, we need Libor term structure data, i.e. forward rates in this case (source: http://www.bankofengland.co.uk), to calibrate the valuation to. The data importing and adjustments are the same as before.
End of explanation
libf.iloc[:, -1].plot(figsize=(10, 6))
Explanation: And the short end of the Libor term structure visualized.
End of explanation
t = libf.index.to_pydatetime()
f = libf.iloc[:, -1].values / 100
initial_value = 0.005
Explanation: Model Calibration
Next, equipped with the Libor data, we calibrate the square-root diffusion short rate model. A bit of data preparation:
End of explanation
def srd_forward_error(p0):
global initial_value, f, t
if p0[0] < 0 or p0[1] < 0 or p0[2] < 0:
return 100
f_model = dx.srd_forwards(initial_value, p0, t)
MSE = np.sum((f - f_model) ** 2) / len(f)
return MSE
Explanation: A mean-squared error (MSE) function to be minimized during calibration.
End of explanation
from scipy.optimize import fmin
opt = fmin(srd_forward_error, (1.0, 0.7, 0.2),
maxiter=1000, maxfun=1000)
Explanation: And the calibration itself.
End of explanation
opt
Explanation: The optimal parameters (kappa, theta, sigma) are:
End of explanation
plt.figure(figsize=(10, 6))
plt.plot(t, f, label='market forward rates')
plt.plot(t, dx.srd_forwards(initial_value, opt, t), 'r.', label='model forward rates')
plt.gcf().autofmt_xdate(); plt.legend(loc=0)
Explanation: The model fit is not too bad in this case.
End of explanation
# market environment
me_srd = dx.market_environment('me_srd', dt.datetime(2014, 10, 16))
# square-root diffusion
me_srd.add_constant('initial_value', 0.02)
me_srd.add_constant('kappa', opt[0])
me_srd.add_constant('theta', opt[1])
me_srd.add_constant('volatility', opt[2])
me_srd.add_curve('discount_curve', ois)
# OIS discounting object
me_srd.add_constant('currency', 'EUR')
me_srd.add_constant('paths', 10000)
me_srd.add_constant('frequency', 'w')
me_srd.add_constant('starting_date', me_srd.pricing_date)
me_srd.add_constant('final_date', dt.datetime(2020, 12, 31))
srd = dx.square_root_diffusion('srd', me_srd)
Explanation: Floating Rate Modeling
The optimal parameters from the calibration are used to model the floating rate (3m Libor rate).
End of explanation
paths = srd.get_instrument_values()
plt.figure(figsize=(10, 6))
plt.plot(srd.time_grid, paths[:, :6])
Explanation: Let us have a look at some simulated rate paths.
End of explanation
# market environment for the IRS
me_irs = dx.market_environment('irs', me_srd.pricing_date)
me_irs.add_constant('fixed_rate', 0.01)
me_irs.add_constant('trade_date', me_srd.pricing_date)
me_irs.add_constant('effective_date', me_srd.pricing_date)
me_irs.add_constant('payment_date', dt.datetime(2014, 12, 27))
me_irs.add_constant('payment_day', 27)
me_irs.add_constant('termination_date', me_srd.get_constant('final_date'))
me_irs.add_constant('currency', 'EUR')
me_irs.add_constant('notional', 1000000)
me_irs.add_constant('tenor', '6m')
me_irs.add_constant('counting', 'ACT/360')
# discount curve from mar_env of floating rate
Explanation: Interest Rate Swap
Finally, we can model the interest rate swap itself.
Modeling
First, the market environment with all the parameters needed.
End of explanation
irs = dx.interest_rate_swap('irs', srd, me_irs)
Explanation: The instantiation of the valuation object.
End of explanation
%time irs.present_value(fixed_seed=True)
Explanation: Valuation
The present value of the interest rate swap given the assumptions made, in particular the fixed rate.
End of explanation
irs.present_value(full=True).iloc[:, :6]
Explanation: You can also generate a full output of all present values per simulation path.
End of explanation |
8,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DKRZ PyNGL example
- Filled circles instead of grid cells; the size depends on a quality value.
Description
Step1: Global variables
Step2: Create dummy data and coordinates
Step3: Open graphics output (workstation)
Step4: Set color map, levels, and which color indices to be used
Step5: Set contour plot resources for RasterFill
Step6: Draw the contour plot and advance the first frame
Step7: Add values to the grid cells, draw the contour plot and advance the second frame
Step8: Create the third plot using a map plot and add grid lines of the data region
Step9: Now, create the marker size for each cell - marker sizes for quality 1-4
Step10: Now, create the color array for each cell from temp1d
Step11: Add a labelbar to the plot
Step12: Add a legend to the plot
Step13: Add title and center string to the plot
Step14: Draw the third plot and advance the frame
Step15: Show the plots in this notebook | Python Code:
from __future__ import print_function
import numpy as np
import Ngl,Nio
Explanation: DKRZ PyNGL example
- Filled circles instead of grid cells; the size depends on a quality value.
Description:
Draw two plots: the first is a raster contour plot and the second shows
the data using filled circles that are sized by a quality variable.
Effects illustrated:
o Creating a contour cellFill plot
o Using markers
o Using dummy data
o Create a legend
o Create a labelbar
o Add text
Author: Karin Meier-Fleischer
Load modules
End of explanation
VALUES = True #-- turn on/off value annotation of first plot
GRID = True #-- turn on/off the data grid lines of second plot
Explanation: Global variables
End of explanation
minlat, maxlat = 47.0, 55.0 #-- minimum and maximum latitude of map
minlon, maxlon = 5.0, 16.0 #-- minimum and maximum longitude of map
#-- generate dummy data and coordinates
nlat, nlon = 16, 22
lat = np.linspace(minlat, maxlat, num=nlat)
lon = np.linspace(minlon, maxlon, num=nlon)
#-- generate dummy data with named dimensions
tempmin, tempmax, tempint = -2.0, 2.0, 0.5
temp = np.random.uniform(tempmin,tempmax,[nlat,nlon])
temp1d = temp.flatten()
ncells = len(temp1d)
#-- generate random dummy quality data
minqual, maxqual = 1, 4
quality = np.floor(np.random.uniform(minqual,maxqual+0.5,[nlat,nlon])).astype(int)
quality1d = quality.flatten()
Explanation: Create dummy data and coordinates
End of explanation
wkres = Ngl.Resources()
wkres.wkBackgroundColor = "gray85" #-- set background color to light gray
wks = Ngl.open_wks('png','plot_quality_per_cell',wkres)
Explanation: Open graphics output (workstation)
End of explanation
#-- set color map
colmap = "BlueDarkRed18"
Ngl.define_colormap(wks,colmap)
#-- contour levels and color indices
cmap = Ngl.retrieve_colormap(wks)
ncmap = len(cmap[:,0])
levels = np.arange(tempmin,tempmax+tempint,tempint)
nlevels = len(levels)
colors = np.floor(np.linspace(2,ncmap-1,nlevels+1)).astype(int)
ncolors = len(colors)
Explanation: Set color map, levels, and which color indices to be used
End of explanation
res = Ngl.Resources()
res.nglDraw = False
res.nglFrame = False
res.nglMaximize = False #-- don't maximize plot output, yet
res.vpXF = 0.09 #-- viewport x-position
res.vpYF = 0.95 #-- viewport y-position
res.vpWidthF = 0.8 #-- viewport width
res.vpHeightF = 0.8 #-- viewport height
res.cnFillMode = "RasterFill" #-- use raster fill for contours
res.cnFillOn = True #-- filled contour areas
res.cnLinesOn = False
res.cnLineLabelsOn = False
res.cnInfoLabelOn = False
res.cnLevelSelectionMode = "ExplicitLevels" #-- set manual data levels
res.cnLevels = levels
res.cnMonoFillColor = False
res.cnFillColors = colors
res.lbBoxMinorExtentF = 0.15 #-- decrease height of labelbar
res.lbOrientation = "horizontal" #-- horizontal labelbar
res.mpOutlineBoundarySets = "National" #-- draw national map outlines
res.mpOceanFillColor = "gray90"
res.mpLandFillColor = "gray75"
res.mpInlandWaterFillColor = "gray75"
res.mpDataBaseVersion = "MediumRes" #-- alias to Ncarg4_1
res.mpDataSetName = "Earth..4" #-- higher map resolution
res.mpGridAndLimbOn = False
res.mpLimitMode = "LatLon" #-- map limit mode
res.mpMinLatF = minlat-1.0
res.mpMaxLatF = maxlat+1.0
res.mpMinLonF = minlon-1.0
res.mpMaxLonF = maxlon+1.0
res.sfXArray = lon
res.sfYArray = lat
Explanation: Set contour plot resources for RasterFill
End of explanation
contour = Ngl.contour_map(wks,temp,res)
Ngl.draw(contour)
Ngl.frame(wks)
Explanation: Draw the contour plot and advance the first frame
End of explanation
if(VALUES):
txres = Ngl.Resources()
txres.gsFontColor = "black"
txres.txFontHeightF = 0.01
for j in range(0,nlat):
for i in range(0,nlon):
m = i+j
txid = "txid"+str(m)
txid = Ngl.add_text(wks, contour,""+str(quality[j,i]),lon[i],lat[j],txres)
#-- draw the second plot
Ngl.draw(contour)
Ngl.frame(wks)
Explanation: Add values to the grid cells, draw the contour plot and advance the second frame
End of explanation
plot = Ngl.map(wks,res)
#-----------------------------------------------------------------------------------
#-- draw grid lines of data region if set by GRID global variable
#-----------------------------------------------------------------------------------
if(GRID):
gres = Ngl.Resources()
gres.gsLineColor = "black"
for i in range(0,nlat):
lx = [minlon,maxlon]
ly = [lat[i],lat[i]]
lid = "lidy"+str(i)
lid = Ngl.add_polyline(wks,plot,lx,ly,gres) #-- add grid lines to plot
for j in range(0,nlon):
lx = [lon[j],lon[j]]
ly = [minlat,maxlat]
lid = "lidx"+str(j)
lid = Ngl.add_polyline(wks,plot,lx,ly,gres) #-- add grid lines to plot
Ngl.draw(plot)
Explanation: Create the third plot using a map plot and add grid lines of the data region
End of explanation
marker_sizes = np.linspace(0.01,0.04,4)
ms_array = np.ones(ncells,float) #-- create array for marker sizes depending
#-- on quality1d
for ll in range(minqual,maxqual+1):
indsize = np.argwhere(quality1d == ll)
ms_array[indsize] = marker_sizes[ll-1]
#-- marker resources
plmres = Ngl.Resources()
plmres.gsMarkerIndex = 16 #-- filled circles
Explanation: Now, create the marker size for each cell - marker sizes for quality 1-4
End of explanation
gscolors = np.zeros(ncells,int)
#-- set color for data less than given minimum value
vlow = np.argwhere(temp1d < levels[0]) #-- get the indices of values less levels(0)
gscolors[vlow] = colors[0] #-- choose color
#-- set colors for all cells in between tempmin and tempmax
for i in range(1,nlevels):
vind = np.argwhere(np.logical_and(temp1d >= levels[i-1],temp1d < levels[i])) #-- get the indices of 'middle' values
gscolors[vind] = colors[i] #-- choose the colors
#-- set color for data greater than given maximum
vhgh = np.argwhere(temp1d > levels[nlevels-1]) #-- get indices of values greater levels(nl)
gscolors[vhgh] = colors[ncolors-1] #-- choose color
#-- add the marker to the plot
n=0
for ii in range(0,nlat):
for jj in range(0,nlon):
k = jj+ii
plmres.gsMarkerSizeF = ms_array[n] #-- marker size
plmres.gsMarkerColor = gscolors[n] #-- marker color
plm = "plmark"+str(k)
plm = Ngl.add_polymarker(wks,plot,lon[jj],lat[ii],plmres) #-- add marker to plot
n = n + 1
Explanation: Now, create the color array for each cell from temp1d
End of explanation
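The three argwhere passes above implement a binning of temp1d into color indices; numpy's digitize does the same in one call. A minimal sketch with illustrative levels, colors, and cell values (not taken from the actual data):

```python
import numpy as np

levels = np.array([0.0, 5.0, 10.0, 15.0])        # illustrative contour levels
colors = np.array([10, 20, 30, 40, 50])          # one color index per bin (len(levels) + 1)
temp1d = np.array([-2.0, 3.0, 7.5, 12.0, 99.0])  # illustrative cell values

# np.digitize gives, for each value, the index of the bin it falls into,
# which selects the matching color index in one vectorised step
gscolors = colors[np.digitize(temp1d, levels)]
print(gscolors)
```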
vpx = Ngl.get_float(plot,"vpXF") #-- retrieve viewport x-position
vpy = Ngl.get_float(plot,"vpYF") #-- retrieve viewport y-position
vpw = Ngl.get_float(plot,"vpWidthF") #-- retrieve viewport width
vph = Ngl.get_float(plot,"vpHeightF") #-- retrieve viewport height
lbx, lby = vpx, vpy-vph-0.07
lbres = Ngl.Resources()
lbres.vpWidthF = vpw #-- width of labelbar
lbres.vpHeightF = 0.08 #-- height of labelbar
lbres.lbOrientation = "horizontal" #-- labelbar orientation
lbres.lbLabelFontHeightF = 0.014 #-- labelbar label font size
lbres.lbAutoManage = False #-- we control label bar
lbres.lbFillColors = colors #-- box fill colors
lbres.lbPerimOn = False #-- turn off labelbar perimeter
lbres.lbMonoFillPattern = True #-- turn on solid pattern
lbres.lbLabelAlignment = "InteriorEdges" #-- write labels below box edges
#-- create the labelbar
pid = Ngl.labelbar_ndc(wks, ncolors, list(levels.astype('str')), lbx, lby, lbres)
Explanation: Add a labelbar to the plot
End of explanation
legres = Ngl.Resources() #-- legend resources
legres.gsMarkerIndex = 16 #-- filled dots
legres.gsMarkerColor = "gray50" #-- legend marker color
txres = Ngl.Resources() #-- text resources
txres.txFontColor = "black"
txres.txFontHeightF = 0.01
txres.txFont = 30
x, y, ik = 0.94, 0.47, 0
for il in range(minqual,maxqual+1): #-- include the highest quality level, matching marker_sizes
legres.gsMarkerSizeF = marker_sizes[ik]
Ngl.polymarker_ndc(wks, x, y, legres)
Ngl.text_ndc(wks, ""+str(il), x+0.03, y, txres)
y = y + 0.05
ik = ik + 1
txres.txFontHeightF = 0.012
Ngl.text_ndc(wks,"Quality",x,y,txres) #-- legend title
Explanation: Add a legend to the plot
End of explanation
xpos = (vpw/2)+vpx
title1 = "Draw data values at lat/lon location as circles"
title2 = "the size is defined by the quality variable"
txres.txFont = 21
txres.txFontHeightF = 0.018
Ngl.text_ndc(wks, title1, xpos, 0.96, txres)
Ngl.text_ndc(wks, title2, xpos, 0.935, txres)
#-----------------------------------------------------------------------------------
#-- add center string
#-----------------------------------------------------------------------------------
y = vpy+0.035 #-- y-position
txcenter = "Quality: min = "+str(quality.min())+" max = "+str(quality.max())
txres.txFontHeightF = 0.008 #-- font size for string
txres.txJust = "CenterCenter" #-- text justification
Ngl.text_ndc(wks, txcenter, 0.5, y, txres) #-- add text to wks
Explanation: Add title and center string to the plot
End of explanation
Ngl.draw(plot)
Ngl.frame(wks)
Explanation: Draw the third plot and advance the frame
End of explanation
from IPython.display import Image
Image(filename='plot_quality_per_cell.000001.png')
Image(filename='plot_quality_per_cell.000002.png')
Image(filename='plot_quality_per_cell.000003.png')
Explanation: Show the plots in this notebook
End of explanation |
8,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name="inputs_real")
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name="inputs_z")
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
    out: the tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
zshape = z.get_shape().as_list()
# print('zshape', zshape)
gw1 = tf.get_variable("weight", [zshape[1], n_units],
initializer=tf.random_normal_initializer(stddev=0.1))
gb1 = tf.get_variable("bias", [n_units],
initializer=tf.constant_initializer(0.0))
h1 = tf.matmul(z, gw1) + gb1
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
# Logits and tanh output
gwo = tf.get_variable("output_weight", [n_units, out_dim],
initializer=tf.random_normal_initializer(stddev=0.1))
gbo = tf.get_variable("output_bias", [out_dim],
initializer=tf.constant_initializer(0.0))
logits = tf.matmul(h1, gwo) + gbo
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
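The leaky ReLU described above is just an element-wise maximum, which is easy to sanity-check outside of TensorFlow. A minimal numpy sketch (not part of the notebook's graph):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): identity for positive inputs,
    # a small slope alpha for negative inputs
    return np.maximum(alpha * x, x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # negative input scaled by alpha
```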
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
xshape = x.get_shape().as_list()
# print('xshape', xshape)
dw1 = tf.get_variable("weight", [xshape[1], n_units],
initializer=tf.random_normal_initializer(stddev=0.1))
db1 = tf.get_variable("bias", [n_units],
initializer=tf.constant_initializer(0.0))
h1 = tf.matmul(x, dw1) + db1
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
dwo = tf.get_variable("output_weight", [n_units, 1],
initializer=tf.random_normal_initializer(stddev=0.1))
dbo = tf.get_variable("output_bias", [1],
initializer=tf.constant_initializer(0.0))
logits = tf.matmul(h1, dwo) + dbo
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size, False, alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, False, alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, True, alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
labels_r = tf.ones_like(d_logits_real)*(1-smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=labels_r))
labels_f = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=labels_f))
d_loss = d_loss_real + d_loss_fake
labels_g = tf.ones_like(d_logits_fake)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=labels_g))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
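The sigmoid cross-entropy and label smoothing described above can be checked with a small numpy sketch; the stable formula below is the one documented for tf.nn.sigmoid_cross_entropy_with_logits, and the logit values here are purely illustrative:

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([2.0, -1.0, 0.5])
smooth = 0.1
real_labels = np.ones_like(logits) * (1 - smooth)  # smoothed labels for real images
fake_labels = np.zeros_like(logits)                # labels for fake images

print(sigmoid_cross_entropy(logits, real_labels).mean())
print(sigmoid_cross_entropy(logits, fake_labels).mean())
```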
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [v for v in t_vars if v.name.startswith("generator")]
d_vars = [v for v in t_vars if v.name.startswith("discriminator")]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
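The name-prefix filtering described above only relies on each variable exposing a .name string, so it can be sketched without TensorFlow at all (Var below is a hypothetical stand-in for a TF variable, and the names are made up):

```python
class Var:
    def __init__(self, name):
        self.name = name  # mimics the .name attribute of a TF variable

t_vars = [Var('generator/weight:0'), Var('generator/bias:0'),
          Var('discriminator/weight:0'), Var('discriminator/bias:0')]

g_vars = [v for v in t_vars if v.name.startswith('generator')]
d_vars = [v for v in t_vars if v.name.startswith('discriminator')]
print(len(g_vars), len(d_vars))  # 2 2
```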
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
8,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Harmonic Oscillator model
This example shows how the Simple Harmonic Oscillator model can be used.
A model for a particle undergoing Newtonian dynamics that experiences a force in proportion to its displacement from an equilibrium position, and, in addition, a friction force. The motion of the particle can be determined by solving a second order ordinary differential equation (from Newton's $F = ma$)
Step1: Parameters are given in the order $(y(0), dy/dt(0), \theta)$. Here, we see that since $\theta > 0$, the oscillations of the particle decay exponentially over time.
Step2: Substituting an exponential solution of the form
Step3: If $\theta > 2$, we get overdamped dynamics, with an increased rate of decay to zero.
Step4: This model also provides sensitivities
Step5: We can plot these sensitivities, to see where the model is sensitive to each of the parameters | Python Code:
import pints
import pints.toy
import matplotlib.pyplot as plt
import numpy as np
model = pints.toy.SimpleHarmonicOscillatorModel()
Explanation: Simple Harmonic Oscillator model
This example shows how the Simple Harmonic Oscillator model can be used.
A model for a particle undergoing Newtonian dynamics that experiences a force in proportion to its displacement from an equilibrium position, and, in addition, a friction force. The motion of the particle can be determined by solving a second order ordinary differential equation (from Newton's $F = ma$):
$$\frac{d^2y}{dt^2} = -y(t) - \theta \frac{dy(t)}{dt}.$$
where $y(t)$ is the particle's displacement and $\theta$ is the frictional force.
End of explanation
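The same ODE can be integrated directly with scipy as an independent check of the model's dynamics; a sketch assuming initial conditions y(0)=1, dy/dt(0)=0 and an illustrative friction value θ=0.15:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sho(t, state, theta):
    # rewrite y'' = -y - theta*y' as a first-order system: y' = v, v' = -y - theta*v
    y, v = state
    return [v, -y - theta * v]

theta = 0.15
sol = solve_ivp(sho, [0, 50], [1.0, 0.0], args=(theta,), max_step=0.1)
print(sol.y[0, -1])  # the displacement has decayed close to zero by t = 50
```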
times = np.linspace(0, 50, 1000)
parameters = model.suggested_parameters()
values = model.simulate(parameters, times)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
Explanation: Parameters are given in the order $(y(0), dy/dt(0), \theta)$. Here, we see that since $\theta > 0$, the oscillations of the particle decay exponentially over time.
End of explanation
parameters = model.suggested_parameters()
values = model.simulate([1, 0, 2], times)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
Explanation: Substituting an exponential solution of the form: $y(t) = Ae^{\lambda t}$ into the governing ODE, we obtain: $\lambda^2 + \theta \lambda + 1=0$, which has solutions:
$$\lambda = (-\theta \pm \sqrt{\theta^2 - 4})/2.$$
As we can see above, if $\theta^2 < 4$, i.e. if $-2<\theta<2$, $\lambda$ has an imaginary part, which causes the solution to oscillate sinusoidally whilst decaying to zero.
If $\theta = 2$, we get critically damped dynamics where the displacement decays exponentially to zero, rather than oscillatory motion.
End of explanation
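The three damping regimes follow directly from the roots of $\lambda^2 + \theta \lambda + 1 = 0$, which numpy can compute numerically; a small check of the claim, with illustrative θ values for each regime:

```python
import numpy as np

for theta in [0.15, 2.0, 5.0]:
    # roots of the characteristic polynomial lambda^2 + theta*lambda + 1 = 0
    lam = np.roots([1.0, theta, 1.0])
    print(theta, lam)  # complex pair, repeated real root, two real roots
```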
parameters = model.suggested_parameters()
values = model.simulate([1, 0, 5], times)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
Explanation: If $\theta > 2$, we get overdamped dynamics, with an increased rate of decay to zero.
End of explanation
values, sensitivities = model.simulateS1(parameters, times)
Explanation: This model also provides sensitivities: derivatives $\frac{\partial y}{\partial p}$ of the output $y$ with respect to the parameters $p$.
End of explanation
plt.figure(figsize=(15,7))
plt.subplot(3, 1, 1)
plt.ylabel(r'$\partial y/\partial y(0)$')
plt.plot(times, sensitivities[:, 0])
plt.subplot(3, 1, 2)
plt.xlabel('t')
plt.ylabel(r'$\partial y/\partial \dot y(0)$')
plt.plot(times, sensitivities[:, 1])
plt.subplot(3, 1, 3)
plt.xlabel('t')
plt.ylabel(r'$\partial y/\partial \theta$')
plt.plot(times, sensitivities[:, 2])
plt.show()
Explanation: We can plot these sensitivities, to see where the model is sensitive to each of the parameters:
End of explanation |
8,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Descriptive statistics
Acknowledging that variables and models are uncertain assumes that we directly or indirectly can describe them through probability distributions.
However for most applications the distribution is a messy entity that on its own is hard to interpret directly.
So instead, we use statistical metrics designed to summarize distribution and to get an intuitive understanding of its statistical properties.
In addition, for each statistical property, there almost always exists an empirical counterpart that works as a best estimate of said statistical property in the scenarios where only data is available.
This is important, as Monte Carlo integration isn't possible without the empirical metrics used to describe the results.
This section takes a look at some popular statistical metrics and compares them to their empirical counterparts.
Expected value
Take for example the most common metric, the expected value function chaospy.E().
This operator works on any distribution
Step1: Its empirical counterpart is the mean function
Step2: The operator can also be used on any polynomial, but would then require the distribution of interest as a second argument
Step3: In the multivariate case, the distribution and the polynomials needs to coincide politically.
E.g.
Step4: Here q0, q1 and q2 correspond to chaospy.Normal(0, 1), chaospy.Uniform(0, 2) and chaospy.Normal(2, 2) respectively.
It is the variable name position and distribution length that matter here, not the shape of the expression whose expected value is taken.
Note also that the model approximations created by e.g. chaospy.fit_regression() and chaospy.fit_quadrature() also are valid polynomials.
Higher order moments
In addition to the expected value, there are also higher-order statistics that work in the same way.
They are listed below with their numpy and scipy empirical counterparts
Step5: Conditional mean
The conditional expected value chaospy.E_cond() is similar to the more conventional chaospy.E(), but differs in that it supports partial conditioning.
In other words it is possible to "freeze" some of the variables and only evaluate the others.
For example
Step6: Sensitivity analysis
Variance-based sensitivity analysis (often referred to as the Sobol method or Sobol indices) is a form of global sensitivity analysis. Working within a probabilistic framework, it decomposes the variance of the output of the model or system into fractions which can be attributed to inputs or sets of inputs. Read more in for example Wikipedia.
In chaospy, the three functions are available
Step7: There are no direct empirical counterparts to these functions, but it is possible to create schemes using for example Saltelli's method.
Percentile
Calculating a closed form percentile of a multivariate polynomial is not feasible.
As such, chaospy does not calculate it.
However, as a matter of convenience, a simple function wrapper chaospy.Perc() that calculates said values using Monte Carlo integration is provided.
For example
Step8: Note that the accuracy of this method is dependent on the number of samples.
Quantity of interest
If you want to interpret the model approximation as a distribution for further second order analysis, this is possible through the chaospy.QoI_Dist.
This is a thin wrapper function that generates samples and passes them to the kernel density estimation class chaospy.GaussianKDE().
It works as follows | Python Code:
import chaospy
uniform = chaospy.Uniform(0, 4)
chaospy.E(uniform)
Explanation: Descriptive statistics
Acknowledging that variables and models are uncertain assumes that we directly or indirectly can describe them through probability distributions.
However for most applications the distribution is a messy entity that on its own is hard to interpret directly.
So instead, we use statistical metrics designed to summarize distribution and to get an intuitive understanding of its statistical properties.
In addition, for each statistical property, there almost always exists an empirical counterpart that works as a best estimate of said statistical property in the scenarios where only data is available.
This is important, as Monte Carlo integration isn't possible without the empirical metrics used to describe the results.
This section takes a look at some popular statistical metrics and compares them to their empirical counterparts.
Expected value
Take for example the most common metric, the expected value function chaospy.E().
This operator works on any distribution:
End of explanation
samples = uniform.sample(1e7)
numpy.mean(samples)
Explanation: Its empirical counterpart is the mean function: $\bar X=\tfrac 1N \sum X_i$.
This function is available as numpy.mean and can be used on samples generated from said distribution:
End of explanation
q0 = chaospy.variable()
chaospy.E(q0**3-1, uniform)
Explanation: The operator can also be used on any polynomial, but would then require the distribution of interest as a second argument:
End of explanation
q0, q1, q2 = chaospy.variable(3)
joint3 = chaospy.J(chaospy.Normal(0, 1), chaospy.Uniform(0, 2), chaospy.Normal(2, 2))
chaospy.E([q0, q1*q2], joint3)
Explanation: In the multivariate case, the distribution and the polynomials need to coincide positionally.
E.g.
End of explanation
chaospy.Corr([q0, q0*q2], joint3)
Explanation: Here q0, q1 and q2 correspond to chaospy.Normal(0, 1), chaospy.Uniform(0, 2) and chaospy.Normal(2, 2) respectively.
It is the variable name position and distribution length that matter here, not the shape of the expression whose expected value is taken.
Note also that the model approximations created by e.g. chaospy.fit_regression() and chaospy.fit_quadrature() also are valid polynomials.
Higher order moments
In addition to the expected value, there are also higher-order statistics that work in the same way.
They are listed below with their numpy and scipy empirical counterparts:
Name | chaospy | numpy or scipy
--- | --- | ---
Variance | chaospy.Var() | numpy.var
Standard deviation| chaospy.Std() | numpy.std
Covariance | chaospy.Cov() | numpy.cov
Correlation | chaospy.Corr() | numpy.corrcoef
Skewness | chaospy.Skew() | scipy.stats.skew
Kurtosis | chaospy.Kurt() | scipy.stats.kurtosis
For example (Pearson's) correlation:
End of explanation
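As a quick sanity check on the empirical counterparts (a numpy-only sketch), large samples from Uniform(0, 4) should reproduce the closed-form moments E = 2 and Var = 4**2/12 = 4/3:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0, 4, size=1_000_000)
mean_est, var_est = samples.mean(), samples.var()
print(mean_est, var_est)   # should be close to 2 and 4/3
```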
chaospy.E_cond([q0, q1*q2], q0, joint3)
chaospy.E_cond([q0, q1*q2], q1, joint3)
chaospy.E_cond([q0, q1*q2], [q1, q2], joint3)
Explanation: Conditional mean
The conditional expected value chaospy.E_cond() is similar to the more conventional chaospy.E(), but differs in that it supports partial conditioning.
In other words it is possible to "freeze" some of the variables and only evaluate the others.
For example:
End of explanation
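The second entry returned above is the polynomial 2*q1, since with independent inputs E[q1*q2 | q1] = q1 * E[q2], and q2 ~ Normal(2, 2) has mean 2. A numpy-only Monte Carlo sketch cross-checks this for one frozen value of q1 (0.7 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
q1_fixed = 0.7                     # arbitrary frozen value of q1
q2 = rng.normal(2, 2, 500_000)     # samples of q2 ~ Normal(2, 2)
cond_mean = (q1_fixed * q2).mean()
print(cond_mean)                   # should be close to 2 * 0.7 = 1.4
```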
chaospy.Sens_m(6*q0+3*q1+q2, joint3)
chaospy.Sens_m2(q0*q1+q1*q2, joint3)
chaospy.Sens_t(6*q0+3*q1+q2, joint3)
Explanation: Sensitivity analysis
Variance-based sensitivity analysis (often referred to as the Sobol method or Sobol indices) is a form of global sensitivity analysis. Working within a probabilistic framework, it decomposes the variance of the output of the model or system into fractions which can be attributed to inputs or sets of inputs. Read more in for example Wikipedia.
In chaospy, the three functions are available:
Name | chaospy function
--- | ---
1. order main | chaospy.Sens_m
2. order main | chaospy.Sens_m2
total order | chaospy.Sens_t
For example:
End of explanation
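For the additive linear model 6*q0 + 3*q1 + q2 above, the first-order indices have a closed form: with independent inputs, Var(Y) is the sum of a_i**2 * Var(Q_i), so S_i = a_i**2 * Var(Q_i) / Var(Y). A numpy sketch of that calculation:

```python
import numpy as np

coeffs = np.array([6.0, 3.0, 1.0])
# variances of N(0,1), U(0,2) and N(2,2) respectively
variances = np.array([1.0, 2.0**2 / 12.0, 2.0**2])
contrib = coeffs**2 * variances
sobol_first = contrib / contrib.sum()
print(sobol_first)   # [36/43, 3/43, 4/43] ~ [0.837, 0.070, 0.093]
```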
chaospy.Perc([q0, q1*q2], [25, 50, 75], joint3, sample=1000, seed=1234)
Explanation: There are no direct empirical counterparts to these functions, but it is possible to create schemes using for example Saltelli's method.
Percentile
Calculating a closed form percentile of a multivariate polynomial is not feasible.
As such, chaospy does not calculate it.
However, as a matter of convenience, a simple function wrapper chaospy.Perc() that calculates said values using Monte Carlo integration is provided.
For example:
End of explanation
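The same Monte Carlo idea can be written out by hand with numpy.percentile; a sketch for the single standard-normal component q0:

```python
import numpy as np

rng = np.random.default_rng(1234)
q0_samples = rng.normal(0, 1, 100_000)
percentiles = np.percentile(q0_samples, [25, 50, 75])
print(percentiles)   # close to [-0.674, 0.0, 0.674] for N(0, 1)
```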
new_dist = chaospy.QoI_Dist(q0*q1+q2, joint3)
new_dist.sample(6, seed=1234).round(6)
Explanation: Note that the accuracy of this method is dependent on the number of samples.
Quantity of interest
If you want to interpret the model approximation as a distribution for further second order analysis, this is possible through the chaospy.QoI_Dist.
This is a thin wrapper function that generates samples and passes them to the kernel density estimation class chaospy.GaussianKDE().
It works as follows:
End of explanation |
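Under the hood this amounts to a kernel density estimate over model samples. A minimal numpy-only sketch of a 1-D Gaussian KDE (bandwidth from Scott's rule; chaospy.GaussianKDE's internals may differ) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(1234)
samples = rng.normal(0, 1, 2_000)              # stand-in model outputs
bw = samples.std() * len(samples) ** (-1 / 5)  # Scott's rule in 1-D

def kde_pdf(x):
    # average a Gaussian kernel centred on every sample
    u = (np.asarray(x)[:, None] - samples) / bw
    return np.exp(-0.5 * u**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

density_at_zero = kde_pdf([0.0])[0]
print(density_at_zero)   # near the N(0,1) density 1/sqrt(2*pi) ~ 0.399
```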
8,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understand TopicSimilarity.json
Step1: Now we see that the values of "source" and "target" in links indicate the indexes of nodes. I haven't figured out what "value" in nodes represents.
Generate a test version of TopicSimilarity.json | Python Code:
# read a test version (w/o JS codes) of TopicSimilarity.json
file = 'testSim.json'
with open(file) as train_file:
dict_train = json.load(train_file)
dict_train
len(dict_train['links']), len(dict_train['nodes'])
links = pd.DataFrame(dict_train['links'])
nodes = pd.DataFrame(dict_train['nodes'])
links.head(2)
links[links.value > 540]
nodes[(nodes.index == 10) | (nodes.index == 18) | (nodes.index == 55) | (nodes.index == 70) | (nodes.index == 73) | (nodes.index == 75)]
Explanation: Understand TopicSimilarity.json
End of explanation
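Since "source" and "target" are node indexes, each link can be resolved back to its node records. A self-contained sketch with hypothetical node names (the real file's node fields may differ):

```python
# Hypothetical d3-style data: link indices point into the nodes list.
nodes = [{"name": "topic_00"}, {"name": "topic_01"}, {"name": "topic_02"}]
links = [{"source": 0, "target": 2, "value": 540.0}]

resolved = [(nodes[lk["source"]]["name"], nodes[lk["target"]]["name"], lk["value"])
            for lk in links]
print(resolved)
```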
plt.scatter(links.source, links.target, alpha=0.5)
plt.title('node id: source to target')
# generate "source" and "target" of 100 links
source = np.random.randint(105 - 10, size=100)
source = np.array(sorted(source))
target = source + np.random.randint(5,10)
source
target
links.value.describe()
# generate 100 random values
value = np.random.normal(175.88, 112.36, size=100)
value = np.array([np.abs(i) for i in value])
value
# to_json returns a string; json.loads parses it back into a list of dicts
# so the combined file below contains a real JSON array, not an escaped string
newlink = json.loads(pd.DataFrame({'source': source, 'target': target, 'value': value}).to_json(orient='records'))
newlink
with open("newlink.json", "w") as outfile:
    json.dump({'nodes': dict_train['nodes'], 'links': newlink}, outfile)
Explanation: Now we see that the values of "source" and "target" in links indicate the indexes of nodes. I haven't figured out what "value" in nodes represents.
Generate a test version of TopicSimilarity.json
End of explanation |
8,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution
Step2: 2. Create yet another function that takes the name of the region as an input and returns SST values for the corresponding region
This function can look something like the one below
Feel free to modify or extend it
You can replace region_name by experiment_name or whatever you prefer
For convenience, make sure the length of the returned list is the same
Step3: 4. Create the wind speed data
Step4: 5. Create a dictionary of data, whose keys are names of regions and values are lists of heat fluxes data
Create a list of names of the regions/experiments
Create an empty dictionary, named hf_dict or whatever sounds better to you
Loop over the names, call the create_sst() function and assign it to a variable, e.g. fake_sst
Still inside the name-loop, write another loop to iterate over SST and wind values, just as you did in the previous exercise, and calculate the heat flux.
Assign the result to the corresponding key of hf_dict
Step5: Print the result to test yourself.
Step6: 6. Save the dictionary to text file, both keys and values
You can copy the code for writing data to a text file from the previous exercise
Modify it so that the output file would include hf_dict's keys as row (column) names
Step7: Bonus | Python Code:
def calc_heat_flux(u_atm, t_sea, rho=1.2, c_p=1004.5, c_h=1.2e-3, u_sea=1, t_atm=17):
q = rho * c_p * c_h * (u_atm - u_sea) * (t_sea - t_atm)
return q
Explanation: Solution: create a module and reuse code from it (1 h)
Extend the exercise from today by applying what you've just learned about packages and code reusability.
Outline
Put the function into a separate .py file
Create yet another function that takes the name of the region as an input and returns SST values for the corresponding region
Use import to access these functions from another file or a notebook
Create the wind speed data
Create a dictionary of data, whose keys are names of regions and values are lists of heat fluxes data
Save the dictionary to text file (bonus: to a json file), both keys and values
Since this is the solution, we are skipping the step of creating a separate script
End of explanation
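A quick sanity check of the bulk formula with the default constants (the function is repeated here so the snippet is self-contained): for u_atm=10 and t_sea=20, q = 1.2 * 1004.5 * 1.2e-3 * (10 - 1) * (20 - 17):

```python
def calc_heat_flux(u_atm, t_sea, rho=1.2, c_p=1004.5, c_h=1.2e-3, u_sea=1, t_atm=17):
    return rho * c_p * c_h * (u_atm - u_sea) * (t_sea - t_atm)

q = calc_heat_flux(10, 20)
print(q)   # 1.2 * 1004.5 * 1.2e-3 * 9 * 3 = 39.05496
```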
def create_sst(region_name):
    """Create fake SST data (degC) for a given region

    Inputs
    ------
    region_name: ...continue the docstring...
    n: integer, optional. Length of the returned data list

    Returns
    -------
    ...continue the docstring...
    """
if region_name == 'NS':
# North Sea
sst = list(range(5, 15, 1))
elif region_name == 'WS':
# White Sea
sst = list(range(0, 10, 1))
elif region_name == 'BS':
# Black Sea
sst = list(range(15, 25, 1))
else:
raise ValueError('Input value of {} is not recognised'.format(region_name))
return sst
Explanation: 2. Create yet another function that takes the name of the region as an input and returns SST values for the corresponding region
This function can look something like the one below
Feel free to modify or extend it
You can replace region_name by experiment_name or whatever you prefer
For convenience, make sure the length of the returned list is the same
End of explanation
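The if/elif chain can also be written as a dictionary lookup, which scales better as regions are added; an alternative sketch of the same function:

```python
def create_sst(region_name):
    """Create fake SST data (degC) for a given region (dict-lookup variant)."""
    lookup = {
        'NS': list(range(5, 15)),    # North Sea
        'WS': list(range(0, 10)),    # White Sea
        'BS': list(range(15, 25)),   # Black Sea
    }
    try:
        return lookup[region_name]
    except KeyError:
        raise ValueError('Input value of {} is not recognised'.format(region_name))

print(create_sst('NS'))
```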
wind_speed = list(range(0,20,2))
Explanation: 4. Create the wind speed data
End of explanation
regions = ['WS', 'BS']
hf_dict = dict()
for reg in regions:
fake_sst = create_sst(reg)
heat_flux = []
for u, t in zip(wind_speed, fake_sst):
q = calc_heat_flux(u, t)
heat_flux.append(q)
hf_dict[reg] = heat_flux
Explanation: 5. Create a dictionary of data, whose keys are names of regions and values are lists of heat fluxes data
Create a list of names of the regions/experiments
Create an empty dictionary, named hf_dict or whatever sounds better to you
Loop over the names, call the create_sst() function and assign it to a variable, e.g. fake_sst
Still inside the name-loop, write another loop to iterate over SST and wind values, just as you did in the previous exercise, and calculate the heat flux.
Assign the result to the corresponding key of hf_dict
End of explanation
hf_dict
Explanation: Print the result to test yourself.
End of explanation
with open('heat_flux_var_sst_bycol.txt', 'w') as f:
column_names = sorted(hf_dict.keys())
f.write(','.join(column_names)+'\n')
for tup in zip(*[hf_dict[i] for i in column_names]):
f.write(','.join([str(i) for i in tup])+'\n')
with open('heat_flux_var_sst_byrow.txt', 'w') as f:
for k, v in hf_dict.items():
line = k + ','
for i in v:
line += str(i) + ','
line = line[:-1]+ '\n'
f.write(line)
!more {f.name}
Explanation: 6. Save the dictionary to text file, both keys and values
You can copy the code for writing data to a text file from the previous exercise
Modify it so that the output file would include hf_dict's keys as row (column) names
End of explanation
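An alternative sketch using the standard csv module instead of manual string concatenation (with stand-in values for hf_dict):

```python
import csv
import io

hf_dict = {"WS": [1.0, 2.0], "BS": [3.0, 4.0]}  # stand-in values

buf = io.StringIO()
writer = csv.writer(buf)
for name, values in sorted(hf_dict.items()):
    writer.writerow([name] + values)   # row name first, then the data
print(buf.getvalue())
```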
import json
Explanation: Bonus
End of explanation |
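A minimal sketch of the bonus: a dictionary of lists serializes directly with json.dump, keys and values together (stand-in values and a hypothetical filename):

```python
import json

hf_dict = {"WS": [0.0, 1.5], "BS": [2.0, 3.5]}  # stand-in values

with open("heat_flux_var_sst.json", "w") as f:
    json.dump(hf_dict, f, indent=2)

with open("heat_flux_var_sst.json") as f:
    loaded = json.load(f)
print(loaded == hf_dict)
```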
8,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-ll', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NERC
Source ID: UKESM1-0-LL
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the component coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
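The `DOC.set_id` / `DOC.set_value` pattern repeated throughout this notebook comes from the ES-DOC authoring API. As a rough, self-contained sketch of what that pattern implies — a stand-in class, not the real pyesdoc implementation — a setter that enforces the valid ENUM choices listed in the cell above might look like this:

```python
# Minimal stand-in for the ES-DOC DOC object (illustrative only; the real
# pyesdoc API differs). It checks an ENUM value against the valid choices
# listed in the notebook cell before recording it under the current
# property id. Cardinality 1.N means multiple values may be appended.
class MockDoc:
    VALID_PROVISION = {"N/A", "M", "Y", "E", "ES", "C"}

    def __init__(self):
        self.properties = {}
        self.current_id = None

    def set_id(self, property_id):
        self.current_id = property_id

    def set_value(self, value):
        # Free-text "Other: [Please specify]" entries are also allowed.
        if value not in self.VALID_PROVISION and not value.startswith("Other:"):
            raise ValueError(f"{value!r} is not a valid choice")
        self.properties.setdefault(self.current_id, []).append(value)


doc = MockDoc()
doc.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
doc.set_value("Y")   # accepted
doc.set_value("E")   # cardinality 1.N allows more than one value
print(doc.properties)
```

The real notebook validates against the controlled vocabulary server-side; this sketch only mirrors the local shape of the calls.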
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
8,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 02
Machine Learning Models
Step1: It is pretty clear that there is a linear trend here. If I wanted to predict what would happen if we tried the input of x=0.6, it would be a good guess to pick something like y=1.6 or so. Training the computer to do this is what we mean by Machine Learning.
To formalize this a little bit, it consists of four steps
Step2: You can see that, with a 20% split, our small fake dataset doesn't have very many points. Really we shouldn't be working with fewer than 100 points for anything we do. Any fewer than that and the statistics just start breaking. Ideally we'd have tens of millions of data points. We'll talk later about how to get that much data, but we'll start small for now. We'll load in the Class02_fakedata2.csv file and split it into 80/20 training/testing datasets.
Step3: Linear Regression
We are now ready to train our linear model on the training part of this data. Remember that, from this point forward, we must "lock" the testing data and not use it to train our models. This takes two steps in Python. The first step is to define the model and set any model parameters (in this case we'll use the defaults). This is a Python object that will subsequently hold all the information about the model including fit parameters and other information about the fit. Again, take a look at the documentation
Step4: We now want to see what this looks like! We start by looking at the fit coefficient and intercept. When we have more than one input variable, there will be a coefficient corresponding to each feature.
Step5: That doesn't really tell us much. It would be better if we could compare the model to the test data. We will use the inputs from the test data and run them through the model. It will predict what the outputs should be. We can then compare them to the actual outputs. We'll plot the predictions as a line (since they will all lie on the same line due to our model being a linear regression).
Step6: This looks pretty good. We can go one step further and define a quantitative measure of the quality of the fit. We take the difference between the prediction and the actual value for each point. We then square all of those differences and average them. Finally we take the square root of all of that. This is known as the RMS error (for Root Mean Squared).
Step7: Using Multiple Inputs
We'll now move to a real-world data set (which means it is messy). We'll load in the diabetes data set from Class 01 and try training it. Our input will be the 'BMI' feature and the output is the 'Target' column.
Step8: I've put all the steps together in one cell and commented on each step.
Step9: Not too surprising that the RMS error isn't very good. This is the real world after all. However, we saw in Class 01 that there may be some dependence on some of the other variables like the LDL. We can try a linear regression with both of them as inputs. I have to change the code a little to do this. Compare this with the previous cell to see what needs to change. | Python Code:
import pandas as pd
fakedata1 = pd.DataFrame(
[[ 0.862, 2.264],
[ 0.694, 1.847],
[ 0.184, 0.705],
[ 0.41 , 1.246]], columns=['input','output'])
fakedata1.plot(x='input',y='output',kind='scatter')
Explanation: Class 02
Machine Learning Models: Linear regression & Validation
We are going to cover two main topics in this class: Linear Regressions and Validation. We need to start with a broader question, though.
What is Machine Learning?
The goal this semester is to use machine learning to teach the computer how to make predictions. So we'll start with my definitions of machine learning -- in particular of supervised machine learning. We are using a programming algorithm that gives the computer the tools it needs to identify patterns in a set of data. Once we have those patterns, we can use them to make predictions - what we would expect should happen if we gather more data that may not necessarily be exactly the same as the data we learned from.
We'll start by looking at a very simple set of fake data that will help us cover the key ideas. Suppose that we just collected four data points. I've manually input them (as opposed to using a CSV file). Execute the following cell to see what the data look like.
End of explanation
from sklearn.model_selection import train_test_split
faketrain1, faketest1 = train_test_split(fakedata1, test_size=0.2, random_state=23)
faketrain1.plot(x='input',y='output',kind='scatter')
faketest1.plot(x='input',y='output',kind='scatter')
Explanation: It is pretty clear that there is a linear trend here. If I wanted to predict what would happen if we tried the input of x=0.6, it would be a good guess to pick something like y=1.6 or so. Training the computer to do this is what we mean by Machine Learning.
To formalize this a little bit, it consists of four steps:
1. We start with relevant historical data. This is our input to the machine learning algorithm.
2. Choose an algorithm. There are a number of possibilities that we will cover over the course of the semester.
3. Train the model. This is where the computer learns the pattern.
4. Test the model. We now have to check to see how well the model works.
We then refine the model and repeat the process until we are happy with the results.
The Testing Problem
There is a bit of a sticky point here. If we use our data to train the computer, what do we use to test the model to see how good it is? If we use the same data to test the model we will, most likely, get fantastic results! After all, we used that data to train the model, so it should (if the model worked at all) do a great job of predicting the results.
However, this doesn't tell us anything about how well the model will work with a new data point. Even if we get a new data point, we won't necessarily know what it is supposed to be, so we won't know how well the model is working. There is a way around all of this that works reasonably well. What we will do is set aside a part of our historical data as "test" data. We won't use that data to train the model. Instead, we will use it to test the model to see how well it works. This gives us a good idea of how the model will work with new data points. As a rule of thumb, we want to reserve about 20% of our data set as testing data.
There is a library that does this for us in Python called train_test_split. The documentation is here: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html. I want you to get used to looking up the documentation yourself to see how the function works. Pay close attention to the inputs and the outputs of the function.
One of the inputs we will use is the random_state option. By using the same number here we should all end up with the same results. If you change this number, you change the random distribution of the data and, thus, the end result.
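To see why the random_state option matters, here is a stdlib-only sketch of what a seeded split does (this mimics the idea of a reproducible shuffle-and-slice, not scikit-learn's internal implementation):

```python
import random

def seeded_split(rows, test_frac, seed):
    # Shuffle a copy of the row indices with a fixed seed, then slice off
    # the first `test_frac` portion as the test set.
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(rows) * test_frac)
    test_idx = idx[:n_test]
    train_idx = idx[n_test:]
    return [rows[i] for i in train_idx], [rows[i] for i in test_idx]

rows = list(range(10))
train_a, test_a = seeded_split(rows, 0.2, seed=23)
train_b, test_b = seeded_split(rows, 0.2, seed=23)
assert (train_a, test_a) == (train_b, test_b)  # same seed -> same split
assert sorted(train_a + test_a) == rows        # nothing lost, nothing duplicated
assert len(test_a) == 2                        # 20% of 10 rows
```

scikit-learn's train_test_split does this bookkeeping for DataFrames (plus options like stratification), which is why everyone using the same random_state gets the same rows.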
End of explanation
fakedata2 = pd.read_csv('Class02_fakedata2.csv')
faketrain2, faketest2 = train_test_split(fakedata2, test_size=0.2, random_state=23)
faketrain2.plot(x='input',y='output',kind='scatter')
faketest2.plot(x='input',y='output',kind='scatter')
Explanation: You can see that, with a 20% split, our small fake dataset doesn't have very many points. Really we shouldn't be working with fewer than 100 points for anything we do. Any fewer than that and the statistics just start breaking. Ideally we'd have tens of millions of data points. We'll talk later about how to get that much data, but we'll start small for now. We'll load in the Class02_fakedata2.csv file and split it into 80/20 training/testing datasets.
End of explanation
faketrain2.head()
from sklearn.linear_model import LinearRegression
# Step 1: Create linear regression object
regr = LinearRegression()
# Step 2: Train the model using the training sets
features = faketrain2[['input']].values
labels = faketrain2['output'].values
regr.fit(features,labels)
Explanation: Linear Regression
We are now ready to train our linear model on the training part of this data. Remember that, from this point forward, we must "lock" the testing data and not use it to train our models. This takes two steps in Python. The first step is to define the model and set any model parameters (in this case we'll use the defaults). This is a Python object that will subsequently hold all the information about the model including fit parameters and other information about the fit. Again, take a look at the documentation: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html.
The second step is to actually fit the data. We need to reformat our data so that we can tell the computer what our inputs are and what our outputs are. We define two new variables called "features" and "labels". Note the use of the double square bracket in selecting data for the features. This will allow us to, in the future, select multiple columns as our input variables. In the meantime, it formats the data in the way that the fit algorithm needs it to be formatted.
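The double-bracket detail is easy to check: selecting with [['input']] keeps a column dimension (one list of features per sample), while ['input'] gives a flat sequence of values. A stdlib sketch of the shape difference the fit API cares about:

```python
rows = [0.1, 0.2, 0.3]

as_1d = rows                 # like df['input'] -> one bare value per row
as_2d = [[v] for v in rows]  # like df[['input']].values -> one feature list per sample

assert as_2d[0] == [0.1]     # each sample is itself a list of features
assert len(as_2d) == len(as_1d) == 3
```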
End of explanation
print('Coefficients: \n', regr.coef_)
print('Intercept: \n', regr.intercept_)
Explanation: We now want to see what this looks like! We start by looking at the fit coefficient and intercept. When we have more than one input variable, there will be a coefficient corresponding to each feature.
End of explanation
testinputs = faketest2[['input']].values
predictions = regr.predict(testinputs)
actuals = faketest2['output'].values
import matplotlib.pyplot as plt
plt.scatter(testinputs, actuals, color='black', label='Actual')
plt.plot(testinputs, predictions, color='blue', linewidth=1, label='Prediction')
# We also add a legend to our plot. Note that we've added the 'label' option above. This will put those labels together in a single legend.
plt.legend(loc='upper left', shadow=False, scatterpoints=1)
plt.xlabel('input')
plt.ylabel('output')
plt.scatter(testinputs, (actuals-predictions), color='green', label='Residuals just because $\lambda$')
plt.xlabel('input')
plt.ylabel('residuals')
plt.legend(loc='upper left', shadow=False, scatterpoints=1)
Explanation: That doesn't really tell us much. It would be better if we could compare the model to the test data. We will use the inputs from the test data and run them through the model. It will predict what the outputs should be. We can then compare them to the actual outputs. We'll plot the predictions as a line (since they will all lie on the same line due to our model being a linear regression).
End of explanation
import numpy as np
print("RMS Error: {0:.3f}".format( np.sqrt(np.mean((predictions - actuals) ** 2))))
Explanation: This looks pretty good. We can go one step further and define a quantitative measure of the quality of the fit. We take the difference between the prediction and the actual value for each point. We then square all of those differences and average them. Finally we take the square root of all of that. This is known as the RMS error (for Root Mean Squared).
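The same recipe written out as a tiny stand-alone function, checked on numbers we can do in our heads (pure Python here; the numpy one-liner in the cell is equivalent):

```python
from math import sqrt

def rms_error(predictions, actuals):
    # difference -> square -> average -> square root
    sq = [(p - a) ** 2 for p, a in zip(predictions, actuals)]
    return sqrt(sum(sq) / len(sq))

# Hand-checkable case: errors are 0, 0 and 3, so RMS = sqrt(9 / 3) = sqrt(3)
assert abs(rms_error([1.0, 2.0, 3.0], [1.0, 2.0, 6.0]) - sqrt(3)) < 1e-12
```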
End of explanation
diabetes = pd.read_csv('../Class01/Class01_diabetes_data.csv')
diabetes.head()
Explanation: Using Multiple Inputs
We'll now move to a real-world data set (which means it is messy). We'll load in the diabetes data set from Class 01 and try training it. Our input will be the 'BMI' feature and the output is the 'Target' column.
End of explanation
# Step 1: Split off the test data
dia_train, dia_test = train_test_split(diabetes, test_size=0.2, random_state=23)
# Step 2: Create linear regression object
dia_model = LinearRegression()
# Step 3: Train the model using the training sets
features = dia_train[['BMI']].values
labels = dia_train['Target'].values
# Step 4: Fit the model
dia_model.fit(features,labels)
# Step 5: Get the predictions
testinputs = dia_test[['BMI']].values
predictions = dia_model.predict(testinputs)
actuals = dia_test['Target'].values
# Step 6: Plot the results
plt.scatter(testinputs, actuals, color='black', label='Actual')
plt.plot(testinputs, predictions, color='blue', linewidth=1, label='Prediction')
plt.xlabel('BMI') # Label the x axis
plt.ylabel('Target') # Label the y axis
plt.legend(loc='upper left', shadow=False, scatterpoints=1)
# Step 7: Get the RMS value
print("RMS Error: {0:.3f}".format( np.sqrt(np.mean((predictions - actuals) ** 2))))
Explanation: I've put all the steps together in one cell and commented on each step.
End of explanation
# Step 2: Create linear regression object
dia_model2 = LinearRegression()
# Possible columns:
# 'Age', 'Sex', 'BMI', 'BP', 'TC', 'LDL', 'HDL', 'TCH', 'LTG', 'GLU'
#
inputcolumns = [ 'BMI', 'HDL']
# Step 3: Train the model using the training sets
features = dia_train[inputcolumns].values
labels = dia_train['Target'].values
# Step 4: Fit the model
dia_model2.fit(features,labels)
# Step 5: Get the predictions
testinputs = dia_test[inputcolumns].values
predictions = dia_model2.predict(testinputs)
actuals = dia_test['Target'].values
# Step 6: Plot the results
#
# Note the change here in how we plot the test inputs. We can only plot one variable, so we choose the first.
# Also, it no longer makes sense to plot the fit points as lines. They have more than one input, so we only visualize them as points.
#
plt.scatter(testinputs[:,0], actuals, color='black', label='Actual')
plt.scatter(testinputs[:,0], predictions, color='blue', label='Prediction')
plt.legend(loc='upper left', shadow=False, scatterpoints=1)
# Step 7: Get the RMS value
print("RMS Error: {0:.3f}".format( np.sqrt(np.mean((predictions - actuals) ** 2))))
Explanation: Not too surprising that the RMS error isn't very good. This is the real world after all. However, we saw in Class 01 that there may be some dependence on some of the other variables like the LDL. We can try a linear regression with both of them as inputs. I have to change the code a little to do this. Compare this with the previous cell to see what needs to change.
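Under the hood, fitting with several input columns just solves for one coefficient per column. A stdlib-only sketch of the normal equations for two features (no intercept, hand-sized data; this illustrates the idea, not scikit-learn's actual solver):

```python
def fit_two_features(X, y):
    # Solve the 2x2 normal equations (X^T X) w = X^T y by Cramer's rule.
    s11 = sum(x[0] * x[0] for x in X)
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    t1 = sum(x[0] * yi for x, yi in zip(X, y))
    t2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    w1 = (s22 * t1 - s12 * t2) / det
    w2 = (s11 * t2 - s12 * t1) / det
    return w1, w2

# y = 1*x1 + 2*x2 exactly, so the solver should recover (1, 2)
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [1.0, 2.0, 3.0]
w1, w2 = fit_two_features(X, y)
assert abs(w1 - 1.0) < 1e-12 and abs(w2 - 2.0) < 1e-12
```

LinearRegression additionally fits an intercept and handles any number of columns, but the coefficients it reports come from the same kind of least-squares solve.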
End of explanation |
8,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$\LaTeX$ definition block
$\newcommand{\sign}{\operatorname{sign}}$
Distant Supervision
This notebook has a few cute experiments to explore distant supervision.
Setup
In the distant supervision setting, we receive data $((x_i^a, x_i^b), y_i)$. Assuming the data is linearly separable with hyperplane $w^*$, we are also given that $y_i = \sign[\max(w^{*\top} x_i^{a}, w^{*\top} x_i^{b})]$. Thus, if $y_i = -1$, we know that both $x^a_i$ and $x^b_i$ are in the negative class, but if $y_i = 1$, we only know that at least one of $x^a_i$ or $x^b_i$ is positive.
Data generation
Consider a random (unit) weight vector $w^*$.
We generate pairs $\{x_a, x_b\}$, order them so that $x_a$ carries the larger margin $w^{*\top} x_a$, and set $y_i$ to $\sign[\max(w^{*\top} x_a, w^{*\top} x_b)]$.
Step1: Method
Step2: Exact SVM solution
Assuming we know what the positive points are.
Step3: Naive
Assumes that all positive data is the same.
Step4: As expected, the naive approach does really poorly.
Proposed method
Incorporate responsibilities for the points.
$\begin{align}
\min{} \quad & \frac{1}{2} \|\beta\|^2 + C \xi \\
\textrm{subject to} \quad & -1(\beta^\top x_{-}) \ge 1 - \xi \\
& +1(\beta^\top x_{a}) \ge y_a - \xi \\
& +1(\beta^\top x_{b}) \ge y_b - \xi \\
& y_a > 0,\ y_b > 0,\ y_a + y_b \ge 1
\end{align}$
Step5: Proposed 2 | Python Code:
import numpy as np
import matplotlib.pyplot as plt

# Constants
D = 2
N = 100
K = 2
w = np.random.randn(D)
w = w / np.linalg.norm(w) # normalize w to unit length
theta = np.arctan2(w[0], w[1])
X = np.random.randn(N,D,K)
y = np.zeros(N)
for i in range(N):
m = w.dot(X[i])
X[i] = X[i][:,np.argsort(-m)]
y[i] = np.sign(max(m))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), c='black')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
# Split data for convenience
X_ = np.concatenate((X[y<0,:,0],X[y<0,:,1]))
Xa = X[y>0,:,0]
Xb = X[y>0,:,1]
Xp = np.concatenate((Xa, Xb))
Na, N_, Np = y[y>0].shape[0], 2*y[y<0].shape[0], 2*y[y>0].shape[0]
Nb = Na
Na, N_, Np
Explanation: $\LaTeX$ definition block
$\newcommand{\sign}{\operatorname{sign}}$
Distant Supervision
This notebook has a few cute experiments to explore distant supervision.
Setup
In the distant supervision setting, we receive data $((x_i^a, x_i^b), y_i)$. Assuming the data is linearly separable with hyperplane $w^*$, we are also given that $y_i = \sign[\max(w^{*\top} x_i^{a}, w^{*\top} x_i^{b})]$. Thus, if $y_i = -1$, we know that both $x^a_i$ and $x^b_i$ are in the negative class, but if $y_i = 1$, we only know that at least one of $x^a_i$ or $x^b_i$ is positive.
Data generation
Consider a random (unit) weight vector $w^*$.
We generate pairs $\{x_a, x_b\}$, order them so that $x_a$ carries the larger margin $w^{*\top} x_a$, and set $y_i$ to $\sign[\max(w^{*\top} x_a, w^{*\top} x_b)]$.
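The labeling rule can be sanity-checked without numpy or random draws; a stdlib sketch with a fixed $w^*$ (the vector here is chosen only for the check):

```python
def label_pair(w, xa, xb):
    # y = sign(max(w . xa, w . xb)); also order the pair so xa has the larger margin
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    ma, mb = dot(w, xa), dot(w, xb)
    if mb > ma:
        xa, xb, ma, mb = xb, xa, mb, ma
    y = 1 if max(ma, mb) > 0 else -1
    return y, xa, xb

w = (1.0, 0.0)  # take w* = e1 for the check
# one point on each side: the max margin is positive -> y = +1, and xa is the positive one
y, xa, xb = label_pair(w, (-2.0, 0.5), (3.0, 0.5))
assert y == 1 and xa == (3.0, 0.5)
# both points on the negative side -> y = -1
y, _, _ = label_pair(w, (-1.0, 0.0), (-2.0, 1.0))
assert y == -1
```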
End of explanation
from cvxpy import *
Explanation: Method
End of explanation
# Beta is the coefficients, and e is the slack.
beta = Variable(D)
ea, e_ = Variable(Na), Variable(N_)
loss = 0.5 * norm(beta, 2) ** 2 + 1*sum_entries(e_)/N_ + 1* sum_entries(ea)/Na
constr = [mul_elemwise(-1,X_*beta) > 1 - e_, mul_elemwise(1,Xa*beta) > 1 - ea]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
Explanation: Exact SVM solution
Assuming we know what the positive points are.
End of explanation
# Beta is the coefficients, and e is the slack.
beta = Variable(D)
ep, e_ = Variable(Np), Variable(N_)
loss = 0.5 * norm(beta, 2) ** 2 + 1*sum_entries(e_)/N_ + 1* sum_entries(ep)/Np
constr = [mul_elemwise(-1,X_*beta) > 1 - e_, mul_elemwise(1,Xp*beta) > 1 - ep]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
Explanation: Naive
Assumes that all positive data is the same.
End of explanation
# Beta is the coefficients, and e is the slack.
Da, = y[y>0].shape
beta = Variable(D)
e = Variable()
loss = 0.5 * norm(beta, 2) ** 2 + 1 * e
constr = [mul_elemwise(-1,X_*beta) > 1 - e,
Xa*beta + Xb*beta > 1 - e,
]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
Explanation: As expected, the naive approach does really poorly.
Proposed method
Incorporate responsibilities for the points.
$\begin{align}
\min{} \quad & \frac{1}{2} \|\beta\|^2 + C \xi \\
\textrm{subject to} \quad & -1(\beta^\top x_{-}) \ge 1 - \xi \\
& +1(\beta^\top x_{a}) \ge y_a - \xi \\
& +1(\beta^\top x_{b}) \ge y_b - \xi \\
& y_a > 0,\ y_b > 0,\ y_a + y_b \ge 1
\end{align}$
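The slack $\xi$ in these formulations is the familiar hinge relaxation of the margin constraints. For intuition, here is the per-point version (the cells in this notebook share a single slack variable across all constraints, which is a cruder relaxation):

```python
def hinge(y, margin):
    # smallest slack xi >= 0 satisfying y * margin >= 1 - xi
    return max(0.0, 1.0 - y * margin)

assert hinge(+1, 2.0) == 0.0    # correctly classified with margin >= 1: no slack needed
assert hinge(+1, 0.25) == 0.75  # inside the margin: some slack
assert hinge(-1, 0.5) == 1.5    # misclassified: slack exceeds 1
```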
End of explanation
# Beta is the coefficients, and e is the slack.
Da, = y[y>0].shape
beta = Variable(D)
e = Variable()
C = 1.
C_ = 1
loss = 0.5 * norm(beta, 2) ** 2 + C * e + C_ * sum_entries(pos(Xa*beta) + pos(Xb*beta) - 1) / Na
constr = [mul_elemwise(-1,X_*beta) > 1 - e,
]
prob = Problem(Minimize(loss), constr)
print("loss", prob.solve())
w_ = np.array(beta.value).flatten()
w_ = w_ / np.linalg.norm(w_) # Until I add a 0.
print("error", np.linalg.norm(w-w_))
# Visualize data
plt.plot(np.arange(-3,3), -w[0]/w[1] * np.arange(-3,3), linestyle='dashed', c='black')
plt.plot(np.arange(-3,3), -w_[0]/w_[1] * np.arange(-3,3), c='green')
plt.scatter(X[y<0,0,0], X[y<0,1,0], c='r', marker='o')
plt.scatter(X[y<0,0,1], X[y<0,1,1], c='r', marker='o')
plt.scatter(X[y>0,0,0], X[y>0,1,0], c='b', marker='D')
plt.scatter(X[y>0,0,1], X[y>0,1,1], c='b', marker='D')
Explanation: Proposed 2
End of explanation |
8,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Kafka Producer for Twitter </center>
Acquire and decompress Kafka
$ wget http
Step1: To delete a topic
Step2: Setup Confluent_Kafka
First, install your own Anaconda to a local directory in your home on Palmetto.
Next, perform the following steps
$ wget https | Python Code:
!cd ~/software/kafka_2.11-1.0.0; \
./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
!cd ~/software/kafka_2.11-1.0.0; \
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
!cd ~/software/kafka_2.11-1.0.0; \
./bin/kafka-topics.sh --list --zookeeper localhost:2181
Explanation: <center> Kafka Producer for Twitter </center>
Acquire and decompress Kafka
$ wget http://download.nextag.com/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz
$ tar xzf kafka_2.11-1.0.0.tgz
Run the following from separate terminals:
cd kafka_2.11-1.0.0
bin/zookeeper-server-start.sh config/zookeeper.properties
cd kafka_2.11-1.0.0
bin/kafka-server-start.sh config/server.properties
End of explanation
!cd ~/software/kafka_2.11-1.0.0; ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
Explanation: To delete a topic:
Add the following lines to config/server.properties: delete.topic.enable=true
End of explanation
from confluent_kafka import Producer
import sys
import logging
from http.client import IncompleteRead  # assumed source of the IncompleteRead caught below; the original cell never imports it
import json
from datetime import datetime
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
broker = 'localhost:9092'
topic = 'test'
# Producer configuration
# See https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
conf = {'bootstrap.servers': broker}
# Create Producer instance
p = Producer(**conf)
# Optional per-message delivery callback (triggered by poll() or flush())
# when a message has been successfully delivered or permanently
# failed delivery (after retries).
def delivery_callback(err, msg):
if err:
sys.stderr.write('%% Message failed delivery: %s\n' % err)
else:
sys.stderr.write('%% Message key %s delivered to %s [%d]\n' % (msg.key(), msg.topic(), msg.partition()))
class StdOutListener(StreamListener):
def on_data(self, data):
try:
jsonData = json.loads(data)
p.produce(topic, json.dumps(data), key=str(jsonData['id']), callback=delivery_callback)
except BufferError as e:
sys.stderr.write('%% Local producer queue is full (%d messages awaiting delivery): try again\n' % len(p))
p.poll(0)
return True
def on_error(self, status):
print (status)
# read cert (not on github)
keyFile = open('/home/lngo/.cert/Twitter','r')
CONSUMER_KEY = keyFile.readline().rstrip()
CONSUMER_SECRET = keyFile.readline().rstrip()
ACCESS_TOKEN_KEY = keyFile.readline().rstrip()
ACCESS_TOKEN_SECRET = keyFile.readline().rstrip()
auth = OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN_KEY, ACCESS_TOKEN_SECRET)
while True:
try:
stream = Stream(auth, StdOutListener())
stream.filter(track=['Clemson'])
except IncompleteRead:
print("Let's try it again")
continue
except KeyboardInterrupt:
stream.disconnect()
break
# Wait until all messages have been delivered
sys.stderr.write('%% Waiting for %d deliveries\n' % len(p))
p.flush()
Explanation: Setup Confluent_Kafka
First, install your own Anaconda to a local directory in your home on Palmetto.
Next, perform the following steps
$ wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh
$ sh Anaconda3-5.0.1-Linux-x86_64.sh
$ export PATH=/home/lngo/software/anaconda3/bin:$PATH
$ conda create --name kafka python=3.6
$ source activate kafka
$ conda install jupyter
$ python -m ipykernel install --prefix=/home/lngo/.local/ --name 'Python-Kafka-3.6'
$ conda install -c conda-forge python-confluent-kafka
$ conda install -c conda-forge tweepy
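The keying logic in StdOutListener can be checked without a broker or a Twitter connection; a small sketch of how a raw payload string becomes the partition key passed to produce():

```python
import json

def tweet_key(raw_payload):
    # tweepy hands on_data() a raw JSON string; the producer keys each
    # message by the tweet's numeric id, rendered as a string
    return str(json.loads(raw_payload)["id"])

payload = json.dumps({"id": 987654321, "text": "Go Clemson!"})
assert tweet_key(payload) == "987654321"
```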
End of explanation |
8,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First we need to 'binarize' the picture we want to embed in the Markov field. This function does it:
Step1: We load and prepare the picture
Step2: We now define the spin-spin correlation function for the lattice | Python Code:
def prep_datas(pic, size):
X=resize(pic,(size,size)) # resize the image to size x size; resize also rescales intensities to [0, 1]
X=np.reshape(X,size**2) # flatten from size x size to a vector of length size**2
X=np.array(X) # make sure it is a numpy array
for j in range(len(X)): # binarize the image into +/-1 spins
if X[j] < 0.5:
X[j] = -1
else:
X[j] = 1;
X_pic = np.reshape(X,(size,size))
return X, X_pic
Explanation: First we need to 'binarize' the picture we want to embed in the Markov field. This function does it:
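The thresholding step can be checked on a plain list, no image required (stdlib-only sketch of the same rule):

```python
def binarize(values, threshold=0.5):
    # map grayscale intensities in [0, 1] to Ising spins: dark -> -1, bright -> +1
    return [-1 if v < threshold else 1 for v in values]

assert binarize([0.0, 0.49, 0.5, 0.51, 1.0]) == [-1, -1, 1, 1, 1]
```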
End of explanation
size_lattice = 100 # Size of the lattice in which we want to embed the picture
einstein_pic = io.imread("einstein.png")
training_chain, ein_pic = prep_datas(einstein_pic, size_lattice)
plt.subplot(1,2,1)
plt.imshow(einstein_pic,cmap='gray')
plt.subplot(1,2,2)
plt.imshow(ein_pic,cmap='gray')
Explanation: We load and prepare the picture
End of explanation
from collections import deque

def corr(mat, d): # ensemble average of lattice (as a 1D vector) for d-th nearest neighbor spins
item = deque(mat)
item.rotate(d)
mean = np.correlate(mat, np.asanyarray(item))/len(mat)
return mean
Explanation: We now define the spin-spin correlation function for the lattice
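The same rotate-and-average idea in pure Python, checked on a perfectly alternating chain (correlation $-1$ for odd $d$, $+1$ for even $d$):

```python
from collections import deque

def spin_corr(spins, d):
    # average of s_i * s_{i-d} with periodic boundary conditions
    rotated = deque(spins)
    rotated.rotate(d)
    return sum(a * b for a, b in zip(spins, rotated)) / len(spins)

chain = [1, -1, 1, -1, 1, -1]
assert spin_corr(chain, 1) == -1.0
assert spin_corr(chain, 2) == 1.0
```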
End of explanation |
8,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: MLMD Model Card Toolkit Demo
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Did you restart the runtime?
If you are using Google Colab, the runtime must be restarted after installing new packages.
Import packages
We import necessary packages, including standard TFX component classes and check the library versions.
Step3: Ensure TensorFlow 2 is running and executing eagerly.
Step4: Set up pipeline paths
Step5: Download example data
We download the example dataset for use in our TFX pipeline.
Step6: Take a quick look at the CSV file.
Step7: Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
Step8: Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at TFX Colab workshop.
ExampleGen
Create the ExampleGen component to split data into training and evaluation sets, convert the data into tf.Example format, and copy data into the _tfx_root directory for other components to access.
Step9: Let’s take a look at the first three training examples
Step10: StatisticsGen
StatisticsGen takes as input the dataset we just ingested using ExampleGen and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
Step11: After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing with the different plots!
Step12: SchemaGen
SchemaGen will take as input the statistics that we generated with StatisticsGen, looking at the training split by default.
Step15: To learn more about schemas, see the SchemaGen documentation.
Transform
Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial).
Step23: Trainer
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial)
Step24: Evaluator
The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library.
Evaluator will take as input the data from ExampleGen, the trained model from Trainer, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below
Step25: Warning
Step26: Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.
Step27: Model Card Generator
The Model Card component is a TFX Component that generates model cards-- short documentation that provides key information about a machine learning model-- from the StatisticsGen outputs, the Evaluator outputs, and a prepared json annotation. Optionally, a pushed model or a template can be provided as well.
The model card assets are saved to a ModelCard artifact that can be fetched from the outputs['model_card'] property.
Prepare Annotation Json for Model Card
It is also important to document model information that might be important to downstream users, such as its limitations, intended use cases, trade offs, and ethical considerations. Thus, we will prepare this information in json format to be used in the model card generating step.
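As a sketch, such an annotation might look like the following; the field names here are illustrative placeholders, not a schema required by the Model Card Toolkit (check the toolkit's documentation for the actual model card fields):

```python
import json

annotation = {
    "considerations": {
        "limitations": [{"description": "Trained on a small demo dataset."}],
        "use_cases": [{"description": "Tutorial / demonstration only."}],
        "ethical_considerations": [
            {"name": "Fairness", "mitigation_strategy": "Inspect sliced metrics."}
        ],
    }
}

# Round-trip through json to confirm the annotation is serializable
restored = json.loads(json.dumps(annotation))
assert restored == annotation
```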
Step28: Generate the Model Card.
Step29: Display Model Card
Lastly, we isolate the uri from the model card generator artifact and use it to display the model card. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install --upgrade pip==21.3
!pip install model-card-toolkit
Explanation: MLMD Model Card Toolkit Demo
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/responsible_ai/model_card_toolkit/examples/MLMD_Model_Card_Toolkit_Demo"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-card-toolkit/blob/master/model_card_toolkit/documentation/examples/MLMD_Model_Card_Toolkit_Demo.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-card-toolkit/blob/master/model_card_toolkit/documentation/examples/MLMD_Model_Card_Toolkit_Demo.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-card-toolkit/model_card_toolkit/documentation/examples/MLMD_Model_Card_Toolkit_Demo.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Background
This notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about.
Setup
We first need to a) install and import the necessary packages, and b) download the data.
Upgrade to Pip 21 (or later) and Install Model Card Toolkit
End of explanation
import os
import pprint
import tempfile
import urllib
import absl
import random
import tensorflow.compat.v2 as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import Pusher
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.components.base import executor_spec
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
import ml_metadata as mlmd
Explanation: Did you restart the runtime?
If you are using Google Colab, the runtime must be restarted after installing new packages.
Import packages
We import necessary packages, including standard TFX component classes and check the library versions.
End of explanation
tf.enable_v2_behavior()
tf.executing_eagerly()
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.version.__version__))
print('MLMD version: {}'.format(mlmd.__version__))
Explanation: Ensure TensorFlow 2 is running and executing eagerly.
End of explanation
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
Explanation: Set up pipeline paths
End of explanation
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
Explanation: Download example data
We download the example dataset for use in our TFX pipeline.
End of explanation
!head {_data_filepath}
Explanation: Take a quick look at the CSV file.
End of explanation
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext(pipeline_name="Census Income Classification Pipeline")
Explanation: Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
End of explanation
example_gen = CsvExampleGen(input_base=_data_root)
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
Explanation: Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at TFX Colab workshop.
ExampleGen
Create the ExampleGen component to split data into training and evaluation sets, convert the data into tf.Example format, and copy data into the _tfx_root directory for other components to access.
End of explanation
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
Explanation: Let’s take a look at the first three training examples:
End of explanation
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
Explanation: StatisticsGen
StatisticsGen takes as input the dataset we just ingested using ExampleGen and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
End of explanation
context.show(statistics_gen.outputs['statistics'])
Explanation: After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing with the different plots!
End of explanation
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
Explanation: SchemaGen
SchemaGen will take as input the statistics that we generated with StatisticsGen, looking at the training split by default.
End of explanation
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs['transform_graph']
Explanation: To learn more about schemas, see the SchemaGen documentation.
Transform
Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial).
End of explanation
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
if _transformed_name(_LABEL_KEY) in transformed_features:
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
# The following values are hard-coded for simplicity in this example;
# however, preferably they should be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(lr=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
# Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
trainer.outputs
Explanation: Trainer
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial):
End of explanation
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options=tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
Explanation: Evaluator
The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library.
Evaluator will take as input the data from ExampleGen, the trained model from Trainer, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
End of explanation
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
# TODO(b/226656838) Fix the inconsistent references warnings.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
Explanation: Warning: the Evaluator Component may take 5-10 minutes to run due to errors regarding "inconsistent references".
End of explanation
context.show(evaluator.outputs['evaluation'])
Explanation: Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.
End of explanation
import json
import model_card_toolkit as mctlib
model_card_json = {
    'model_details': {
        'name': 'Census Income Classifier',
        'overview':
            'This is a wide and deep Keras model which aims to classify whether or not '
            'an individual has an income of over $50,000 based on various demographic '
            'features. The model is trained on the UCI Census Income Dataset. This is '
            'not a production model, and this dataset has traditionally only been used '
            'for research purposes. In this Model Card, you can review quantitative '
            'components of the model’s performance and data, as well as information '
            'about the model’s intended uses, limitations, and ethical considerations.',
        'owners': [{'name': 'Model Cards Team', 'contact': 'model-cards@google.com'}]
    },
    'considerations': {
        'use_cases': [{'description':
            'This dataset that this model was trained on was originally created to '
            'support the machine learning community in conducting empirical analysis '
            'of ML algorithms. The Adult Data Set can be used in fairness-related '
            'studies that compare inequalities across sex and race, based on '
            'people’s annual incomes.'}],
        'limitations': [{'description':
            'This is a class-imbalanced dataset across a variety of sensitive classes.'
            ' The ratio of male-to-female examples is about 2:1 and there are far more'
            ' examples with the “white” attribute than every other race combined. '
            'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
            'earners is just over 3:1. Due to the imbalance across income levels, we '
            'can see that our true negative rate seems quite high, while our true '
            'positive rate seems quite low. This is true to an even greater degree '
            'when we only look at the “female” sub-group, because there are even '
            'fewer female examples in the $50,000+ earner group, causing our model to '
            'overfit these examples. To avoid this, we can try various remediation '
            'strategies in future iterations (e.g. undersampling, hyperparameter '
            'tuning, etc), but we may not be able to fix all of the fairness issues.'}],
        'ethical_considerations': [{
            'name':
                'We risk expressing the viewpoint that the attributes in this dataset '
                'are the only ones that are predictive of someone’s income, even '
                'though we know this is not the case.',
            'mitigation_strategy':
                'As mentioned, some interventions may need to be '
                'performed to address the class imbalances in the dataset.'}]
    }
}
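A Python subtlety worth flagging when assembling annotation dicts like this: a dict literal silently keeps only the last value for any repeated key, so duplicate 'model_details' or 'considerations' entries would overwrite one another rather than merge. All fields must therefore live under a single instance of each top-level key. A minimal demonstration:

```python
# Duplicate keys in a dict literal do NOT merge -- the last one wins silently.
d = {'model_details': {'name': 'Classifier'},
     'model_details': {'overview': 'A demo model.'}}
assert d == {'model_details': {'overview': 'A demo model.'}}

# Instead, collect all fields under one key so nothing is dropped:
merged = {'model_details': {'name': 'Classifier',
                            'overview': 'A demo model.'}}
```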
Explanation: Model Card Generator
The Model Card component is a TFX Component that generates model cards-- short documentation that provides key information about a machine learning model-- from the StatisticGen outputs, the Evaluator outputs, and a prepared json annotation. Optionally, a pushed model or a template can be provided as well.
The model card assets are saved to a ModelCard artifact that can be fetched from the outputs['model_card'] property.
Prepare Annotation Json for Model Card
It is also important to document model information that might be important to downstream users, such as its limitations, intended use cases, trade offs, and ethical considerations. Thus, we will prepare this information in json format to be used in the model card generating step.
End of explanation
from model_card_toolkit.tfx.component import ModelCardGenerator
mct_gen = ModelCardGenerator(statistics=statistics_gen.outputs['statistics'],
evaluation=evaluator.outputs['evaluation'],
json=json.dumps(model_card_json))
context.run(mct_gen)
mct_gen.outputs['model_card']
Explanation: Generate the Model Card.
End of explanation
from IPython import display
mct_artifact = mct_gen.outputs['model_card'].get()[0]
mct_uri = mct_artifact.uri
print(os.listdir(mct_uri))
mct_path = os.path.join(mct_uri, 'model_cards', 'model_card.html')
with open(mct_path) as f:
mct_content = f.read()
display.display(display.HTML(mct_content))
Explanation: Display Model Card
Lastly, we isolate the uri from the model card generator artifact and use it to display the model card.
End of explanation |
8,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coronagraph Basics
This set of exercises guides the user through a step-by-step process of simulating NIRCam coronagraphic observations of the HR 8799 exoplanetary system. The goal is to familiarize the user with basic pynrc classes and functions relevant to coronagraphy.
Step1: We will start by first importing pynrc along with the obs_hci (High Contrast Imaging) class, which lives in the pynrc.obs_nircam module.
Step2: Source Definitions
The obs_hci class first requires two arguments describing the spectra of the science and reference sources (sp_sci and sp_ref, respectively). Each argument should be a Pysynphot spectrum already normalized to some known flux. pynrc includes built-in functions for generating spectra. The user may use either of these or feel free to supply their own, as long as it meets the requirements.
The pynrc.stellar_spectrum function provides the simplest way to define a new spectrum
Step3: Initialize Observation
Now we will initialize the high-contrast imaging class pynrc.obs_hci using the spectral objects and various other settings. The obs_hci object is a subclass of the more generalized NIRCam class. It implements new settings and functions specific to high-contrast imaging observations for coronagraphy and direct imaging.
For this tutorial, we want to observe these targets using the MASK430R coronagraph in the F444W filter. All circular coronagraphic masks such as the 430R (R=round) should be paired with the CIRCLYOT pupil element, whereas wedge/bar masks are paired with WEDGELYOT pupil. Observations in the LW channel are most commonly observed in WINDOW mode with a 320x320 detector subarray size. Full detector sizes are also available.
The PSF simulation size (fov_pix keyword) should also be of similar size as the subarray window (recommend avoiding anything above fov_pix=1024 due to computation time and memory usage). Use odd numbers to center the PSF in the middle of the pixel. If fov_pix is specified as even, then PSFs get centered at the corners. This distinction really only matters for unocculted observations (i.e., where the PSF flux is concentrated in a tight central core).
The obs_hci class also allows one to specify the anticipated WFE drift between the science and reference sources in terms of nm RMS via the wfe_ref_drift parameter (default of 2nm). For the moment, let's initialize it with a value of 0nm. This prevents an initially long process by which pynrc calculates changes made to the PSF over a wide range of drift values. This process only happens once, then stores the resulting coefficient residuals to disk for future quick retrieval.
Extended disk models can also be specified upon initialization using the disk_params keyword (which should be a dictionary).
The large_grid keyword controls the quality of PSF variations near and under the coronagraphic masks. If False, then a sparse grid is used (faster to generate during initial calculations; less disk space and memory). If True, then a higher density grid is calculated (~2.5 hrs for initial creation; ~3.5x larger sizes), which produces improved PSFs at the SGD positions. For purposes of this demo, we set it to False.
Step4: Some information for the reference observation is stored in the attribute obs.Detector_ref, which is a separate NIRCam DetectorOps class that we use to keep track of the detector and multiaccum configurations, which may differ between science and reference observations. Settings for the reference observation can be updated using the obs.gen_ref() function.
Step5: Exposure Settings
Optimization of exposure settings is demonstrated in another tutorial, so we will not repeat that process here. We can assume the optimization process was performed elsewhere to choose the DEEP8 pattern with 16 groups and 5 total integrations. These settings apply to each roll position of the science observation sequence as well as for the reference observation.
Step6: Add Planets
There are four known giant planets orbiting HR 8799. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2022. To convert between $(x,y)$ and $(r,\theta)$, use the nrc_utils.xy_to_rtheta and nrc_utils.rtheta_to_xy functions.
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness.
There are a few exoplanet models available to pynrc (SB12, BEX, COND), but let's choose those from Spiegel & Burrows (2012).
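The $(x,y) \leftrightarrow (r,\theta)$ conversion used when positioning planets is just a polar-coordinate transform, sketched below in plain NumPy. Note this sketch uses the standard math convention ($\theta$ measured CCW from the $+x$ axis); pynrc's nrc_utils helpers define their own angle zero-point, so treat this as an illustration of the geometry rather than a drop-in replacement:

```python
import numpy as np

def xy_to_rtheta(x, y):
    """Cartesian offsets -> separation and angle (deg, CCW from +x axis)."""
    r = np.hypot(x, y)
    theta = np.degrees(np.arctan2(y, x))
    return r, theta

def rtheta_to_xy(r, theta):
    """Separation and angle (deg) -> Cartesian offsets."""
    th = np.radians(theta)
    return r * np.cos(th), r * np.sin(th)

# Round-trip check for a companion at 1.7" separation, 65 deg
x, y = rtheta_to_xy(1.7, 65.0)
r, th = xy_to_rtheta(x, y)
```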
Step7: As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet e despite it having a similar magnitude to d. This is primarily due to its location relative to the occulting mask reducing throughput, along with confusion from the bright diffraction spots of the other nearby sources.
Note
Step8: The majority of the speckle noise here originates from small pointing offsets between the roll positions and reference observation. These PSF centering mismatches dominate the subtraction residuals compared to the WFE drift variations. Small-grid dithers acquired during the reference observations should produce improved subtraction performance through PCA/KLIP algorithms. To get a better idea of the post-processing performance, we re-run these observations assuming perfect target acquisition.
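To make the roll-subtraction idea concrete, here is a toy NumPy/SciPy sketch of classical two-roll processing: subtract a reference PSF from each roll, derotate one residual onto the other's orientation, and average. This is purely conceptual; pynrc's simulated observation methods additionally model PSF variations, detector noise, and the pointing errors discussed above:

```python
import numpy as np
from scipy.ndimage import rotate

def basic_roll_sub(im_roll1, im_roll2, im_ref, roll_angle):
    """Subtract a reference PSF from each roll, derotate roll 2 onto
    roll 1's orientation, and average the two residual images."""
    sub1 = im_roll1 - im_ref
    sub2 = im_roll2 - im_ref
    # Derotate the second roll's residual to match the first roll's frame
    sub2_derot = rotate(sub2, -roll_angle, reshape=False, order=1)
    return 0.5 * (sub1 + sub2_derot)

rng = np.random.default_rng(0)
psf = rng.normal(size=(64, 64))           # stand-in for a simulated image
final = basic_roll_sub(psf, psf, 0.99 * psf, roll_angle=10.0)
```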
Step9: 2. Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of potential predicted sensitivity curves. The calc_contrast method returns a tuple of three arrays
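While calc_contrast does the real work, the essence of a radial contrast curve is simple to sketch: measure the azimuthal noise in annuli of a subtracted image and normalize by the stellar peak. The toy NumPy version below is only conceptual (pynrc additionally folds in coronagraphic mask throughput, WFE drift, and photometric calibration):

```python
import numpy as np

def toy_contrast_curve(image, pixscale, star_peak, nsig=5, dr_pix=2):
    """nsig-sigma contrast vs. separation from azimuthal noise in annuli."""
    ny, nx = image.shape
    yy, xx = np.indices((ny, nx))
    r_pix = np.hypot(xx - nx / 2, yy - ny / 2)
    seps, contrast = [], []
    for r_in in np.arange(0, r_pix.max(), dr_pix):
        mask = (r_pix >= r_in) & (r_pix < r_in + dr_pix)
        if mask.sum() < 50:          # skip poorly sampled annuli
            continue
        seps.append((r_in + dr_pix / 2) * pixscale)          # arcsec
        contrast.append(nsig * image[mask].std() / star_peak)
    return np.array(seps), np.array(contrast)

rng = np.random.default_rng(1)
img = rng.normal(scale=1e-3, size=(128, 128))   # flat residual noise field
sep, con = toy_contrast_curve(img, pixscale=0.063, star_peak=1.0)
```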
Step10: 3. Saturation Levels
Create an image showing level of saturation for each pixel. For NIRCam, saturation is important to track for purposes of accurate slope fits and persistence correction. In this case, we will plot the saturation levels both at NGROUP=2 and NGROUP=obs.det_info['ngroup']. Saturation is defined at 80% well level, but can be modified using the well_frac keyword.
We want to perform this analysis for both science and reference targets.
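The bookkeeping behind such a saturation map is easy to illustrate: multiply a slope image (e-/sec) by the time accumulated after a given number of groups, divide by the full-well depth, and threshold at the chosen well fraction. The NumPy sketch below uses made-up placeholder values for the group time, well depth, and slopes (not actual NIRCam numbers); pynrc handles the real MULTIACCUM timing internally:

```python
import numpy as np

def toy_saturation_map(slope, t_group, ngroup, well_depth=8e4, well_frac=0.8):
    """Fraction of full well reached after `ngroup` groups, plus a boolean
    mask of pixels exceeding the `well_frac` saturation threshold."""
    frac = slope * t_group * ngroup / well_depth
    return frac, frac >= well_frac

# Hypothetical slope image in e-/sec (placeholder values)
slope = np.array([[10.0, 5000.0], [200.0, 9000.0]])
frac2, sat2 = toy_saturation_map(slope, t_group=10.7, ngroup=2)
fracN, satN = toy_saturation_map(slope, t_group=10.7, ngroup=16)
```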
Step11: In this case, we don't expect HR 8799 to saturate. However, the reference source should have some saturated pixels before the end of an integration. | Python Code:
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting at lower left
%matplotlib inline
Explanation: Coronagraph Basics
This set of exercises guides the user through a step-by-step process of simulating NIRCam coronagraphic observations of the HR 8799 exoplanetary system. The goal is to familiarize the user with basic pynrc classes and functions relevant to coronagraphy.
End of explanation
import pynrc
from pynrc import nrc_utils # Variety of useful functions and classes
from pynrc.obs_nircam import obs_hci # High-contrast imaging observation class
# Progress bar
from tqdm.auto import tqdm, trange
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
Explanation: We will start by first importing pynrc along with the obs_hci (High Contrast Imaging) class, which lives in the pynrc.obs_nircam module.
End of explanation
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sources = [('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)]
# References source, sptype, Teff, [Fe/H], log_g, mag, band
ref_sources = [('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)]
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sources[0]
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[0]
# For the purposes of simplicity, we will use pynrc.stellar_spectrum()
sp_sci = pynrc.stellar_spectrum(spt_sci, mag_sci, 'vegamag', bp_sci,
Teff=Teff_sci, metallicity=feh_sci, log_g=logg_sci)
sp_sci.name = name_sci
# And the reference source
sp_ref = pynrc.stellar_spectrum(spt_ref, mag_ref, 'vegamag', bp_ref,
Teff=Teff_ref, metallicity=feh_ref, log_g=logg_ref)
sp_ref.name = name_ref
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
ind = (w>=xr[0]) & (w<=xr[1])
sp.convert('Jy')
f = sp.flux / np.interp(4.0, w, sp.flux)
ax.semilogy(w[ind], f[ind], lw=1.5, label=sp.name)
ax.set_ylabel(r'Flux (Jy) normalized at 4 $\mu m$')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('Spectral Sources')
# Overplot Filter Bandpass
bp = pynrc.read_filter('F444W', 'CIRCLYOT', 'MASK430R')
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
Explanation: Source Definitions
The obs_hci class first requires two arguments describing the spectra of the science and reference sources (sp_sci and sp_ref, respectively). Each argument should be a Pysynphot spectrum already normalized to some known flux. pynrc includes built-in functions for generating spectra. The user may use either of these or should feel free to supply their own as long as it meets the requirements.
The pynrc.stellar_spectrum function provides the simplest way to define a new spectrum:
python
bp_k = pynrc.bp_2mass('k') # Define bandpass to normalize spectrum
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k)
You can also be more specific about the stellar properties with Teff, metallicity, and log_g keywords.
python
sp_sci = pynrc.stellar_spectrum('F0V', 5.24, 'vegamag', bp_k,
Teff=7430, metallicity=-0.47, log_g=4.35)
Alternatively, the pynrc.source_spectrum class ingests spectral information of a given target and generates a model fit to the known photometric SED. Two model routines can be fit. The first is a very simple scale factor that is applied to the input spectrum, while the second takes the input spectrum and adds an IR excess modeled as a modified blackbody function. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
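The simpler of the two model routines is just a single multiplicative scale factor. As a rough illustration of what such a fit does, here is a toy least-squares sketch with made-up fluxes (not pynrc's actual fitting code):

```python
import numpy as np

# Toy SED fit: one multiplicative scale factor applied to model fluxes,
# chosen to best match "observed" photometry in a least-squares sense.
# All numbers here are made up for illustration.
f_model = np.array([1.0, 0.8, 0.5, 0.3])     # model fluxes in 4 bands
f_obs   = np.array([2.1, 1.55, 1.02, 0.58])  # observed photometry

# Closed-form least-squares solution for f_obs ~ scale * f_model
scale = np.sum(f_obs * f_model) / np.sum(f_model**2)
f_fit = scale * f_model
resid = f_obs - f_fit
```

The modified-blackbody IR-excess variant adds free parameters, but the principle (minimize residuals against the photometric SED) is the same.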
End of explanation
# The initial call make take some time, as it will need to generate coefficients
# to calculate PSF variations across wavelength, WFE drift, and mask location
filt, mask, pupil = ('F444W', 'MASK430R', 'CIRCLYOT')
wind_mode, subsize = ('WINDOW', 320)
fov_pix, oversample = (321, 2)
obs = pynrc.obs_hci(sp_sci, dist_sci, sp_ref=sp_ref, use_ap_info=False,
filter=filt, image_mask=mask, pupil_mask=pupil,
wind_mode=wind_mode, xpix=subsize, ypix=subsize,
fov_pix=fov_pix, oversample=oversample, large_grid=True)
Explanation: Initialize Observation
Now we will initialize the high-contrast imaging class pynrc.obs_hci using the spectral objects and various other settings. The obs_hci object is a subclass of the more generalized NIRCam class. It implements new settings and functions specific to high-contrast imaging observations for coronagraphy and direct imaging.
For this tutorial, we want to observe these targets using the MASK430R coronagraph in the F444W filter. All circular coronagraphic masks such as the 430R (R=round) should be paired with the CIRCLYOT pupil element, whereas wedge/bar masks are paired with the WEDGELYOT pupil. Observations in the LW channel most commonly use WINDOW mode with a 320x320 detector subarray size. Full detector sizes are also available.
The PSF simulation size (fov_pix keyword) should also be of similar size to the subarray window (we recommend avoiding anything above fov_pix=1024 due to computation time and memory usage). Use odd numbers to center the PSF in the middle of the pixel; if fov_pix is specified as even, then PSFs get centered at the corners. This distinction really only matters for unocculted observations (i.e., where the PSF flux is concentrated in a tight central core).
The obs_hci class also allows one to specify WFE drift values in terms of nm RMS. The wfe_ref_drift parameter defines the anticipated drift between the science and reference sources. For the moment, let's initialize with a value of 0 nm; this avoids an initially long process by which pynrc calculates changes made to the PSF over a wide range of drift values. That process only happens once, then stores the resulting coefficient residuals to disk for future quick retrieval.
Extended disk models can also be specified upon initialization using the disk_params keyword (which should be a dictionary).
The large_grid keyword controls the quality of PSF variations near and under the coronagraphic masks. If False, then a sparse grid is used (faster to generate during initial calculations; less disk space and memory). If True, then a higher-density grid is calculated (~2.5 hrs for initial creation; ~3.5x larger sizes), which produces improved PSFs at the SGD positions. For purposes of this demo, we set it to True.
End of explanation
# Set default WFE drift values between Roll1, Roll2, and Ref
# WFE drift amount between rolls
obs.wfe_roll_drift = 2
# Drift amount between Roll 1 and Reference.
obs.wfe_ref_drift = 5
Explanation: Some information for the reference observation is stored in the attribute obs.Detector_ref, which is a separate NIRCam DetectorOps class that we use to keep track of the detector and multiaccum configurations, which may differ between science and reference observations. Settings for the reference detector can be updated using the obs.gen_ref_det() function.
End of explanation
# Update both the science and reference observations
obs.update_detectors(read_mode='DEEP8', ngroup=16, nint=5, verbose=True)
obs.gen_ref_det(read_mode='DEEP8', ngroup=16, nint=5)
Explanation: Exposure Settings
Optimization of exposure settings is demonstrated in another tutorial, so we will not repeat that process here. We assume the optimization was performed elsewhere, choosing the DEEP8 pattern with 16 groups and 5 total integrations. These settings apply to each roll position of the science observation sequence as well as to the reference observation.
# Projected locations for date 11/01/2022
# These are preliminary positions, but within constrained orbital parameters
loc_list = [(-1.625, 0.564), (0.319, 0.886), (0.588, -0.384), (0.249, 0.294)]
# Estimated magnitudes within F444W filter
pmags = [16.0, 15.0, 14.6, 14.7]
# Add planet information to observation class.
# These are stored in obs.planets.
# Can be cleared using obs.delete_planets().
obs.delete_planets()
for i, loc in enumerate(loc_list):
obs.add_planet(model='SB12', mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
renorm_args=(pmags[i], 'vegamag', obs.bandpass))
# Generate and plot a noiseless slope image to verify orientation
PA1 = 85 # Telescope V3 PA
PA_offset = -1*PA1 # Image field is rotated opposite direction
im_planets = obs.gen_planets_image(PA_offset=PA_offset, return_oversample=False)
from matplotlib.patches import Circle
from pynrc.nrc_utils import plotAxes
from pynrc.obs_nircam import get_cen_offsets
fig, ax = plt.subplots(figsize=(6,6))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
xylim = 4
vmin = 0
vmax = 0.5*im_planets.max()
ax.imshow(im_planets, extent=extent, vmin=vmin, vmax=vmax)
# Overlay the coronagraphic mask
detid = obs.Detector.detid
im_mask = obs.mask_images['DETSAMP']
# Do some masked transparency overlays
masked = np.ma.masked_where(im_mask>0.98*im_mask.max(), im_mask)
ax.imshow(1-masked, extent=extent, alpha=0.3, cmap='Greys_r', vmin=-0.5)
for loc in loc_list:
xc, yc = get_cen_offsets(obs, idl_offset=loc, PA_offset=PA_offset)
circle = Circle((xc,yc), radius=xylim/20., alpha=0.7, lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
xlim = ylim = np.array([-1,1])*xylim
xlim = xlim + obs.bar_offset
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.set_title('{} planets -- {} {}'.format(sp_sci.name, obs.filter, obs.image_mask))
color = 'grey'
ax.tick_params(axis='both', color=color, which='both')
for k in ax.spines.keys():
ax.spines[k].set_color(color)
plotAxes(ax, width=1, headwidth=5, alength=0.15, angle=PA_offset,
position=(0.1,0.1), label1='E', label2='N')
fig.tight_layout()
Explanation: Add Planets
There are four known giant planets orbiting HR 8799. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2022. To convert between $(x,y)$ and $(r,\theta)$, use the nrc_utils.xy_to_rtheta and nrc_utils.rtheta_to_xy functions.
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness.
There are a few exoplanet models available to pynrc (SB12, BEX, COND); let's choose those from Spiegel & Burrows (2012).
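The (x, y) to (r, θ) conversion mentioned above can be sketched in plain numpy. Note that nrc_utils.xy_to_rtheta may use a different angle convention (e.g., measured from North rather than the +x axis), so treat these stand-in functions as illustrative only:

```python
import numpy as np

def xy_to_rtheta(x, y):
    """Convert sky offsets (arcsec) to separation and angle (degrees).

    Illustrative stand-in for nrc_utils.xy_to_rtheta; pynrc's own angle
    convention may differ, so check its docstring before relying on it.
    """
    r = np.sqrt(x**2 + y**2)
    theta = np.degrees(np.arctan2(y, x))
    return r, theta

def rtheta_to_xy(r, theta):
    """Inverse transform: (r, theta in degrees) back to (x, y)."""
    th = np.radians(theta)
    return r * np.cos(th), r * np.sin(th)

# HR 8799 b offset used above
r_b, th_b = xy_to_rtheta(-1.625, 0.564)
x_b, y_b = rtheta_to_xy(r_b, th_b)
```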
End of explanation
# Create pointing offset with a random seed for reproducibility
obs.gen_pointing_offsets(rand_seed=1234, verbose=True)
# Cycle through a few WFE drift values
wfe_list = [0,5,10]
# PA values for each roll
PA1, PA2 = (85,95)
# A dictionary of HDULists
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift)
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=10)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title(r'$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('{} -- {} {}'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
Explanation: As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet e despite its magnitude being similar to d's. This is primarily due to its location relative to the occulting mask, which reduces throughput, along with confusion from the bright diffraction spots of the other nearby sources.
Note: the circled regions of the expected planet positions don't perfectly align with the PSFs, because the LW wavelengths have a slight dispersion through the Lyot mask material.
Estimated Performance
Now we are ready to determine contrast performance and sensitivities as a function of distance from the star.
1. Roll-Subtracted Images
First, we will create a quick simulated roll-subtracted image using the gen_roll_image method. For the selected observation date of 11/1/2022, APT shows a PA range of 84$^{\circ}$ to 96$^{\circ}$. So, we'll assume Roll 1 has PA1=85, while Roll 2 has PA2=95. In this case, "roll subtraction" simply creates two science images observed at different parallactic angles, then subtracts the same reference observation from each. The two results are then de-rotated to a common PA=0 and averaged.
There is also the option to create ADI images, where the other roll position becomes the reference star by setting no_ref=True.
Images generated with the gen_roll_image method will also include random pointing offsets described in the pointing_info dictionary. These can be generated by calling obs.gen_pointing_offsets().
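Conceptually, the classical roll subtraction described above reduces to three array operations: reference subtraction, derotation, and averaging. Below is a bare-bones numpy/scipy sketch, not pynrc's actual implementation (which also handles sub-pixel shifts, flux scaling, and detector sampling):

```python
import numpy as np
from scipy.ndimage import rotate

def basic_roll_sub(im_roll1, im_roll2, im_ref, pa1, pa2):
    """Reference-subtract each roll, derotate to PA=0, and average.

    The rotation sign convention here is an assumption; match it to
    your telescope PA definition before using on real data.
    """
    sub1 = im_roll1 - im_ref
    sub2 = im_roll2 - im_ref
    # Derotate each difference image to a common orientation
    derot1 = rotate(sub1, -pa1, reshape=False, order=1)
    derot2 = rotate(sub2, -pa2, reshape=False, order=1)
    return 0.5 * (derot1 + derot2)
```

The roll diversity means a companion lands on different pixels in the two rolls, so residual quasi-static speckles (which rotate with the telescope) average down while the companion signal is preserved after derotation.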
End of explanation
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Assume perfect centering by setting xyoff_***=(0,0)
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=10)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title(r'$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('Ideal TA ({} -- {} {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
Explanation: The majority of the speckle noise here originates from small pointing offsets between the roll positions and reference observation. These PSF centering mismatches dominate the subtraction residuals compared to the WFE drift variations. Small-grid dithers acquired during the reference observations should produce improved subtraction performance through PCA/KLIP algorithms. To get a better idea of the post-processing performance, we re-run these observations assuming perfect target acquisition.
End of explanation
nsig = 5
roll_angle = np.abs(PA2 - PA1)
curves = []
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Generate contrast curves
result = obs.calc_contrast(roll_angle=roll_angle, nsig=nsig,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
curves.append(result)
from pynrc.nb_funcs import plot_contrasts, plot_planet_patches, plot_contrasts_mjup, update_yscale
import matplotlib.patches as mpatches
# fig, ax = plt.subplots(figsize=(8,5))
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
xr=[0,5]
yr=[24,8]
# 1a. Plot contrast curves and set x/y limits
ax = axes[0]
ax, ax2, ax3 = plot_contrasts(curves, nsig, wfe_list, obs=obs,
xr=xr, yr=yr, ax=ax, return_axes=True)
# 1b. Plot the locations of exoplanet companions
label = 'Companions ({})'.format(filt)
planet_dist = [np.sqrt(x**2+y**2) for x,y in loc_list]
ax.plot(planet_dist, pmags, marker='o', ls='None', label=label, color='k', zorder=10)
# 1c. Plot Spiegel & Burrows (2012) exoplanet fluxes (Hot Start)
plot_planet_patches(ax, obs, age=age, entropy=13, av_vals=None)
ax.legend(ncol=2)
# 2. Plot in terms of MJup using COND models
ax = axes[1]
ax1, ax2, ax3 = plot_contrasts_mjup(curves, nsig, wfe_list, obs=obs, age=age,
ax=ax, twin_ax=True, xr=xr, yr=None, return_axes=True)
yr = [0.03,100]
for xval in planet_dist:
ax.plot([xval,xval],yr, lw=1, ls='--', color='k', alpha=0.7)
update_yscale(ax1, 'log', ylim=yr)
yr_temp = np.array(ax1.get_ylim()) * 318.0
update_yscale(ax2, 'log', ylim=yr_temp)
# ax.set_yscale('log')
# ax.set_ylim([0.08,100])
ax.legend(loc='upper right', title='BEX ({:.0f} Myr)'.format(age))
fig.suptitle('{} ({} + {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=16)
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
Explanation: 2. Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of the predicted sensitivity curves. The calc_contrast method returns a tuple of three arrays:
1. The radius in arcsec.
2. The n-sigma contrast.
3. The n-sigma magnitude sensitivity limit (vega mag).
To better understand the relative contribution of WFE drift to contrast loss, we ignore telescope pointing offsets by explicitly passing xyoff_* = (0,0) keywords for the Roll 1, Roll 2, and Ref observations.
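For downstream analysis, each element of curves can be unpacked and interpolated at a companion's separation. The sketch below uses fabricated numbers standing in for an actual calc_contrast result:

```python
import numpy as np

# Fabricated stand-in for one calc_contrast result:
# (separation [arcsec], 5-sigma contrast, 5-sigma sensitivity [vegamag])
rr = np.linspace(0.1, 5.0, 50)
contrast = 10.0**(-4 - 0.6 * rr)             # made-up monotonic curve
sens_mag = -2.5 * np.log10(contrast) + 5.24  # limit relative to a 5.24-mag star

# Achievable contrast at the separation of planet d (offsets used above)
r_planet = np.hypot(0.588, 0.384)
c_at_planet = np.interp(r_planet, rr, contrast)
m_at_planet = np.interp(r_planet, rr, sens_mag)
```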
End of explanation
# Saturation limits
ng_max = obs.det_info['ngroup']
sp_flat = pynrc.stellar_spectrum('flat')
print('NGROUP=2')
_ = obs.sat_limits(sp=sp_flat,ngroup=2,verbose=True)
print('')
print(f'NGROUP={ng_max}')
_ = obs.sat_limits(sp=sp_flat,ngroup=ng_max,verbose=True)
mag_sci = obs.star_flux('vegamag')
mag_ref = obs.star_flux('vegamag', sp=obs.sp_ref)
print('')
print(f'{obs.sp_sci.name} flux at {obs.filter}: {mag_sci:0.2f} mags')
print(f'{obs.sp_ref.name} flux at {obs.filter}: {mag_ref:0.2f} mags')
Explanation: 3. Saturation Levels
Create an image showing level of saturation for each pixel. For NIRCam, saturation is important to track for purposes of accurate slope fits and persistence correction. In this case, we will plot the saturation levels both at NGROUP=2 and NGROUP=obs.det_info['ngroup']. Saturation is defined at 80% well level, but can be modified using the well_frac keyword.
We want to perform this analysis for both science and reference targets.
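The 80% threshold amounts to comparing the accumulated charge against a fraction of the detector full well. A toy check with placeholder numbers (not actual NIRCam well depths or count rates):

```python
# Placeholder values for illustration only -- not actual NIRCam numbers.
well_depth = 100_000   # assumed full-well capacity [e-]
well_frac = 0.8        # saturation threshold discussed above

def is_saturated(count_rate, t_int):
    """True if accumulated charge exceeds the fractional well level."""
    return count_rate * t_int > well_frac * well_depth

# A 5000 e-/s pixel crosses 80% full well after 80000/5000 = 16 s
print(is_saturated(5000, 10))  # → False
print(is_saturated(5000, 20))  # → True
```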
End of explanation
# Well level of each pixel for science source
sci_levels1 = obs.saturation_levels(ngroup=2, exclude_planets=True)
sci_levels2 = obs.saturation_levels(ngroup=ng_max, exclude_planets=True)
# Which pixels are saturated? Assume sat level at 90% full well.
sci_mask1 = sci_levels1 > 0.9
sci_mask2 = sci_levels2 > 0.9
# Well level of each pixel for reference source
ref_levels1 = obs.saturation_levels(ngroup=2, do_ref=True)
ref_levels2 = obs.saturation_levels(ngroup=ng_max, do_ref=True)
# Which pixels are saturated? Assume sat level at 90% full well.
ref_mask1 = ref_levels1 > 0.9
ref_mask2 = ref_levels2 > 0.9
# How many saturated pixels?
nsat1_sci = len(sci_levels1[sci_mask1])
nsat2_sci = len(sci_levels2[sci_mask2])
print(obs.sp_sci.name)
print('{} saturated pixels at NGROUP=2'.format(nsat1_sci))
print('{} saturated pixels at NGROUP={}'.format(nsat2_sci,ng_max))
# How many saturated pixels?
nsat1_ref = len(ref_levels1[ref_mask1])
nsat2_ref = len(ref_levels2[ref_mask2])
print('')
print(obs.sp_ref.name)
print('{} saturated pixels at NGROUP=2'.format(nsat1_ref))
print('{} saturated pixels at NGROUP={}'.format(nsat2_ref,ng_max))
# Saturation Mask for science target
nsat1, nsat2 = (nsat1_sci, nsat2_sci)
sat_mask1, sat_mask2 = (sci_mask1, sci_mask2)
sp = obs.sp_sci
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title('{} Saturation (NGROUP=2)'.format(sp.name))
axes[1].set_title('{} Saturation (NGROUP={})'.format(sp.name,ng_max))
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
# Saturation Mask for reference
nsat1, nsat2 = (nsat1_ref, nsat2_ref)
sat_mask1, sat_mask2 = (ref_mask1, ref_mask2)
sp = obs.sp_ref
# Only display saturation masks if there are saturated pixels
if nsat2 > 0:
fig, axes = plt.subplots(1,2, figsize=(10,5))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
axes[0].imshow(sat_mask1, extent=extent)
axes[1].imshow(sat_mask2, extent=extent)
axes[0].set_title(f'{sp.name} Saturation (NGROUP=2)')
axes[1].set_title(f'{sp.name} Saturation (NGROUP={ng_max})')
for ax in axes:
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.tick_params(axis='both', color='white', which='both')
for k in ax.spines.keys():
ax.spines[k].set_color('white')
fig.tight_layout()
else:
print('No saturation detected.')
Explanation: In this case, we don't expect HR 8799 to saturate. However, the reference source should have some saturated pixels before the end of an integration.
End of explanation |
8,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Structures
This chapter describes some things you’ve learned about already in more detail, and adds some new things as well.
More on Lists
list.append(x)
Add an item to the end of the list; equivalent to a[len(a)
Step1: You might have noticed that methods like insert, remove or sort that only modify the list have no return value printed – they return the default None.
This is a design principle for all mutable data structures in Python.
Using Lists as Stacks
The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved (“last-in, first-out”).
To add an item to the top of the stack, use append(). To retrieve an item from the top of the stack, use pop() without an explicit index. For example
Step2: Using Lists as Queues
It is also possible to use a list as a queue, where the first element added is the first element retrieved (“first-in, first-out”); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one).
To implement a queue, use collections.deque which was designed to have fast appends and pops from both ends. For example
Step3: List Comprehension
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
For example, assume we want to create a list of squares, like
Step4: We can obtain the same result with
Step5: This is also equivalent to squares = map(lambda x
Step6: and it’s equivalent to
Step7: Note how the order of the for and if statements is the same in both these snippets.
If the expression is a tuple (e.g. the (x, y) in the previous example), it must be parenthesized.
Examples
Step8: List comprehensions can contain complex expressions and nested functions
Step9: Nested List Comprehensions
The initial expression in a list comprehension can be any arbitrary expression, including another list comprehension.
Consider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4
Step10: The following list comprehension will transpose rows and columns
Step11: As we saw in the previous section, the nested listcomp is evaluated in the context of the for that follows it, so this example is equivalent to
Step12: which, in turn, is the same as
Step13: zip()
In the real world, you should prefer built-in functions to complex flow statements.
The zip() function would do a great job for this use case
Step14: See Unpacking Argument Lists for details on the asterisk in this line.
The del statement
There is a way to remove an item from a list given its index instead of its value
Step15: del can also be used to delete entire variables
Step16: Referencing the name a hereafter is an error (at least until another value is assigned to it). We’ll find other uses for del later.
Tuples and Sequences
We saw that lists and strings have many common properties, such as indexing and slicing operations.
They are two examples of sequence data types (see Sequence Types — str, unicode, list, tuple, bytearray, buffer, xrange).
Since Python is an evolving language, other sequence data types may be added.
There is also another standard sequence data type
Step17: Parentheses
As you see, on output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly.
They may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression).
It is not possible to assign to the individual items of a tuple, however it is possible to create tuples which contain mutable objects, such as lists.
Lists vs Tuples
Though tuples may seem similar to lists, they are often used in different situations and for different purposes.
Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking (see later in this section) or indexing (or even by attribute in the case of namedtuples).
Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list.
Tuple Creation
A special problem is the construction of tuples containing 0 or 1 items
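Both awkward cases are handled by syntax quirks: empty parentheses for zero items and a trailing comma for one item. A quick sketch:

```python
empty = ()
singleton = 'hello',    # note the trailing comma; parentheses are optional
print(len(empty))      # → 0
print(len(singleton))  # → 1
print(singleton)       # → ('hello',)
```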
Step18: The statement t = 12345, 54321, 'hello!' is an example of tuple packing
Step19: This is called, appropriately enough, sequence unpacking and works for any sequence on the right-hand side.
Sequence unpacking requires the list of variables on the left to have the same number of elements as the length of the sequence.
Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.
Sets
Python also includes a data type for sets.
A set is an unordered collection with no duplicate elements.
Basic uses include membership testing and eliminating duplicate entries.
Set objects also support mathematical operations like
- union
- intersection
- difference
- symmetric difference.
Curly braces or the set() function can be used to create sets.
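The four operations listed above map onto the |, &, - and ^ operators. A small sketch using throwaway strings:

```python
a = set('abracadabra')   # duplicates collapse: {'a', 'b', 'r', 'c', 'd'}
b = set('alacazam')      # {'a', 'l', 'c', 'z', 'm'}

print(a | b)   # union: letters in a or b
print(a & b)   # intersection: letters in both a and b
print(a - b)   # difference: letters in a but not in b
print(a ^ b)   # symmetric difference: in a or b but not both
```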
Note
Step20: Similarly to list comprehensions, set comprehensions are also supported
Step21: Dictionaries
Another useful data type built into Python is the dictionary (see Mapping Types — dict).
Dictionaries are sometimes found in other languages as “associative memories” or “associative arrays”.
Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys.
Tuples can be used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key.
You can’t use lists as keys, since lists can be modified in place using index assignments, slice assignments, or methods like append() and extend().
It is best to think of a dictionary as an unordered set of key
Step22: The dict() constructor builds dictionaries directly from sequences of key-value pairs
Step23: In addition, dict comprehensions can be used to create dictionaries from arbitrary key and value expressions
Step24: When the keys are simple strings, it is sometimes easier to specify pairs using keyword arguments
Step25: Looping Techniques
When looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function.
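For instance, enumerate() yields (index, value) pairs directly; a quick sketch:

```python
seasons = ['Spring', 'Summer', 'Fall', 'Winter']
indexed = list(enumerate(seasons))
# indexed == [(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]

# An optional start offset shifts the counter:
one_based = list(enumerate(seasons, start=1))
```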
Step26: zip()
To loop over two or more sequences at the same time, the entries can be paired with the zip() function.
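A sketch of paired iteration with zip():

```python
questions = ['name', 'quest', 'favorite color']
answers = ['lancelot', 'the holy grail', 'blue']
pairs = list(zip(questions, answers))
# pairs[0] == ('name', 'lancelot')
for q, a in pairs:
    line = 'What is your {0}?  It is {1}.'.format(q, a)
```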
Step27: reversed()
To loop over a sequence in reverse, first specify the sequence in a forward direction and then call the reversed() function.
Step28: sorted()
To loop over a sequence in sorted order, use the sorted() function which returns a new sorted list while leaving the source unaltered.
Step29: items()
When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method.
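A sketch of iterating over key/value pairs together:

```python
knights = {'gallahad': 'the pure', 'robin': 'the brave'}
pairs = sorted(knights.items())   # sort for a deterministic order
# pairs == [('gallahad', 'the pure'), ('robin', 'the brave')]
for k, v in pairs:
    summary = '{0} is {1}'.format(k, v)
```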
Step30: It is sometimes tempting to change a list while you are looping over it; however, it is often simpler and safer to create a new list instead. | Python Code:
from __future__ import print_function
a = [66.25, 333, 333, 1, 1234.5]
print(a.count(333), a.count(66.25), a.count('x'))
a.insert(2, -1)
a.append(333)
a
a.index(333)
a.remove(333)
a
a.reverse()
a
a.sort()
a
a.pop()
a
Explanation: Data Structures
This chapter describes some things you’ve learned about already in more detail, and adds some new things as well.
More on Lists
list.append(x)
Add an item to the end of the list; equivalent to a[len(a):] = [x].
list.extend(L)
Extend the list by appending all the items in the given list; equivalent to a[len(a):] = L.
list.insert(i, x)
Insert an item at a given position. The first argument is the index of the element before which to insert, so a.insert(0, x) inserts at the front of the list, and a.insert(len(a), x) is equivalent to a.append(x).
list.remove(x)
Remove the first item from the list whose value is x. It is an error if there is no such item.
list.pop([i])
Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list. (The square brackets around the i in the method signature denote that the parameter is optional, not that you should type square brackets at that position. You will see this notation frequently in the Python Library Reference.)
list.index(x)
Return the index in the list of the first item whose value is x. It is an error if there is no such item.
list.count(x)
Return the number of times x appears in the list.
list.sort(cmp=None, key=None, reverse=False)
Sort the items of the list in place (the arguments can be used for sort customization, see sorted() for their explanation).
list.reverse()
Reverse the elements of the list, in place.
End of explanation
stack = [3, 4, 5]
stack.append(6)
stack.append(7)
stack
stack.pop()
stack
stack.pop()
stack.pop()
stack
Explanation: You might have noticed that methods like insert, remove or sort that only modify the list have no return value printed – they return the default None.
This is a design principle for all mutable data structures in Python.
Using Lists as Stacks
The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved (“last-in, first-out”).
To add an item to the top of the stack, use append(). To retrieve an item from the top of the stack, use pop() without an explicit index. For example:
End of explanation
from collections import deque
queue = deque(["Eric", "John", "Michael"])
queue.append("Terry") # Terry arrives
queue.append("Graham") # Graham arrives
queue.popleft() # The first to arrive now leaves
queue.popleft() # The second to arrive now leaves
queue # Remaining queue in order of arrival
Explanation: Using Lists as Queues
It is also possible to use a list as a queue, where the first element added is the first element retrieved (“first-in, first-out”); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one).
To implement a queue, use collections.deque which was designed to have fast appends and pops from both ends. For example:
End of explanation
squares = []
for x in range(10):
squares.append(x**2)
squares
Explanation: List Comprehension
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
For example, assume we want to create a list of squares, like:
End of explanation
squares = [x**2 for x in range(10)]
squares
Explanation: We can obtain the same result with:
End of explanation
[(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
Explanation: This is also equivalent to squares = map(lambda x: x**2, range(10)), but it’s more concise and readable.
A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses.
The result will be a new list resulting from evaluating the expression in the context of the for and if clauses which follow it.
For example, this listcomp combines the elements of two lists if they are not equal:
End of explanation
combs = []
for x in [1,2,3]:
for y in [3,1,4]:
if x != y:
combs.append((x, y))
combs
Explanation: and it’s equivalent to:
End of explanation
vec = [-4, -2, 0, 2, 4]
# create a new list with the values doubled
[x*2 for x in vec]
# filter the list to exclude negative numbers
[x for x in vec if x >= 0]
# apply a function to all the elements
[abs(x) for x in vec]
# call a method on each element
freshfruit = [' banana', ' loganberry ', 'passion fruit ']
[weapon.strip() for weapon in freshfruit]
# create a list of 2-tuples like (number, square)
[(x, x**2) for x in range(6)]
# the tuple must be parenthesized, otherwise an error is raised
# [x, x**2 for x in range(6)]
# flatten a list using a listcomp with two 'for'
vec = [[1,2,3], [4,5,6], [7,8,9]]
[num for elem in vec for num in elem]
Explanation: Note how the order of the for and if statements is the same in both these snippets.
If the expression is a tuple (e.g. the (x, y) in the previous example), it must be parenthesized.
Examples
End of explanation
from math import pi
[str(round(pi, i)) for i in range(1, 6)]
Explanation: List comprehensions can contain complex expressions and nested functions:
End of explanation
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
]
matrix
Explanation: Nested List Comprehensions
The initial expression in a list comprehension can be any arbitrary expression, including another list comprehension.
Consider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4:
End of explanation
[[row[i] for row in matrix] for i in range(4)]
Explanation: The following list comprehension will transpose rows and columns:
End of explanation
transposed = []
for i in range(4):
transposed.append([row[i] for row in matrix])
transposed
Explanation: As we saw in the previous section, the nested listcomp is evaluated in the context of the for that follows it, so this example is equivalent to:
End of explanation
transposed = []
for i in range(4):
# the following 3 lines implement the nested listcomp
transposed_row = []
for row in matrix:
transposed_row.append(row[i])
transposed.append(transposed_row)
transposed
Explanation: which, in turn, is the same as:
End of explanation
zip(*matrix)
Explanation: zip()
In the real world, you should prefer built-in functions to complex flow statements.
The zip() function would do a great job for this use case:
End of explanation
a = [-1, 1, 66.25, 333, 333, 1234.5]
del a[0]
a
del a[2:4]
a
del a[:]
a
Explanation: See Unpacking Argument Lists for details on the asterisk in this line.
The del statement
There is a way to remove an item from a list given its index instead of its value: the del statement.
This differs from the pop() method which returns a value.
The del statement can also be used to remove slices from a list or clear the entire list (which we did earlier by assignment of an empty list to the slice).
For example:
End of explanation
del a
Explanation: del can also be used to delete entire variables:
End of explanation
t = 12345, 54321, 'hello!'
t[0]
t
# Tuples may be nested:
u = t, (1, 2, 3, 4, 5)
u
# Tuples are immutable:
# t[0] = 88888
# but they can contain mutable objects:
v = ([1, 2, 3], [3, 2, 1])
v
Explanation: Referencing the name a hereafter is an error (at least until another value is assigned to it). We’ll find other uses for del later.
Tuples and Sequences
We saw that lists and strings have many common properties, such as indexing and slicing operations.
They are two examples of sequence data types (see Sequence Types — str, unicode, list, tuple, bytearray, buffer, xrange).
Since Python is an evolving language, other sequence data types may be added.
There is also another standard sequence data type: the tuple.
A tuple consists of a number of values separated by commas, for instance:
End of explanation
empty = ()
singleton = 'hello', # <-- note trailing comma
len(empty)
len(singleton)
singleton
Explanation: Parentheses
As you see, on output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly.
They may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression).
It is not possible to assign to the individual items of a tuple, however it is possible to create tuples which contain mutable objects, such as lists.
Lists vs Tuples
Though tuples may seem similar to lists, they are often used in different situations and for different purposes.
Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking (see later in this section) or indexing (or even by attribute in the case of namedtuples).
Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list.
Tuple Creation
A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these.
Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses).
Ugly, but effective. For example:
End of explanation
x, y, z = t
print(x)
print(y)
print(z)
Explanation: The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also possible:
End of explanation
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
fruit = set(basket) # create a set without duplicates
fruit
'orange' in fruit # fast membership testing
'crabgrass' in fruit
# Demonstrate set operations on unique letters from two words
a = set('abracadabra')
b = set('alacazam')
a # unique letters in a
a - b # letters in a but not in b
a | b # letters in either a or b
a & b # letters in both a and b
a ^ b # letters in a or b but not both
Explanation: This is called, appropriately enough, sequence unpacking and works for any sequence on the right-hand side.
Sequence unpacking requires the list of variables on the left to have the same number of elements as the length of the sequence.
Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.
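One everyday consequence of that combination is the idiomatic swap: the right-hand side is packed into a tuple first, then unpacked into the left-hand side, so no temporary variable is needed.

```python
# Swap two names using tuple packing and sequence unpacking together.
a, b = 1, 2   # pack 1, 2 into a tuple, unpack into a and b
a, b = b, a   # the right side is packed before the assignment happens
print(a, b)   # 2 1
```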
Sets
Python also includes a data type for sets.
A set is an unordered collection with no duplicate elements.
Basic uses include membership testing and eliminating duplicate entries.
Set objects also support mathematical operations like
- union
- intersection
- difference
- symmetric difference.
Curly braces or the set() function can be used to create sets.
Note: to create an empty set you have to use set(), not {}; the latter creates an empty dictionary, a data structure that we discuss in the next section.
Here is a brief demonstration:
End of explanation
a = {x for x in 'abracadabra' if x not in 'abc'}
a
Explanation: Similarly to list comprehensions, set comprehensions are also supported:
End of explanation
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127
tel
tel['jack']
del tel['sape']
tel['irv'] = 4127
tel
tel.keys()
'guido' in tel
Explanation: Dictionaries
Another useful data type built into Python is the dictionary (see Mapping Types — dict).
Dictionaries are sometimes found in other languages as “associative memories” or “associative arrays”.
Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys.
Tuples can be used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key.
You can’t use lists as keys, since lists can be modified in place using index assignments, slice assignments, or methods like append() and extend().
It is best to think of a dictionary as an unordered set of key: value pairs, with the requirement that the keys are unique (within one dictionary). A pair of braces creates an empty dictionary: {}. Placing a comma-separated list of key:value pairs within the braces adds initial key:value pairs to the dictionary; this is also the way dictionaries are written on output.
The main operations on a dictionary are storing a value with some key and extracting the value given the key. It is also possible to delete a key:value pair with del. If you store using a key that is already in use, the old value associated with that key is forgotten. It is an error to extract a value using a non-existent key.
The keys() method of a dictionary object returns a list of all the keys used in the dictionary, in arbitrary order (if you want it sorted, just apply the sorted() function to it). To check whether a single key is in the dictionary, use the in keyword.
Here is a small example using a dictionary:
End of explanation
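The point about non-existent keys can be seen directly, and dict.get offers a way to look a key up without risking the error:

```python
tel = {'jack': 4098, 'sape': 4139}
try:
    tel['irv']                # extracting a missing key raises KeyError
except KeyError:
    print('no such key')
print(tel.get('irv'))         # get() returns None instead of raising
print(tel.get('irv', 4127))   # or a default value supplied by the caller
```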
dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
Explanation: The dict() constructor builds dictionaries directly from sequences of key-value pairs:
End of explanation
{x: x**2 for x in (2, 4, 6)}
Explanation: In addition, dict comprehensions can be used to create dictionaries from arbitrary key and value expressions:
End of explanation
dict(sape=4139, guido=4127, jack=4098)
Explanation: When the keys are simple strings, it is sometimes easier to specify pairs using keyword arguments:
End of explanation
for i, v in enumerate(['tic', 'tac', 'toe']):
print(i, v)
Explanation: Looping Techniques
When looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function.
End of explanation
questions = ['name', 'quest', 'favorite color']
answers = ['lancelot', 'the holy grail', 'blue']
for q, a in zip(questions, answers):
print('What is your {0}? It is {1}.'.format(q, a))
Explanation: zip()
To loop over two or more sequences at the same time, the entries can be paired with the zip() function.
End of explanation
for i in reversed(range(1,10,2)):
print(i)
Explanation: reversed()
To loop over a sequence in reverse, first specify the sequence in a forward direction and then call the reversed() function.
End of explanation
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
for f in sorted(set(basket)):
print(f)
Explanation: sorted()
To loop over a sequence in sorted order, use the sorted() function which returns a new sorted list while leaving the source unaltered.
End of explanation
knights = {'gallahad': 'the pure', 'robin': 'the brave'}
for k, v in knights.items():
print(k, v)
Explanation: items()
When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method.
End of explanation
import math
raw_data = [56.2, float('NaN'), 51.7, 55.3, 52.5, float('NaN'), 47.8]
filtered_data = []
for value in raw_data:
if not math.isnan(value):
filtered_data.append(value)
filtered_data
Explanation: It is sometimes tempting to change a list while you are looping over it; however, it is often simpler and safer to create a new list instead.
End of explanation |
8,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing a Protein Databank File
In this notebook we will do some simple parsings and analysis on a Protein Databank (PDB) file. You don't need to care about proteins to follow this example. The idea is that some (downloaded) files may need some changes before they can be fed into your favourite program or that you may want to perform some simple analisis on them.
For a regular work on PDB files I recommend the use of the Biopython extension Bio.PDB
In this tutorial we will
Step1: Charge calculations
We will only consider the standard protonation state at pH=7. Of course, we need to define the negative residues and the positive residues. You can find the charged residues here
We could use a list, but as the order here is irrelevant, we will use a set (for large collections, checking the presence of an object in a set is much faster than checking in a list. In our case, this is irrelevant).
Step2: Now we need to count the number of residues. The problem is that if we count the number of GLU, ASP,... occurrences, we will be counting atoms, not residues. And as each residue has a different number of atoms, the translation from atoms to residues is not trivial. But residues are numbered in a PDB. They correspond to columns 23-26.
We need to find a change in that column. As a starter, let's see if we can count all the residues this way
Step3: Good! So now that we know how to detect a change in residue, we need to count how many of these residues are in the charged set. | Python Code:
filein = open('data/protein.pdb', 'r')
fileout = open('data/protein_hie.pdb', 'w')
#Finish...
filein.close()
fileout.close()
Explanation: Parsing a Protein Databank File
In this notebook we will do some simple parsing and analysis on a Protein Databank (PDB) file. You don't need to care about proteins to follow this example. The idea is that some (downloaded) files may need some changes before they can be fed into your favourite program, or that you may want to perform some simple analysis on them.
For regular work on PDB files I recommend the use of the Biopython extension Bio.PDB
In this tutorial we will:
Rename HIS residues to HIE residues
Get the total charge of the protein and the number of charged residues.
Translate the protein to its center of mass.
The data directory contains some pdbs.
The PDB is a fixed format. Its format is detailed here. What is relevant here is that the name of the protein residue occupies columns 18-20 and that $x, y, z$ coordinates occupy 31-38, 39-46, 47-54. The numbering starts from 1. Atomic positions are in lines starting with ATOM or HETATM records.
HIS to HIE
End of explanation
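One way the renaming cell could be completed (a sketch, not the only valid solution; it relies only on the column positions given above):

```python
def his_to_hie(line):
    # Residue name occupies columns 18-20, i.e. string indices 17:20.
    # Only touch atomic-position records (ATOM or HETATM).
    if line.startswith(('ATOM', 'HETATM')) and line[17:20] == 'HIS':
        return line[:17] + 'HIE' + line[20:]
    return line

# Applied to the exercise's files it would look like:
# with open('data/protein.pdb') as filein, \
#      open('data/protein_hie.pdb', 'w') as fileout:
#     for line in filein:
#         fileout.write(his_to_hie(line))
```

Using string slicing rather than a blanket str.replace avoids accidentally renaming a "HIS" that happens to appear in some other column.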
positive = set(['ARG', 'LYS']) #ARG and LYS are positive; we consider histidines (HIS) neutral and epsilon protonated (thus the HIE name)
negative = #Finish
charged = #Finish using union method
Explanation: Charge calculations
We will only consider the standard protonation state at pH=7. Of course, we need to define the negative residues and the positive residues. You can find the charged residues here
We could use a list, but as the order here is irrelevant, we will use a set (for large collections, checking the presence of an object in a set is much faster than checking in a list. In our case, this is irrelevant).
End of explanation
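For reference when completing the cell above: at pH 7 the positively charged residues are ARG and LYS, and the negatively charged ones are ASP and GLU. A possible completion:

```python
positive = set(['ARG', 'LYS'])   # arginine and lysine carry a positive charge
negative = set(['ASP', 'GLU'])   # aspartate and glutamate carry a negative charge
charged = positive.union(negative)
print(sorted(charged))           # ['ARG', 'ASP', 'GLU', 'LYS']
```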
total_res = 0
filein = open('data/protein.pdb', 'r')
# Finish...
print("Total number of residues: ", total_res)
Explanation: Now we need to count the number of residues. The problem is that if we count the number of GLU, ASP,... occurrences, we will be counting atoms, not residues. And as each residue has a different number of atoms, the translation from atoms to residues is not trivial. But residues are numbered in a PDB. They correspond to columns 23-26.
We need to find a change in that column. As a starter, let's see if we can count all the residues this way: by counting the times there is a residue change. I can tell you that the protein has 903 residues.
End of explanation
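A sketch of how the counting cell might be completed; the helper below watches the residue-number column for changes (this assumes consecutive residues always differ in that column, as the text suggests):

```python
def count_residues(lines):
    # Residue numbers occupy columns 23-26, i.e. string indices 22:26.
    total_res, last_id = 0, None
    for line in lines:
        if line.startswith(('ATOM', 'HETATM')):
            res_id = line[22:26]
            if res_id != last_id:
                total_res += 1
                last_id = res_id
    return total_res

# With the exercise's file this would be:
# with open('data/protein.pdb') as filein:
#     print("Total number of residues: ", count_residues(filein))
```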
total_charged = 0
charge = 0
filein = open('data/protein.pdb', 'r')
# Finish
print("Total number of charged residues: ", total_charged)
print("Net charge of the protein: ", charge)
Explanation: Good! So now that we know how to detect a change in residue, we need to count how many of these residues are in the charged set.
End of explanation |
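Putting the pieces together, the final cell might be completed along these lines (again a sketch; it reuses the change-detection idea, and the charged-residue sets passed in are the standard pH 7 assignments):

```python
def charge_stats(lines, positive, negative):
    # Count each residue once (change detection on columns 23-26) and
    # tally how many are charged, plus the resulting net charge.
    total_charged, charge, last_id = 0, 0, None
    for line in lines:
        if line.startswith(('ATOM', 'HETATM')):
            res_name, res_id = line[17:20], line[22:26]
            if res_id != last_id:
                last_id = res_id
                if res_name in positive:
                    total_charged += 1
                    charge += 1
                elif res_name in negative:
                    total_charged += 1
                    charge -= 1
    return total_charged, charge

# With the exercise's file:
# with open('data/protein.pdb') as filein:
#     total_charged, charge = charge_stats(filein, {'ARG', 'LYS'}, {'ASP', 'GLU'})
#     print("Total number of charged residues: ", total_charged)
#     print("Net charge of the protein: ", charge)
```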
8,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 01 - solution
The following is my approach to solving these problems. Note that there may be more than one approach to each, and we could even debate the exact solutions.
Problem 1 - Word Counts
Part A - Characters in Little Women
How many times is each of the following characters mentioned by name in the text of Little Women: Jo, Beth, Meg, Amy
Step1: As we can see from the output of the command above, by the straight wording of the question, there are exactly 1,355 mentions of Jo, 683 of Meg, 645 of Amy, and 459 of Beth in Little Women. If we were to assume that diminutive or nickname forms might count as well, we might add mentions of "Megs", "Bethy", and "Meggy" to these counts, for example. For the purposes of this solution, however, I assume that these are not required, because the text might need to be consulted directly by someone familiar with the novel to determine which nicknames are valid, which seems beyond the scope of the assignment.
Part B - Juliet and Romeo in Romeo and Juliet
How many times do each of the characters Juliet and Romeo have speaking lines in Romeo and Juliet? Keep in mind that this is the text of a play.
Step2: First we must recall -- as the problem highlights -- that this text is that of a play. Because of this, we cannot simply count mentions of "Romeo," as we might accidentally inflate the count due to mentions of this character, for example, by other characters in their speaking lines. Instead, we must first look for a patter that indicates Romeo's speaking lines specifically.
Step3: In this brief sample, we can see title lines and metadata that include mention of Romeo, and both stage directions ("Enter Romeo") and spoken lines that include his name. What stands out, though, is that lines spoken by Romeo appear to be delineated by "Rom.", so we can search for this specific pattern. Let's verify that the same should hold true for mentions of Juliet.
Step4: We see that the pattern seems to hold for both. I will assume that matches of the exact characters "Rom." and "Jul." indicate the start of a speaking line for one or the other characters, and will explicitly count only those lines.
Step5: The two pipelines above indicate that Romeo has 163 speaking lines, while Juliet has only 117. To match the specific case with a trailing ., the first regular expressions in both above cases use the -w flag to denote a word match and the escape sequence \. to match the literal trailing period. The second regular expressions include this literal at the end of the match sequence as well, with the trailing literal period in '\w{{2,}}\.' requiring that the match include the period at the end.
Problem 2 - Capital Bikeshare
Part A - Station counts
Which 10 Capital Bikeshare stations were the most popular departing stations in Q1 2016?
Step6: As we can see in the above results, the top ten starting stations in this time period were led by Columbus Circle / Union Station with over 13,000 rides, followed by Dupont Circle and the Lincoln Memorial and the rest as listed.
In the pipeline above, tail -n +2 ensures we skip the header line before the sort process begins.
Which 10 were the most popular destination stations in Q1 2016?
Step7: The above results show us very similar numbers for destination stations during the same time period, with the first four stations unchanged and led again by Union Station with over 13,000 rides. Thomas Circle appears to be a more prominent start station than end station, as does Eastern Market, which does not even make the top ten destination stations.
Part B - bike counts
For the most popular departure station, which 10 bikes were used most in trips departing from there?
In this part, we will use csvgrep to select only the required stations - Union Station, in both cases.
Step8: We can further limit the columns used to cut down on the data flowing through the pipe.
Step9: Above are the most commonly used bikes in trips departing from Union Station, led by bike number W22227. As we might expect, the distribution appears rather uniform. Note that because several bikes had exactly 15 trips starting from Union Station, the list includes the top twelve bikes, rather than the top ten.
Which 10 bikes were used most in trips ending at the most popular destination station?
Step10: Above are the most commonly used bikes in trips arriving at Union Station, led by bike number W00485. It is interesting to note that bike W22227, the top departing bike, is in second place, but bike W00485, the top arriving bike, does not appear in the top ten departing bikes. In any case these also seem at first glance to be uniformly distributed. Again, the list is expanded, this time to fifteen bikes, to account for the tie at exactly fifteen trips.
Problem 3 - Filters
Part A - split and lowercase filters
Write a Python filter that replaces grep -oE '\w{2,}' to split lines of text into one word per line, and write an additional Python filter to replace tr '[:upper:]' '[:lower:]' to transform text into lower case.
Step11: The file split.py is modified from the template to split lines of text into one word per line. To demonstrate this, we can compare the original pipeline with a new pipeline with split.py substituting for the first grep command.
Step12: We can ignore the broken pipe and related errors as the output appears to be correct.
Next, we repeat the pipeline with split.py substituted
Step13: Examining the filter script below, the key line, #14, removes trailing newlines, splits tokens by the space (' '), and removes words that are not entirely alphabetical.
Step14: Almost the exact words listed appear in nearly the same order, but with lower counts for each. We can examine the output of each command to see if there are obvious differences
Step15: We can see straight away on the first few lines that there is a difference. Let's look at the text itself
Step16: Three obvious issues jump out. First, the initial "The" is elided; it is not clear why. Next, "Women" is removed, perhaps due to the trailing comma, which will cause the token to fail the isalpha() test. Also, "Alcott" is removed, perhaps having to do with its position at the end of the line.
We can update the filter to use Python's regular expression module and a similar expression, \w{1,}, to find all matches more intelligently. Here the regular expression is prepared in line 13 and used in line 18.
Step17: This looks much better. We can try the full pipeline again
Step18: This looks to be an exact match.
Step19: The filter lowercase.py is modified from the template to lowercase incoming lines of text.
Step20: Note that the only line aside from the comments that changes in the above script is line #12, which adds the lower() to the print statement.
Step21: This looks correct, so we'll first attempt to replace the original pipeline's use of tr with lowercase.py
Step22: Looks good so far, we are seeing the exact same counts. To address the problem's challenge, we finally replace both filters at once.
Step23: This completes Problem 3 - Part A.
Part B - stop words
Write a Python filter that removes at least ten common words of English text, commonly known as "stop words". Sources of English stop word lists are readily available online, or you may generate your own list from the text.
We begin by acquiring a common list of English stop words, gathered from the site http
Step24: Next we copy the template filter script as before, renaming it appropriately.
Step25: The key changes in stopwords.py from the template are line #13, which imports the list of stopwords, and line #20, which checks whether an incoming word is in the stopword list. Note also that in line #19 the removal of a trailing newline occurs before checking for stopwords.
The assumption that incoming text will already be split into one word per line and lowercased is stated explicitly in the first comment, lines #6-7.
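The modified script itself is not reproduced in this write-up; a sketch of what a stopwords.py filter along these lines might contain (the word list here is an assumed stand-in for the imported one):

```python
# Assumed stand-in for the imported stopword list; any set of common
# English words would do.
STOPWORDS = {'the', 'and', 'to', 'of', 'a', 'in', 'it', 'was',
             'her', 'she', 'that', 'you', 'i', 'with', 'for'}

def drop_stopwords(lines):
    # Assumes input is already one lowercased word per line; the
    # trailing newline is stripped before the membership check.
    for line in lines:
        word = line.rstrip('\n')
        if word not in STOPWORDS:
            yield word

# As a command-line filter this would be driven from sys.stdin:
#     for word in drop_stopwords(sys.stdin):
#         print(word)
```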
Step26: This appears to be correct. Let's put it all together
Step27: This would seem to be correct - we see the names we looked for earlier appearing near the top of the list, and common stop words are indeed removed - however the list starts with odd "words": "t", "s", "m", and "ll". Is it possible that these are occurrences of contractions? We can check a few different ways. First, let's see if our split.py is causing the problem
Step28: No, the results are exactly the same. Instead, we'll need to look for occurrences of "t" and "s" by themselves. The --context option to grep might help us here, pointing out surrounding text to search for in the source.
Step29: Aha, it does appear that the occurences of a bare "t" are from contractions. Let's repeat with "s", which might occur in possessives.
Step30: There we have it - the counts from above were correct, and we could eliminate "t" and "s" from consideration with a grep -v, and we can further assume that the "ll" and "m" occurences are also from contractions, so we'll remove them as well.
Step31: Here we have a final count. It is interesting to note that these counts of character names (Jo, Meg, etc.) are slightly different from before, perhaps due to punctuation handling, but it seems beyond the scope of the question to answer it precisely.
Extra credit - parallel stop words
Use GNU parallel to count the 25 most common words across all the 109 texts in the zip file provided, with stop words removed.
Step32: In the above line, I've limited the word size to one character, removed common contractions, and piped the overall result through the new stopwords.py Python filter. | Python Code:
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/women.txt
!cat women.txt | grep -oE '\w{2,}' \
| grep -e "Jo\|Beth\|Meg\|Amy" \
| tr '[:upper:]' '[:lower:]' \
| sort | uniq -c | sort -rn
Explanation: Problem 01 - solution
The following is my approach to solving these problems. Note that there may be more than one approach to each, and we could even debate the exact solutions.
Problem 1 - Word Counts
Part A - Characters in Little Women
How many times is each of the following characters mentioned by name in the text of Little Women: Jo, Beth, Meg, Amy
End of explanation
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/romeo.txt
Explanation: As we can see from the output of the command above, by the straight wording of the question, there are exactly 1,355 mentions of Jo, 683 of Meg, 645 of Amy, and 459 of Beth in Little Women. If we were to assume that diminutive or nickname forms might count as well, we might add mentions of "Megs", "Bethy", and "Meggy" to these counts, for example. For the purposes of this solution, however, I assume that these are not required, because the text might need to be consulted directly by someone familiar with the novel to determine which nicknames are valid, which seems beyond the scope of the assignment.
Part B - Juliet and Romeo in Romeo and Juliet
How many times does each of the characters Juliet and Romeo have speaking lines in Romeo and Juliet? Keep in mind that this is the text of a play.
End of explanation
!cat romeo.txt | grep "Rom" | head -25
Explanation: First we must recall -- as the problem highlights -- that this text is that of a play. Because of this, we cannot simply count mentions of "Romeo," as we might accidentally inflate the count due to mentions of this character, for example, by other characters in their speaking lines. Instead, we must first look for a pattern that indicates Romeo's speaking lines specifically.
End of explanation
!cat romeo.txt | grep "Jul" | head -25
Explanation: In this brief sample, we can see title lines and metadata that include mention of Romeo, and both stage directions ("Enter Romeo") and spoken lines that include his name. What stands out, though, is that lines spoken by Romeo appear to be delineated by "Rom.", so we can search for this specific pattern. Let's verify that the same should hold true for mentions of Juliet.
End of explanation
!cat romeo.txt | grep -w "Rom\." \
| grep -oE '\w{2,}\.' \
| grep "Rom" \
| sort | uniq -c | sort -rn
!cat romeo.txt | grep -w "Jul\." \
| grep -oE '\w{2,}\.' \
| grep "Jul" \
| sort | uniq -c | sort -rn
Explanation: We see that the pattern seems to hold for both. I will assume that matches of the exact characters "Rom." and "Jul." indicate the start of a speaking line for one or the other characters, and will explicitly count only those lines.
End of explanation
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/2016q1.csv.zip
!unzip 2016q1.csv.zip
!head -5 2016q1.csv | csvlook
!csvcut -n 2016q1.csv
!csvcut -c5 2016q1.csv | tail -n +2 | csvsort | uniq -c | sort -rn | head -10
Explanation: The two pipelines above indicate that Romeo has 163 speaking lines, while Juliet has only 117. To match the specific case with a trailing ., the first regular expressions in both above cases use the -w flag to denote a word match and the escape sequence \. to match the literal trailing period. The second regular expressions include this literal at the end of the match sequence as well, with the trailing literal period in '\w{2,}\.' requiring that the match include the period at the end.
Problem 2 - Capital Bikeshare
Part A - Station counts
Which 10 Capital Bikeshare stations were the most popular departing stations in Q1 2016?
End of explanation
!csvcut -c7 2016q1.csv | tail -n +2 | csvsort | uniq -c | sort -rn | head -10
Explanation: As we can see in the above results, the top ten starting stations in this time period were led by Columbus Circle / Union Station with over 13,000 rides, followed by Dupont Circle and the Lincoln Memorial and the rest as listed.
In the pipeline above, tail -n +2 ensures we skip the header line before the sort process begins.
Which 10 were the most popular destination stations in Q1 2016?
End of explanation
!csvgrep -c5 -m "Columbus Circle / Union Station" 2016q1.csv | head
Explanation: The above results show us very similar numbers for destination stations during the same time period, with the first four stations unchanged and led again by Union Station with over 13,000 rides. Thomas Circle appears to be a more prominent start station than end station, as does Eastern Market, which does not even make the top ten destination stations.
Part B - bike counts
For the most popular departure station, which 10 bikes were used most in trips departing from there?
In this part, we will use csvgrep to select only the required stations - Union Station, in both cases.
End of explanation
!csvcut -c5,8 2016q1.csv | csvgrep -c1 -m "Columbus Circle / Union Station" | head
!csvcut -c5,8 2016q1.csv \
| csvgrep -c1 -m "Columbus Circle / Union Station" \
| csvcut -c2 \
| tail -n +2 \
| sort | uniq -c | sort -rn | head -12
Explanation: We can further limit the columns used to cut down on the data flowing through the pipe.
End of explanation
!csvcut -c7,8 2016q1.csv \
| csvgrep -c1 -m "Columbus Circle / Union Station" \
| csvcut -c2 \
| tail -n +2 \
| sort | uniq -c | sort -rn | head -15
Explanation: Above are the most commonly used bikes in trips departing from Union Station, led by bike number W22227. As we might expect it appears that the distribution seems rather uniform. Note that because several bikes had exactly 15 trips starting from Union Station, the list includes the top twelve bikes, rather than the top ten.
Which 10 bikes were used most in trips ending at the most popular destination station?
End of explanation
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/simplefilter.py
!cp simplefilter.py split.py
Explanation: Above are the most commonly used bikes in trips arriving at Union Station, led by bike number W00485. It is interesting to note that bike W22227, the top departing bike, is in second place, but bike W00485, the top arriving bike, does not appear in the top ten departing bikes. In any case these also seem at first glance to be uniformly distributed. Again, the list is expanded, this time to fifteen bikes, to account for the tie at exactly fifteen trips.
Problem 3 - Filters
Part A - split and lowercase filters
Write a Python filter that replaces grep -oE '\w{2,}' to split lines of text into one word per line, and write an additional Python filter to replace tr '[:upper:]' '[:lower:]' to transform text into lower case.
With your two new filters, repeat the original pipeline, and substitute your new filters as appropriate. You should obtain the same results.
End of explanation
!cat women.txt \
| grep -oE '\w{{1,}}' \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
Explanation: The file split.py is modified from the template to split lines of text into one word per line. To demonstrate this, we can compare the original pipeline with a new pipeline with split.py substituting for the first grep command.
End of explanation
!chmod +x split.py
Explanation: We can ignore the broken pipe and related errors as the output appears to be correct.
Next, we repeat the pipeline with split.py substituted:
End of explanation
!grep -n '' split.py
!cat women.txt \
| ./split.py \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
Explanation: Examining the filter script below, the key line, #14, removes trailing newlines, splits tokens by the space (' '), and removes words that are not entirely alphabetical.
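That token logic can be reproduced in isolation; the sample line is hypothetical, and the Windows-style \r\n ending is an assumption made to show how punctuation and carriage returns make tokens fail isalpha():

```python
# reconstructed sketch of split.py's token logic: strip the newline,
# split on single spaces, keep only purely alphabetic tokens
def split_words(line):
    return [w for w in line.rstrip("\n").split(" ") if w.isalpha()]

words = split_words("Little Women, by Louisa May Alcott\r\n")
print(words)  # "Women," (comma) and "Alcott\r" (carriage return) are dropped
```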
End of explanation
!cat women.txt | grep -oE '\w{{2,}}' | head -25
!cat women.txt | ./split.py | head -25
Explanation: Almost the exact words listed appear in nearly the same order, but with lower counts for each. We can examine the output of each command to see if there are obvious differences:
End of explanation
!head -3 women.txt
Explanation: We can see straight away on the first few lines that there is a difference. Let's look at the text itself:
End of explanation
!grep -n '' split.py
!cat women.txt | ./split.py | head -25
Explanation: Three obvious issues jump out. First, the initial "The" is elided; it is not clear why. Next, "Women" is removed, perhaps due to the trailing comma, which will cause the token to fail the isalpha() test. Also, "Alcott" is removed, perhaps having to do with its position at the end of the line.
We can update the filter to use Python's regular expression module and a similar expression, \w{1,}, to find all matches more intelligently. Here the regular expression is prepared in line 13 and used in line 18.
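The regex-based approach behaves as intended on the same kind of input, recovering every word regardless of adjacent punctuation or line endings (the sample line here is hypothetical):

```python
import re

# mirror of the updated filter: prepare the pattern once, then find all matches
word_re = re.compile(r"\w{1,}")
words = word_re.findall("Little Women, by Louisa May Alcott\r\n")
print(words)
```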
End of explanation
!cat women.txt \
| ./split.py \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
Explanation: This looks much better. We can try the full pipeline again:
End of explanation
!cp simplefilter.py lowercase.py
Explanation: This looks to be an exact match.
End of explanation
!chmod +x lowercase.py
!grep -n '' lowercase.py
Explanation: The filter lowercase.py is modified from the template to lowercase incoming lines of text.
End of explanation
!head women.txt | ./lowercase.py
Explanation: Note that the only line aside from the comments that changes in the above script is line #12, which adds the lower() to the print statement.
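A minimal stand-in for that filter, reading from one stream and writing lowercased lines to another (io.StringIO substitutes for stdin/stdout here):

```python
import io

def lowercase_stream(inp, out):
    # the one functional change from the template: .lower() before writing
    for line in inp:
        out.write(line.lower())

buf = io.StringIO()
lowercase_stream(io.StringIO("The Project GUTENBERG\nLittle WOMEN\n"), buf)
print(buf.getvalue())
```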
End of explanation
!cat women.txt \
| grep -oE '\w{{1,}}' \
| ./lowercase.py \
| sort \
| uniq -c \
| sort -rn \
| head -10
Explanation: This looks correct, so we'll first attempt to replace the original pipeline's use of tr with lowercase.py:
End of explanation
!cat women.txt \
| ./split.py \
| ./lowercase.py \
| sort \
| uniq -c \
| sort -rn \
| head -10
Explanation: Looks good so far, we are seeing the exact same counts. To address the problem's challenge, we finally replace both filters at once.
End of explanation
!wget http://www.textfixer.com/resources/common-english-words.txt
!head common-english-words.txt
Explanation: This completes Problem 3 - Part A.
Part B - stop words
Write a Python filter that removes at least ten common words of English text, commonly known as "stop words". Sources of English stop word lists are readily available online, or you may generate your own list from the text.
We begin by acquiring a common list of English stop words, gathered from the site http://www.textfixer.com/resources/common-english-words.txt as linked from the Wikipedia page on stop words.
End of explanation
!cp simplefilter.py stopwords.py
!chmod +x stopwords.py
!grep -n '' stopwords.py
Explanation: Next we copy the template filter script as before, renaming it appropriately.
End of explanation
!head women.txt | ./split.py | ./lowercase.py | ./stopwords.py
Explanation: The key changes in stopwords.py from the template are line #13, which imports the list of stopwords, and line #20, which checks whether an incoming word is in the stopword list. Note also that in line #19 the removal of a trailing newline occurs before checking for stopwords.
The assumption that incoming text will already be split into one word per line and lowercased is stated explicitly in the first comment, lines #6-7.
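A sketch of that filter with a hypothetical subset of the stop-word list; note the newline is stripped before the membership test, as in line #19:

```python
import io

# hypothetical subset of common-english-words.txt (the real file is comma-separated)
stopwords = set("a,an,and,the,of,to,in,it,is".split(","))

def remove_stopwords(inp, out):
    for line in inp:
        word = line.rstrip("\n")      # strip the newline first
        if word not in stopwords:     # then check against the stop-word set
            out.write(word + "\n")

buf = io.StringIO()
remove_stopwords(io.StringIO("the\nlittle\nwomen\nof\njo\n"), buf)
print(buf.getvalue())
```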
End of explanation
!cat women.txt \
| ./split.py \
| ./lowercase.py \
| ./stopwords.py \
| sort \
| uniq -c \
| sort -rn \
| head -25
Explanation: This appears to be correct. Let's put it all together:
End of explanation
!cat women.txt \
| grep -oE '\w{{1,}}' \
| ./lowercase.py \
| ./stopwords.py \
| sort \
| uniq -c \
| sort -rn \
| head -25
Explanation: This would seem to be correct - we see the names we looked for earlier appearing near the top of the list, and common stop words are indeed removed - however the list starts with odd "words": "t", "s", "m", and "ll". Is it possible that these are occurrences of contractions? We can check a few different ways. First, let's see if our split.py is causing the problem:
End of explanation
!cat women.txt \
| ./split.py \
| ./lowercase.py \
| grep --context=2 -oE '^t$' \
| head -20
!grep -i "we haven't got" women.txt
Explanation: No, the results are exactly the same. Instead, we'll need to look for occurrences of "t" and "s" by themselves. The --context option to grep might help us here, pointing out surrounding text to search for in the source.
End of explanation
!cat women.txt \
| ./split.py \
| ./lowercase.py \
| grep --context=2 -oE '^s$' \
| head -20
!grep -i "amy's valley" women.txt
Explanation: Aha, it does appear that the occurrences of a bare "t" are from contractions. Let's repeat with "s", which might occur in possessives.
End of explanation
!cat women.txt \
| ./split.py \
| ./lowercase.py \
| ./stopwords.py \
    | grep -v -oE '^(s|t|m|ll)$' \
| sort \
| uniq -c \
| sort -rn \
| head -25
Explanation: There we have it - the counts from above were correct, and we could eliminate "t" and "s" from consideration with a grep -v, and we can further assume that the "ll" and "m" occurrences are also from contractions, so we'll remove them as well.
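One subtlety worth noting: without grouping, a pattern like ^s|t|m|ll$ alternates three loose clauses, so it would also discard any word merely containing a t or an m. Anchoring the whole alternation gives the intended exact-word filter; a Python sketch of that semantics:

```python
import re

# exact-match filter for contraction fragments, equivalent to grep -vE '^(s|t|m|ll)$'
fragment = re.compile(r"^(s|t|m|ll)$")

words = ["jo", "s", "time", "ll", "meg", "t"]
kept = [w for w in words if not fragment.match(w)]
print(kept)  # whole-word fragments go; "time" and "meg" survive
```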
End of explanation
!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/texts.zip
!unzip -l texts.zip | head -5
!mkdir all-texts
!unzip -d all-texts texts.zip
!time ls all-texts/*.txt \
    | parallel --eta -j+0 "grep -oE '\w{1,}' {} | tr '[:upper:]' '[:lower:]' | grep -v -oE '^(s|t|m|l|ll|d)$' | ./stopwords.py >> all-words.txt"
Explanation: Here we have a final count. It is interesting to note that these counts of character names (Jo, Meg, etc.) are slightly different from before, perhaps due to punctuation handling, but it seems beyond the scope of the question to answer it precisely.
Extra credit - parallel stop words
Use GNU parallel to count the 25 most common words across all the 109 texts in the zip file provided, with stop words removed.
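The essence of the parallel run is a map step (count words per file) followed by a reduce step (merge the counts); a sequential Python miniature over a hypothetical two-file corpus shows the merge:

```python
from collections import Counter

# hypothetical tiny corpus standing in for the 109 unzipped texts
texts = {"a.txt": "jo meg jo amy", "b.txt": "amy jo beth"}

def count_words(text):
    return Counter(text.split())

total = Counter()
for partial in map(count_words, texts.values()):  # parallel distributes this map
    total += partial                              # the reduce: merge partial counts
print(total.most_common(2))
```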
End of explanation
!wc -l all-words.txt
!time sort all-words.txt | uniq -c | sort -rn | head -25
Explanation: In the above line, I've lowered the minimum word size to one character, removed common contractions, and piped the overall result through the new stopwords.py Python filter.
End of explanation |
8,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations
Step1: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model. We now incorporate the Imputation from Paolo Bestagini via LA_Team's Submission 5.
Step2: Split data into training data and blind data, and output as Numpy arrays
Step3: Data Augmentation
We expand the input data to be acted on by the convolutional layer.
Step4: Convolutional Neural Network
We build a CNN with the following layers (no longer using Sequential() model)
Step5: We train the CNN and evaluate it on precision/recall.
Step6: We display the learned 1D convolution kernels
Step7: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
Step8: Prediction
To predict the STUART and CRAWFORD blind wells we do the following
Step9: Run the model on the blind data
Output a CSV
Plot the wells in the notebook | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install keras
pip install sklearn
from __future__ import print_function
import time
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from keras.preprocessing import sequence
from keras.models import Model, Sequential
from keras.constraints import maxnorm, nonneg
from keras.optimizers import SGD, Adam, Adamax, Nadam
from keras.regularizers import l2, activity_l2
from keras.layers import Input, Dense, Dropout, Activation, Convolution1D, Cropping1D, Cropping2D, Permute, Flatten, MaxPooling1D, merge
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
Explanation: Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations:
- Inserting a convolutional layer as the first layer in the Neural Network
- Initializing the weights of this layer to detect gradients and extrema
- Adding Dropout regularization to prevent overfitting
Since our submission #2 we have:
- Added the distance to the next NM_M transition as a feature (thanks to geoLEARN where we spotted this)
- Removed Recruit F9 from training
... and since our submission #3 we have:
- Included training/predicting on the Formation categories
- Made our facies plot better, including demonstrating our confidence in each prediction
Problem Modeling
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Setup
Check we have all the libraries we need, and import the modules we require. Note that we have used the Theano backend for Keras, and to achieve a reasonable training time we have used an NVidia K20 GPU.
End of explanation
data = pd.read_csv('train_test_data.csv')
# Set 'Well Name' and 'Formation' fields as categories
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
def coding(col, codeDict):
colCoded = pd.Series(col, copy=True)
for key, value in codeDict.items():
colCoded.replace(key, value, inplace=True)
return colCoded
data['Formation_coded'] = coding(data['Formation'], {'A1 LM':1,'A1 SH':2,'B1 LM':3,'B1 SH':4,'B2 LM':5,'B2 SH':6,'B3 LM':7,'B3 SH':8,'B4 LM':9,'B4 SH':10,'B5 LM':11,'B5 SH':12,'C LM':13,'C SH':14})
formation = data['Formation_coded'].values[:,np.newaxis]
# Parameters
feature_names = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
well_names_test = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'Recruit F9', 'NEWBY', 'CHURCHMAN BIBLE']
well_names_validate = ['STUART', 'CRAWFORD']
data_vectors = data[feature_names].values
correct_facies_labels = data['Facies'].values
nm_m = data['NM_M'].values
nm_m_dist = np.zeros((nm_m.shape[0],1), dtype=int)
for i in range(nm_m.shape[0]):
count=1
while (i+count<nm_m.shape[0]-1 and nm_m[i+count] == nm_m[i]):
count = count+1
nm_m_dist[i] = count
nm_m_dist.reshape(nm_m_dist.shape[0],1)
well_labels = data[['Well Name', 'Facies']].values
depth = data['Depth'].values
# Fill missing values and normalize for 'PE' field
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(data_vectors)
data_vectors = imp.transform(data_vectors)
data_vectors = np.hstack([data_vectors, nm_m_dist, formation])
scaler = preprocessing.StandardScaler().fit(data_vectors)
scaled_features = scaler.transform(data_vectors)
data_out = np.hstack([well_labels, scaled_features])
Explanation: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model. We now incorporate the Imputation from Paolo Bestagini via LA_Team's Submission 5.
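A toy NumPy illustration of the two preprocessing steps above, mean imputation of missing values followed by standardization to zero mean and unit variance (the array values are made up):

```python
import numpy as np

# made-up feature matrix with one missing PE-style value
X = np.array([[1.0, 10.0],
              [np.nan, 30.0],
              [3.0, 20.0]])

# Imputer(strategy='mean'): replace each NaN with its column mean
col_mean = np.nanmean(X, axis=0)
nan_rows, nan_cols = np.where(np.isnan(X))
X[nan_rows, nan_cols] = col_mean[nan_cols]

# StandardScaler: zero mean, unit variance per column
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
```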
End of explanation
def preprocess(data_out):
data = data_out
X = data[0:4149,0:12]
y = np.concatenate((data[0:4149,0].reshape(4149,1), np_utils.to_categorical(correct_facies_labels[0:4149]-1)), axis=1)
X_test = data[4149:,0:12]
return X, y, X_test
X_train_in, y_train, X_test_in = preprocess(data_out)
print(X_train_in.shape)
Explanation: Split data into training data and blind data, and output as Numpy arrays
End of explanation
conv_domain = 11
# Reproducibility
np.random.seed(7)
# Load data
def expand_dims(input):
r = int((conv_domain-1)/2)
l = input.shape[0]
n_input_vars = input.shape[1]
output = np.zeros((l, conv_domain, n_input_vars))
for i in range(l):
for j in range(conv_domain):
for k in range(n_input_vars):
output[i,j,k] = input[min(i+j-r,l-1),k]
return output
X_train = np.empty((0,conv_domain,10), dtype=float)
X_test = np.empty((0,conv_domain,10), dtype=float)
y_select = np.empty((0,9), dtype=int)
well_names_train = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'NEWBY', 'CHURCHMAN BIBLE']
for wellId in well_names_train:
X_train_subset = X_train_in[X_train_in[:, 0] == wellId][:,2:12]
X_train_subset = expand_dims(X_train_subset)
X_train = np.concatenate((X_train,X_train_subset),axis=0)
y_select = np.concatenate((y_select, y_train[y_train[:, 0] == wellId][:,1:11]), axis=0)
for wellId in well_names_validate:
X_test_subset = X_test_in[X_test_in[:, 0] == wellId][:,2:12]
X_test_subset = expand_dims(X_test_subset)
X_test = np.concatenate((X_test,X_test_subset),axis=0)
y_train = y_select
print(X_train.shape)
print(X_test.shape)
print(y_select.shape)
Explanation: Data Augmentation
We expand the input data to be acted on by the convolutional layer.
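A pure-Python miniature of the same windowing: every sample gets a window of its neighbors, clamped at the end of the log via min(). Note that near the start the index i + j - r can go negative, which Python wraps to the end of the sequence, visible in the first window below:

```python
def clamped_windows(seq, width):
    # mirrors expand_dims: index min(i + j - r, l - 1) per window position
    r = (width - 1) // 2
    l = len(seq)
    return [[seq[min(i + j - r, l - 1)] for j in range(width)]
            for i in range(l)]

wins = clamped_windows([10, 20, 30, 40], 3)
print(wins)  # first window wraps around: [40, 10, 20]
```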
End of explanation
# Set parameters
input_dim = 10
output_dim = 9
n_per_batch = 128
epochs = 100
crop_factor = int(conv_domain/2)
filters_per_log = 11
n_convolutions = input_dim*filters_per_log
starting_weights = [np.zeros((conv_domain, 1, input_dim, n_convolutions)), np.ones((n_convolutions))]
norm_factor=float(conv_domain)*2.0
for i in range(input_dim):
for j in range(conv_domain):
starting_weights[0][j, 0, i, i*filters_per_log+0] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+1] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+2] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+3] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+4] = (2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+5] = (conv_domain-2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+6] = 0.25
starting_weights[0][j, 0, i, i*filters_per_log+7] = 0.5 if (j%2 == 0) else 0.25
starting_weights[0][j, 0, i, i*filters_per_log+8] = 0.25 if (j%2 == 0) else 0.5
starting_weights[0][j, 0, i, i*filters_per_log+9] = 0.5 if (j%4 == 0) else 0.25
starting_weights[0][j, 0, i, i*filters_per_log+10] = 0.25 if (j%4 == 0) else 0.5
def dnn_model(init_dropout_rate=0.5, main_dropout_rate=0.5,
hidden_dim_1=24, hidden_dim_2=40,
max_norm=10, nb_conv=n_convolutions):
# Define the model
inputs = Input(shape=(conv_domain,input_dim,))
inputs_dropout = Dropout(init_dropout_rate)(inputs)
x1 = Convolution1D(nb_conv, conv_domain, border_mode='valid', weights=starting_weights, activation='tanh', input_shape=(conv_domain,input_dim), input_length=input_dim, W_constraint=nonneg())(inputs_dropout)
x1 = Flatten()(x1)
xn = Cropping1D(cropping=(crop_factor,crop_factor))(inputs_dropout)
xn = Flatten()(xn)
xA = merge([x1, xn], mode='concat')
xA = Dropout(main_dropout_rate)(xA)
xA = Dense(hidden_dim_1, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(xA)
x = merge([xA, xn], mode='concat')
x = Dropout(main_dropout_rate)(x)
x = Dense(hidden_dim_2, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(x)
predictions = Dense(output_dim, init='uniform', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
optimizerNadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)
model.compile(loss='categorical_crossentropy', optimizer=optimizerNadam, metrics=['accuracy'])
return model
# Load the model
t0 = time.time()
model_dnn = dnn_model()
model_dnn.summary()
t1 = time.time()
print("Load time = %d" % (t1-t0) )
def plot_weights(n_convs_disp=input_dim):
layerID=2
print(model_dnn.layers[layerID].get_weights()[0].shape)
print(model_dnn.layers[layerID].get_weights()[1].shape)
fig, ax = plt.subplots(figsize=(12,10))
for i in range(n_convs_disp):
plt.subplot(input_dim,1,i+1)
plt.imshow(model_dnn.layers[layerID].get_weights()[0][:,0,i,:], interpolation='none')
plt.show()
plot_weights(1)
Explanation: Convolutional Neural Network
We build a CNN with the following layers (no longer using Sequential() model):
Dropout layer on input
One 1D convolutional layer (7-point radius)
One 1D cropping layer (just take actual log-value of interest)
Series of Merge layers re-adding result of cropping layer plus Dropout & Fully-Connected layers
Instead of running CNN with gradient features added, we initialize the Convolutional layer weights to achieve this
This allows the CNN to reject them, adjust them or turn them into something else if required
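Why initialize the convolution weights by hand? Ramps respond to local gradients and tent shapes to local extrema, so at epoch zero the filters already extract derivative-like features from each log. A minimal valid-mode 1-D convolution (kernel values chosen here purely for illustration) makes this concrete:

```python
def conv1d_valid(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

log = [1.0, 1.0, 2.0, 4.0, 4.0, 3.0]            # a made-up log curve
gradient = conv1d_valid(log, [-1.0, 0.0, 1.0])  # large magnitude where the log changes
extremum = conv1d_valid(log, [-1.0, 2.0, -1.0]) # large magnitude at local peaks/troughs
print(gradient)
print(extremum)
```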
End of explanation
#Train model
t0 = time.time()
model_dnn.fit(X_train, y_train, batch_size=n_per_batch, nb_epoch=epochs, verbose=2)
t1 = time.time()
print("Train time = %d seconds" % (t1-t0) )
# Predict Values on Training set
t0 = time.time()
y_predicted = model_dnn.predict( X_train , batch_size=n_per_batch, verbose=2)
t1 = time.time()
print("Test time = %d seconds" % (t1-t0) )
# Print Report
# Format output [0 - 8 ]
y_ = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_[i] = np.argmax(y_train[i])
y_predicted_ = np.zeros((len(y_predicted), 1))
for i in range(len(y_predicted)):
y_predicted_[i] = np.argmax( y_predicted[i] )
# Confusion Matrix
conf = confusion_matrix(y_, y_predicted_)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# Print Results
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("\nConfusion Matrix")
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
Explanation: We train the CNN and evaluate it on precision/recall.
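The two accuracy measures reduce to simple sums over the confusion matrix: the diagonal for exact accuracy, plus the neighbor columns for adjacent accuracy. A toy 3-class example (the matrix and neighbor map are hypothetical, in the style of adjacent_facies above):

```python
conf = [[5, 1, 0],
        [2, 6, 1],
        [0, 1, 4]]
adjacent = [[1], [0, 2], [1]]  # hypothetical neighbor map per class

total = sum(sum(row) for row in conf)
acc = sum(conf[i][i] for i in range(3)) / total
adj_acc = sum(conf[i][i] + sum(conf[i][j] for j in adjacent[i])
              for i in range(3)) / total
print(acc, adj_acc)
```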
End of explanation
plot_weights()
Explanation: We display the learned 1D convolution kernels
End of explanation
# Cross Validation
def cross_validate():
t0 = time.time()
estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=epochs, batch_size=n_per_batch, verbose=0)
skf = StratifiedKFold(n_splits=5, shuffle=True)
results_dnn = cross_val_score(estimator, X_train, y_train, cv= skf.get_n_splits(X_train, y_train))
t1 = time.time()
print("Cross Validation time = %d" % (t1-t0) )
print(' Cross Validation Results')
print( results_dnn )
print(np.mean(results_dnn))
cross_validate()
Explanation: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
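The fold mechanics behind that routine can be sketched without scikit-learn (this plain version is not stratified; StratifiedKFold additionally balances class proportions per fold):

```python
def kfold_indices(n, k):
    # split range(n) into k near-equal folds; each fold serves once as validation
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return [(sum(folds[:i] + folds[i + 1:], []), folds[i]) for i in range(k)]

splits = kfold_indices(10, 5)
train, val = splits[0]
print(val, len(train))
```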
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
def make_facies_log_plot(logs, facies_colors, y_test=None, wellId=None):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
facies = np.zeros(2*(int(zbot-ztop)+1))
shift = 0
depth = ztop
for i in range(logs.Depth.count()-1):
while (depth < logs.Depth.values[i] + 0.25 and depth < zbot+0.25):
if (i<logs.Depth.count()-1):
new = logs['Facies'].values[i]
facies[shift] = new
depth += 0.5
shift += 1
facies = facies[0:facies.shape[0]-1]
cluster=np.repeat(np.expand_dims(facies,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=8, gridspec_kw={'width_ratios':[1,1,1,1,1,1,2,2]}, figsize=(10, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
ax[5].plot(logs.NM_M, logs.Depth, '-', color='black')
if (y_test is not None):
for i in range(9):
if (wellId == 'STUART'):
ax[6].plot(y_test[0:474,i], logs.Depth, color=facies_colors[i], lw=1.5)
else:
ax[6].plot(y_test[474:,i], logs.Depth, color=facies_colors[i], lw=1.5)
im=ax[7].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[7])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=5)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel("NM_M")
ax[5].set_xlim(logs.NM_M.min()-1.,logs.NM_M.max()+1.)
ax[6].set_xlabel("Facies Prob")
ax[6].set_xlim(0.0,1.0)
ax[7].set_xlabel('Facies')
ax[0].set_yticklabels([]);
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[6].set_xticklabels([]); ax[7].set_xticklabels([]);
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Prediction
To predict the STUART and CRAWFORD blind wells we do the following:
Set up a plotting function to display the logs & facies.
End of explanation
# DNN model Prediction
y_test = model_dnn.predict( X_test , batch_size=n_per_batch, verbose=0)
predictions_dnn = np.zeros((len(y_test),1))
for i in range(len(y_test)):
predictions_dnn[i] = np.argmax(y_test[i]) + 1
predictions_dnn = predictions_dnn.astype(int)
# Store results
train_data = pd.read_csv('train_test_data.csv')
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data['Facies'] = predictions_dnn
test_data.to_csv('Prediction_StoDIG_3.csv')
for wellId in well_names_validate:
make_facies_log_plot( test_data[test_data['Well Name'] == wellId], facies_colors=facies_colors, y_test=y_test, wellId=wellId)
#for wellId in well_names_test:
# make_facies_log_plot( train_data[train_data['Well Name'] == wellId], facies_colors=facies_colors)
Explanation: Run the model on the blind data
Output a CSV
Plot the wells in the notebook
End of explanation |
8,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Expanding weekly data into daily data
Naver Search Trends provides search volume for a specific search term on a weekly basis
The weekly search volume is scaled to a number between 0 and 100, based on the minimum and maximum search volumes over the search period
When later using this weekly data as an input variable in a data analysis project (predicting a dependent variable),
rather than simply entering one value per week across the board,
we want to place points at 1-day intervals on the straight line between the search volumes of consecutive weeks, converting the data to a daily granularity to improve prediction accuracy
Creating the weekly data
Creating the week list
Step1: Creating the representative-day list for each week
Step2: Random number generator
Step3: Creating the search-volume list with random numbers
Step4: Creating a DataFrame
Step5: Expanding to daily data
Adding 2 more numbers between 1 and 7
Step6: Generating from the actual data
Step7: Once we learn plotting later, this should be easier to understand intuitively
Creating the daily data
Creating the day list with random numbers
Step8: Creating the final DataFrame
## A year consists of 52 weeks
week = list(range(1, 53)) # the first parameter of the range function is the starting number; the second is one greater than the last number
week
len(week)
Explanation: Expanding weekly data into daily data
Naver Search Trends provides search volume for a specific search term on a weekly basis
The weekly search volume is scaled to a number between 0 and 100, based on the minimum and maximum search volumes over the search period
When later using this weekly data as an input variable in a data analysis project (predicting a dependent variable),
rather than simply entering one value per week across the board,
we want to place points at 1-day intervals on the straight line between the search volumes of consecutive weeks, converting the data to a daily granularity to improve prediction accuracy
Creating the weekly data
Creating the week list
End of explanation
## A week consists of 7 days, so mark the 4th day, the middle of the first week, as that week's representative value (day) (the second week's representative day is the 11th)
representative_day = list(range(4, 365, 7)) # the third parameter of range is the step size
representative_day
len(representative_day)
## Once we learn date handling later, we can attach actual dates using this index
Explanation: Creating the representative-day list for each week
End of explanation
import random
random.randint(0, 10) # generate a random integer between 0 and 10 (0 and 10 inclusive)
Explanation: Random number generator
End of explanation
search = [] # create an empty list
## generate weekly search volumes for the 52 weeks (one year)
for i in range(0, 52):
    search = search + [random.randint(0, 100)] # the trend data expresses search volume between 0 and 100, so generate numbers in that range
search
len(search)
Explanation: Creating the search-volume list with random numbers
End of explanation
from pandas import DataFrame
### example
# a DataFrame is a 2-D, table-like data structure
# the keys become columns; giving the values as lists renders it in a nice shape
data = {
    'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
    'year': [2000, 2001, 2002, 2001, 2002],
} # dictionary object
frame = DataFrame(data)
frame
data = {
'week': week,
'day': representative_day,
'search_volume': search,
}
frame = DataFrame(data)
frame
Explanation: Creating the DataFrame
End of explanation
a = [1, 7]
a
b = (a[1]-a[1-1])/3
a1 = a[1-1] + b
a1
a2 = a1 + b
a2
a3 = a2 + b
round(a3, 5) == a[1]
final = [a[1-1]] + [a1] + [a2]
final
Explanation: Expanding to daily data
Inserting 2 more numbers between 1 and 7
End of explanation
final = []
part = []
for i in range(1, 52) :
b = (search[i] - search[i-1])/7
a1 = search[i-1] + b
a2 = a1 + b
a3 = a2 + b
a4 = a3 + b
a5 = a4 + b
a6 = a5 + b
part = [search[i-1]] + [a1] + [a2] + [a3] + [a4] + [a5] + [a6]
final = final + part
final
len(final)
Explanation: Generating from the actual data
End of explanation
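The manual a1…a6 arithmetic in the loop above can be expressed more compactly with numpy.linspace: for each pair of consecutive weekly values, take 7 evenly spaced points that include the left value and exclude the right one, matching the b = (right - left) / 7 stepping used above. A sketch with made-up sample values:

```python
import numpy as np

def expand_weekly_to_daily(weekly):
    # For each consecutive pair of weekly values, take 7 evenly spaced
    # points including the left endpoint and excluding the right one.
    daily = []
    for left, right in zip(weekly[:-1], weekly[1:]):
        daily.extend(np.linspace(left, right, num=7, endpoint=False))
    return daily

daily = expand_weekly_to_daily([0, 70, 35])
# daily[:7] -> [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
```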
day = list(range(4,361)) # computed from day 4 (the middle of the first week) up to the day before day 361 (the middle of the last, 52nd week)
day
len(day)
Explanation: Once we learn plotting, this should be easier to understand intuitively
Building the daily data
Creating the day list
End of explanation
full_data = {
'day': day,
'search volume': final,
}
full_frame = DataFrame(full_data)
full_frame
Explanation: Creating the final DataFrame
End of explanation |
8,171 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimum cost to modify a string
Function to return the minimum cost ; Initialize result ; To store the frequency of characters of the string ; Update the frequencies of the characters of the string ; Loop to check all windows from a - z where window size is K ; Starting index of window ; Ending index of window ; Check if the string contains character ; Check if the character is on left side of window find the cost of modification for character add value to count calculate nearest distance of modification ; Check if the character is on right side of window find the cost of modification for character add value to count calculate nearest distance of modification ; Find the minimum of all costs for modifying the string ; Loop to check all windows Here window contains characters before z and after z of window size K ; Starting index of window ; Ending index of window ; Check if the string contains character ; If characters are outside window find the cost for modifying character add value to count ; Find the minimum of all costs for modifying the string ; Driver code
| Python Code:
def minCost(str1, K):
    n = len(str1)
    res = 999999999
    # cnt[j] stores the frequency of the j-th letter ('a' -> 1, ..., 'z' -> 26)
    cnt = [0 for i in range(27)]
    for i in range(n):
        cnt[ord(str1[i]) - ord('a') + 1] += 1
    # loop over all windows from a-z where the window size is K
    for i in range(1, 26 - K + 1):
        a = i
        b = i + K
        count = 0
        for j in range(1, 27):
            if cnt[j] > 0:
                if j >= a and j >= b:
                    # character to the right of the window: nearest modification distance
                    count = count + min(j - b, 25 - j + a + 1) * cnt[j]
                elif j <= a and j <= b:
                    # character to the left of the window: nearest modification distance
                    count = count + min(a - j, 25 + j - b + 1) * cnt[j]
        res = min(res, count)
    # loop over the windows that wrap around past 'z'
    for i in range(26 - K + 1, 27):
        a = i
        b = (i + K) % 26
        count = 0
        for j in range(1, 27):
            if cnt[j] > 0:
                if j >= b and j <= a:
                    # character outside the wrapped window
                    count = count + min(j - b, a - j) * cnt[j]
        res = min(res, count)
    return res


if __name__ == '__main__':
    str1 = "abcdefghi"
    K = 2
    print(minCost(str1, K))
|
8,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Endpoint layer pattern
Author
Step1: Usage of endpoint layers in the Functional API
An "endpoint layer" has access to the model's targets, and creates arbitrary losses and
metrics using add_loss and add_metric. This enables you to define losses and
metrics that don't match the usual signature fn(y_true, y_pred, sample_weight=None).
Note that you could have separate metrics for training and eval with this pattern.
Step2: Exporting an inference-only model
Simply don't include targets in the model. The weights stay the same.
Step3: Usage of loss endpoint layers in subclassed models | Python Code:
import tensorflow as tf
from tensorflow import keras
import numpy as np
Explanation: Endpoint layer pattern
Author: fchollet<br>
Date created: 2019/05/10<br>
Last modified: 2019/05/10<br>
Description: Demonstration of the "endpoint layer" pattern (layer that handles loss management).
Setup
End of explanation
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy(name="accuracy")
def call(self, logits, targets=None, sample_weight=None):
if targets is not None:
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weight)
self.add_loss(loss)
# Log the accuracy as a metric (we could log arbitrary metrics,
# including different metrics for training and inference.
self.add_metric(self.accuracy_fn(targets, logits, sample_weight))
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
inputs = keras.Input((764,), name="inputs")
logits = keras.layers.Dense(1)(inputs)
targets = keras.Input((1,), name="targets")
sample_weight = keras.Input((1,), name="sample_weight")
preds = LogisticEndpoint()(logits, targets, sample_weight)
model = keras.Model([inputs, targets, sample_weight], preds)
data = {
"inputs": np.random.random((1000, 764)),
"targets": np.random.random((1000, 1)),
"sample_weight": np.random.random((1000, 1)),
}
model.compile(keras.optimizers.Adam(1e-3))
model.fit(data, epochs=2)
Explanation: Usage of endpoint layers in the Functional API
An "endpoint layer" has access to the model's targets, and creates arbitrary losses and
metrics using add_loss and add_metric. This enables you to define losses and
metrics that don't match the usual signature fn(y_true, y_pred, sample_weight=None).
Note that you could have separate metrics for training and eval with this pattern.
End of explanation
inputs = keras.Input((764,), name="inputs")
logits = keras.layers.Dense(1)(inputs)
preds = LogisticEndpoint()(logits, targets=None, sample_weight=None)
inference_model = keras.Model(inputs, preds)
inference_model.set_weights(model.get_weights())
preds = inference_model.predict(np.random.random((1000, 764)))
Explanation: Exporting an inference-only model
Simply don't include targets in the model. The weights stay the same.
End of explanation
class LogReg(keras.Model):
def __init__(self):
super(LogReg, self).__init__()
self.dense = keras.layers.Dense(1)
self.logistic_endpoint = LogisticEndpoint()
def call(self, inputs):
# Note that all inputs should be in the first argument
# since we want to be able to call `model.fit(inputs)`.
logits = self.dense(inputs["inputs"])
preds = self.logistic_endpoint(
logits=logits,
targets=inputs["targets"],
sample_weight=inputs["sample_weight"],
)
return preds
model = LogReg()
data = {
"inputs": np.random.random((1000, 764)),
"targets": np.random.random((1000, 1)),
"sample_weight": np.random.random((1000, 1)),
}
model.compile(keras.optimizers.Adam(1e-3))
model.fit(data, epochs=2)
Explanation: Usage of loss endpoint layers in subclassed models
End of explanation |
8,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
All sky research
Only ∼ 3 × 10³ neutron stars have been observed, out of the ∼ 10⁹
estimated in the Milky Way
⇒ a “blind” search over the whole sky.
Problems
* very small amplitudes;
* Doppler modulation due to the Earth's motions;
* the size of the parameter space
Step1: The new code
Step2: Let's review it | Python Code:
binh_df0ORIG=zeros(nbin_d,nbin_f0); % HM matrix container
for it = 1:nTimeSteps
kf=(peaks(2,ii0:ii(it))-inifr)/ddf; % normalized frequencies
w=peaks(5,ii0:ii(it)); % wiener weights
t=peaks(1,ii0)*Day_inSeconds; % time conversion days to s
tddf=t/ddf;
f0_a=kf-deltaf2;
for id = 1:nbin_d % loop for the creation of half-differential map
td=d(id)*tddf;
a=1+round(f0_a-td+I500);
binh_df0ORIG(id,a)=binh_df0ORIG(id,a)+w; % left edge of the strips
end
ii0=ii(it)+1;
end
binh_df0ORIG(:,deltaf2*2+1:nbin_f0)=...
binh_df0ORIG(:,deltaf2*2+1:nbin_f0)-binh_df0ORIG(:,1:nbin_f0-deltaf2*2); % half to full diff. map - Carl Sabottke idea
binh_df0ORIG=cumsum(binh_df0ORIG,2); % creation of the Hough map
Explanation: All sky research
Only ∼ 3 × 10³ neutron stars have been observed, out of the ∼ 10⁹
estimated in the Milky Way
⇒ a “blind” search over the whole sky.
Problems
* very small amplitudes;
* Doppler modulation due to the Earth's motions;
* the size of the parameter space: frequency, spin-down, sky position.
Frequency-Hough pipeline
Discrete Fourier transforms such that
∆ν < ∆ν$_{Doppler}$ (8192 s, 4096 s)
Time-frequency map (peakmap)
Doppler correction for every sky position
Frequency-Hough transform
Candidate selection
Astone, P. et al., Phys. Rev. D 90.4, 042002 (2014).
The original code
End of explanation
def rowTransform(ithSD):
sdTimed = tf.multiply(spindowns[ithSD], times)
transform = tf.round(frequencies-sdTimed+securbelt/2)
transform = tf.cast(transform, dtype=tf.int32)
values = tf.unsorted_segment_sum(weights, transform, numColumns)
values = tf.cast(values, dtype=tf.float32)
return values
houghLeft = tf.map_fn(rowTransform, tf.range(0, numRows), dtype=tf.float32, parallel_iterations=8)
houghRight = houghLeft[:,enhancement:numColumns]-houghLeft[:,0:numColumns - enhancement]
houghDiff = tf.concat([houghLeft[:,0:enhancement],houghRight],1)
houghMap = tf.cumsum(houghDiff, axis = 1)
Explanation: Il nuovo codice
End of explanation
#this function computes the Hough transform histogram for a given spindown
def rowTransform(ithSD):
sdTimed = tf.multiply(spindowns[ithSD], times)
transform = tf.round(frequencies-sdTimed+securbelt/2)
transform = tf.cast(transform, dtype=tf.int32)
    #the rounding operation brings a certain number of peaks into the same
    #frequency-spindown bin of the Hough map
    #the left edge is then computed by binning those peaks properly
    #(according to their values if the peakmap was adaptive)
    #this is the core of the algorithm and carries most of the computational effort
values = tf.unsorted_segment_sum(weights, transform, numColumns)
values = tf.cast(values, dtype=tf.float32)
return values
#to keep memory usage under control, the map function is a
#very useful tool to apply the same function over a vector
#in this way the vectorization is preserved
houghLeft = tf.map_fn(rowTransform, tf.range(0, numRows),
dtype=tf.float32, parallel_iterations=8)
#let's superimpose the right edge on the image
houghRight = houghLeft[:,enhancement:numColumns]-houghLeft[:,0:numColumns - enhancement]
houghDiff = tf.concat([houghLeft[:,0:enhancement],houghRight],1)
#at last, the Hough map is computed integrating along the frequencies
houghMap = tf.cumsum(houghDiff, axis = 1)
Explanation: Let's review it
End of explanation |
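The accumulation at the heart of rowTransform is tf.unsorted_segment_sum, which sums every weight into the output bin selected by its segment id. Its behavior can be illustrated with a small NumPy analog (the values below are purely illustrative):

```python
import numpy as np

def segment_sum(weights, segment_ids, num_segments):
    # NumPy analog of tf.unsorted_segment_sum: each weight is
    # accumulated into the bin given by its segment id.
    out = np.zeros(num_segments)
    np.add.at(out, segment_ids, weights)  # unbuffered in-place add
    return out

hist = segment_sum(weights=[1.0, 0.5, 2.0, 1.0],
                   segment_ids=[0, 2, 2, 1],
                   num_segments=4)
# hist -> array([1. , 1. , 2.5, 0. ])
```

In the Hough transform above, the segment ids are the rounded frequency bins, so this single call builds one histogram row without an explicit Python loop.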
8,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimize
Step1: Let's go over the columns
Step2: How many records do we have now?
Step3: Let's break it down by user
Step4: Let's convert it over to a Pandas DataFrame so we can chart it and examine it closer
Step5: That's neat. But let's add in some data from another dataset -- the Estimize Revisions data. For the same timeframe, the revisions data provides each revision to the overall consensus estimates. So where estimates_free data provides every single estimate made by an individual on the Estimize site, revisions_free provides rolled up summaries of the estimates.
Step6: For this quick demonstration, let's just grab the consensus mean from the revisions_free data set and convert it over to Pandas. Note, we need to rename the mean column name because it causes problems otherwise
Step7: Let's chart that in the same chart again so we get a trend of the mean over time, overlayed on a chart of each individual analyst estimate | Python Code:
# import the free sample of the dataset
from quantopian.interactive.data.estimize import estimates_free
# or if you want to import the full dataset, use:
# from quantopian.interactive.data.estimize import estimates
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# Let's use blaze to understand the data a bit using Blaze dshape()
estimates_free.dshape
# And how many rows are in this free sample?
# N.B. we're using a Blaze function to do this, not len()
estimates_free.count()
# Let's see what the data looks like. We'll grab the first three rows.
estimates_free.head(3)
Explanation: Estimize: Analyst-by-Analyst Estimates
In this notebook, we'll take a look at Estimize's Analyst-by-Analyst Estimates dataset, available on the Quantopian platform. This dataset spans January, 2010 through the current day.
This data contains a record for every estimate made by an individual on the Estimize product. By comparison, the Estimize Revisions product provides rolled-up consensus numbers for each possible earnings announcement.
In this notebook, we'll examine these detailed estimates and pull in that consensus data as well.
Blaze
Before we dig into the data, we want to tell you about how you generally access Quantopian Store data sets. These datasets are available using the Blaze library. Blaze provides the Quantopian user with a convenient interface to access very large datasets.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
To learn more about using Blaze and generally accessing Quantopian Store data, clone this tutorial notebook.
Free samples and limits
A few key caveats:
1) We limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
2) There is a free version of this dataset as well as a paid one. The free one includes about three years of historical data, though not up to the current day.
With preamble in place, let's get started:
End of explanation
stocks = symbols('TSLA')
one_quarter = estimates_free[(estimates_free.sid == stocks.sid) &
(estimates_free.fiscal_year == '2014') &
(estimates_free.fiscal_quarter == '1') &
(estimates_free.eps < 100)
]
one_quarter.head(5)
Explanation: Let's go over the columns:
- analyst_id: the unique identifier assigned by Estimize for the person making the estimate.
- asof_date: Estimize's timestamp of event capture.
- eps: EPS estimate made by the analyst on the asof_date
- fiscal_quarter: fiscal quarter for which this estimate is made, related to fiscal_year
- fiscal_year: fiscal year for which this estimate is made, related to fiscal_quarter
- revenue: revenue estimate made by the analyst on the asof_date
- symbol: ticker symbol provided by Estimize for the company for whom these estimates have been made
- username: Estimize username of the analyst making this estimate
- timestamp: the datetime when Quantopian registered the data. For data loaded up via initial, historic loads, this timestamp is an estimate.
- sid: the equity's unique identifier. Use this instead of the symbol. Derived by Quantopian using the symbol and our market data
We've done much of the data processing for you. Fields like asof_date and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, let's just look at the estimates made for TSLA for a particular quarter. Also, we're filtering out some spurious data:
End of explanation
one_quarter.count()
Explanation: How many records do we have now?
End of explanation
one_quarter.username.count_values()
Explanation: Let's break it down by user:
End of explanation
one_q_df = odo(one_quarter.sort('asof_date'), pd.DataFrame)
plt.plot(one_q_df.asof_date, one_q_df.eps, marker='.', linestyle='None', color='r')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("EPS Estimate")
plt.title("Analyst by Analyst EPS Estimates for TSLA")
plt.legend(["Individual Estimate"], loc=2)
Explanation: Let's convert it over to a Pandas DataFrame so we can chart it and examine it closer
End of explanation
from quantopian.interactive.data.estimize import revisions_free
consensus = revisions_free[(revisions_free.sid == stocks.sid) &
(revisions_free.fiscal_year == '2014') &
(revisions_free.fiscal_quarter == '1') &
(revisions_free.source == 'estimize') &
(revisions_free.metric == 'eps')
]
consensus.head(3)
Explanation: That's neat. But let's add in some data from another dataset -- the Estimize Revisions data. For the same timeframe, the revisions data provides each revision to the overall consensus estimates. So where estimates_free data provides every single estimate made by an individual on the Estimize site, revisions_free provides rolled up summaries of the estimates.
End of explanation
consensus_df = odo(consensus[['asof_date', 'mean']].sort('asof_date'), pd.DataFrame)
consensus_df.rename(columns={'mean':'eps_mean'}, inplace=True)
Explanation: For this quick demonstration, let's just grab the consensus mean from the revisions_free data set and convert it over to Pandas. Note, we need to rename the mean column name because it causes problems otherwise:
End of explanation
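The clash is easy to demonstrate in a tiny standalone pandas example (illustrative data only): a column literally named mean is shadowed by the DataFrame.mean method under attribute access, which is what the rename avoids:

```python
import pandas as pd

df = pd.DataFrame({'mean': [1.0, 2.0]})
# Attribute access resolves to the DataFrame method, not the column:
assert callable(df.mean)
df = df.rename(columns={'mean': 'eps_mean'})
# After the rename, attribute access reaches the data again:
assert list(df.eps_mean) == [1.0, 2.0]
```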
plt.plot(consensus_df.asof_date, consensus_df.eps_mean)
plt.plot(one_q_df.asof_date, one_q_df.eps, marker='.', linestyle='None', color='r')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("EPS Estimate")
plt.title("EPS Estimates for TSLA")
plt.legend(["Mean Estimate", "Individual Estimates"], loc=2)
Explanation: Let's chart that in the same chart again so we get a trend of the mean over time, overlayed on a chart of each individual analyst estimate:
End of explanation |
8,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serving models using NVIDIA Triton Inference Server and Vertex AI Prediction
This notebook demonstrates how to serve NVIDIA Merlin HugeCTR deep learning models using NVIDIA Triton Inference Server and Vertex AI Prediction.
The notebook compiles prescriptive guidance for the following tasks
Step1: Change the following variables according to your definitions.
Step2: Change the following variables ONLY if necessary.
You can leave the default variables.
Step3: Initialize Vertex AI SDK
Step4: 1. Exporting Triton ensemble model
A Triton ensemble model represents a pipeline of one or more models and the connection of input and output tensors between these models. Ensemble models are intended to encapsulate inference pipelines that involve multiple steps, each performed by a different model. For example, a common "data preprocessing -> inference -> data postprocessing" pattern. Using ensemble models for this purpose can avoid the overhead of transferring intermediate tensors between client and serving endpoints and minimize the number of requests that must be sent to Triton.
In our case, an inference pipeline comprises two steps
Step5: Export the ensemble model
The src.export.export_ensemble utility function takes a number of arguments that are required to set up a proper flow of tensors between inputs and outputs of the NVTabular workflow and the HugeCTR model.
model_name - The model name that will be used as a prefix for the generated ensemble artifacts.
workflow_path - The local path to the NVTabular workflow
saved_model_path - The local path to the saved HugeCTR model
output_path - The local path to the location where an ensemble will be exported
model_repository_path - The path to use as a root in ps.json and other config files
max_batch - The maximum size of a serving batch that will be supported by the ensemble
The following settings should match the settings of the NVTabular workflow
categorical_columns - The list of names of categorical input features to the NVTabular workflow
continuous_columns - The list of names of continuous input features to the NVTabular workflow
The following settings should match the respective settings in the HugeCTR model
num_outputs - The number of outputs from the HugeCTR model
embedding_vector_size - The size of an embedding vector used by the HugeCTR model
num_slots - The number of slots used for sparse features of the HugeCTR model
max_nnz - This value controls how sparse features are coded in the embedding arrays
As noted before, in this notebook we assume that you generated the NVTabular workflow and the HugeCTR model using the 01-dataset-preprocessing.ipynb and 02-model-training-hugectr.ipynb notebooks. The workflow captures the preprocessing logic for the Criteo dataset and the HugeCTR model is an implementation of the DeepFM CTR model.
Step6: The previous cell created the following local folder structure
Step7: The deepfm folder contains artifacts and configurations for the HugeCTR model. The deepfm_ens folder contains a configuration for the ensemble model. And the deepfm_nvt contains artifacts and configurations for the NVTabular preprocessing workflow. The ps.json file contains information required by the Triton's HugeCTR backend.
Notice that the file paths in ps.json use the value from model_repository_path.
Step8: Upload the ensemble to GCS
In the later steps you will register the exported ensemble model as a Vertex AI Prediction model resource. Before doing that we need to move the ensemble to GCS.
Step9: 2. Building a custom serving container
The custom serving container is derived from the NVIDIA NGC Merlin inference container. It adds Google Cloud SDK and an entrypoint script that executes the tasks described in detail in the overview.
Step10: As described in detail in the overview, the entry point script copies the ensemble artifacts to the serving container's local file system and starts Triton.
Step11: You use Cloud Build to build the serving container and push it to your project's Container Registry.
Step12: 3. Uploading the model and its metadata to Vertex Models.
In the following cell you will register (upload) the ensemble model as a Vertex AI Prediction Model resource.
Refer to Use a custom container for prediction guide for detailed information about creating Vertex AI Prediction Model resources.
Notice that the value of model_repository_path that was used when exporting the ensemble is passed as a command line parameter to the serving container. The entrypoint script in the container will copy the ensemble artifacts to this location when the container starts. This ensures that the locations of the artifacts in the container's local file system and the paths in the ps.json and other configuration files used by Triton match.
Step13: 4. Deploying the model to Vertex AI Prediction.
Deploying a Vertex AI Prediction Model is a two step process. First you create an endpoint that will expose an external interface to clients consuming the model. After the endpoint is ready you can deploy multiple versions of a model to the endpoint.
Refer to Deploy a model using the Vertex AI API guide for more information about the APIs used in the following cells.
Create the Vertex Endpoint
Before deploying the ensemble model you need to create a Vertex AI Prediction endpoint.
Step14: Deploy the model to Vertex Prediction endpoint
After the endpoint is ready, you can deploy your ensemble model to the endpoint. You will run the ensemble on a GPU node equipped with the NVIDIA Tesla T4 GPUs.
Refer to Deploy a model using the Vertex AI API guide for more information.
Step15: 5. Invoking the model
To invoke the ensemble through the Vertex AI Prediction endpoint you need to format your request using a standard Inference Request JSON Object or an Inference Request JSON Object with a binary extension and submit a request to the Vertex AI Prediction REST rawPredict endpoint. You need to use the rawPredict rather than the predict endpoint because inference request formats used by Triton are not compatible with the Vertex AI Prediction standard input format.
The below cell shows a sample request body formatted as a standard Inference Request JSON Object. The request encapsulates a batch of three records from the Criteo dataset.
Step16: You can invoke the Vertex AI Prediction rawPredict endpoint using any HTTP tool or library, including curl. | Python Code:
import json
import os
import shutil
import time
from pathlib import Path
from src.serving import export
from google.cloud import aiplatform as vertex_ai
Explanation: Serving models using NVIDIA Triton Inference Server and Vertex AI Prediction
This notebook demonstrates how to serve NVIDIA Merlin HugeCTR deep learning models using NVIDIA Triton Inference Server and Vertex AI Prediction.
The notebook compiles prescriptive guidance for the following tasks:
Creating Triton ensemble models that combine NVTabular preprocessing workflows and Merlin HugeCTR models
Building a Vertex Prediction custom serving container image for serving the ensemble models with NVIDIA Triton Inference server.
Uploading the ensemble model and its metadata to Vertex AI Model Resources.
Deploying the model with the Triton Inference Server container to Vertex Prediction Endpoints.
Getting online predictions from the deployed ensemble.
To fully benefit from the content covered in this notebook, you should have a solid understanding of key Vertex AI Prediction concepts like models, endpoints, and model deployments. We strongly recommend reviewing Vertex AI Prediction documentation before proceeding.
NVIDIA Triton Inference Server Overview
Triton Inference Server provides an inferencing solution optimized for both CPUs and GPUs. Triton can run multiple models from the same or different frameworks concurrently on a single GPU or CPU. In a multi-GPU server, it automatically creates an instance of each model on each GPU to increase utilization without extra coding. It supports real-time inferencing, batch inferencing to maximize GPU/CPU utilization, and streaming inference with built-in support for audio streaming input. It also supports model ensembles for use cases that require multiple models to perform end-to-end inference.
The following figure shows the Triton Inference Server high-level architecture.
<img src="images/triton-architecture.png" alt="Triton Architecture" style="width:50%"/>
The model repository is a file-system based repository of the models that Triton will make available for inferencing.
Inference requests arrive at the server via either HTTP/REST or gRPC and are then routed to the appropriate per-model scheduler.
Triton implements multiple scheduling and batching algorithms that can be configured on a model-by-model basis.
The backend performs inferencing using the inputs provided in the batched requests to produce the requested outputs.
Triton server provides readiness and liveness health endpoints, as well as utilization, throughput, and latency metrics, which enable the integration of Triton into deployment environments, such as Vertex AI Prediction.
Refer to Triton Inference Server Architecture for more detailed information.
Triton Inference Server on Vertex AI Prediction
In this section, we describe the deployment of Triton Inference Server on Vertex AI Prediction. Although, the focus of this notebook is on demonstrating how to serve an ensemble of an NVTabular preprocessing workflow and a HugeCTR model, the outlined design patterns are applicable to a wider set of serving scenarios. The following figure shows a deployment architecture.
<img src="./images/triton-in-vertex.png" alt="Triton on Vertex AI Prediction" style="width:70%"/>
Triton Inference Server runs inside a container based on a custom serving image. The custom container image is built on top of NVIDIA Merlin Inference image and adds packages and configurations to align with Vertex AI requirements for custom serving container images.
An ensemble to be served by Triton is registered with Vertex AI Prediction as a Model. The Model's metadata reference a location of the ensemble artifacts in Google Cloud Storage and the custom serving container and its configurations.
After the model is deployed to a Vertex AI Prediction endpoint, the entrypoint script of the custom container copies the ensemble's artifacts from the GCS location to a local file system in the container. It then starts Triton, referencing a local copy of the ensemble as Triton's model repository.
Triton loads the models comprising the ensemble and exposes inference, health, and model management REST endpoints using standard inference protocols. The Triton's inference endpoint - /v2/models/{ENSEMBLE_NAME}/infer is mapped to Vertex AI Prediction predict route and exposed to external clients through Vertex Prediction endpoint. The Triton's health endpoint - /v2/health/ready - is mapped to Vertex AI Prediction health route and used by Vertex AI Prediction for health checks.
Setup
In this section of the notebook you configure your environment settings, including a GCP project, a Vertex AI compute region, and a Vertex AI staging GCS bucket. You also set the locations of the fitted NVTabular workflow and the trained HugeCTR model created by the 01-dataset-preprocessing.ipynb and 02-model-training-hugectr.ipynb notebooks, as well as the Vertex AI model name, description, and endpoint name.
Make sure to update the below cells with the values reflecting your environment.
First import all the necessary python packages.
End of explanation
# Project definitions
PROJECT_ID = '<YOUR PROJECT ID>' # Change to your project id.
REGION = '<LOCATION OF RESOURCES>' # Change to your region.
# Bucket definitions
STAGING_BUCKET = '<YOUR BUCKET NAME>' # Change to your bucket.
WORKFLOW_MODEL_PATH = "gs://..." # Change to GCS path of the nvt workflow
HUGECTR_MODEL_PATH = "gs://..." # Change to GCS path of the hugectr trained model
Explanation: Change the following variables according to your definitions.
End of explanation
MODEL_ARTIFACTS_REPOSITORY = f'gs://{STAGING_BUCKET}/recsys-models'
MODEL_NAME = 'deepfm'
MODEL_VERSION = 'v01'
MODEL_DISPLAY_NAME = f'criteo-hugectr-{MODEL_NAME}-{MODEL_VERSION}'
MODEL_DESCRIPTION = 'HugeCTR DeepFM model'
ENDPOINT_DISPLAY_NAME = f'hugectr-{MODEL_NAME}-{MODEL_VERSION}'
LOCAL_WORKSPACE = '/home/jupyter/serving_notebook'
IMAGE_NAME = 'triton-serving'
IMAGE_URI = f'gcr.io/{PROJECT_ID}/{IMAGE_NAME}'
DOCKERNAME = 'triton'
Explanation: Change the following variables ONLY if necessary.
You can leave the default variables.
End of explanation
vertex_ai.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=STAGING_BUCKET
)
Explanation: Initialize Vertex AI SDK
End of explanation
if os.path.isdir(LOCAL_WORKSPACE):
shutil.rmtree(LOCAL_WORKSPACE)
os.makedirs(LOCAL_WORKSPACE)
!gsutil -m cp -r {WORKFLOW_MODEL_PATH} {LOCAL_WORKSPACE}
!gsutil -m cp -r {HUGECTR_MODEL_PATH} {LOCAL_WORKSPACE}
Explanation: 1. Exporting Triton ensemble model
A Triton ensemble model represents a pipeline of one or more models and the connection of input and output tensors between these models. Ensemble models are intended to encapsulate inference pipelines that involves multiple steps, each performed by a different model. For example, a common "data preprocessing -> inference -> data postprocessing" pattern. Using ensemble models for this purpose can avoid the overhead of transferring intermediate tensors between client and serving endpoints and minimize the number of requests that must be sent to Triton.
In our case, an inference pipeline comprises two steps: input preprocessing using a fitted NVTabular workflow and generating predictions using a HugeCTR ranking model.
An ensemble model is not an actual serialized model. No additional model artifacts are created when an ensemble is defined. It is a configuration that specifies which actual models comprise the ensemble, the execution flow when processing an inference request, and the flow of data between the inputs and outputs of the component models. This configuration is defined using the same protocol-buffer-based configuration format used for serving other model types in Triton. Refer to the Triton Inference Server Model Configuration guide for detailed information about configuring models and model ensembles.
You can create an ensemble model manually by arranging the component models into the prescribed folder structure and editing the required configuration files. For ensemble models that follow the "NVTabular workflow -> Inference Model" processing pattern, you can use a set of utility functions provided by the nvtabular.inference.triton module. Specifically, to create an "NVTabular workflow -> HugeCTR model" ensemble, as used in this notebook, you can call the nvtabular.inference.triton.export_hugectr_ensemble function.
We have encapsulated the ensemble export logic in the src.serving.export_ensemble function. In addition to calling nvtabular.inference.triton.export_hugectr_ensemble, the function also creates a JSON configuration file required by Triton when serving HugeCTR models. This file - ps.json - specifies the locations of different components comprising a saved HugeCTR model and is used by Triton HugeCTR backend to correctly load the saved model and prepare it for serving.
Recall that the entrypoint script in the custom serving container copies the ensemble's models artifacts from a source GCS location as prepared by Vertex AI Prediction into the serving container's local file systems. The ps.json file needs to use the paths that correctly point to saved model artifacts in the container's file system. Also some of the paths embedded in the configs generated by nvtabular.inference.triton.export_hugectr_ensemble use absolute paths and need to be properly set. The src.serving.export_ensemble function handles all of that. You can specify the target root folder in the containers local file system using the model_repository_path parameter and all the paths will be adjusted accordingly.
Copy a HugeCTR saved model and a fitted NVTabular workflow to a local staging folder
The nvtabular.inference.triton.export_hugectr_ensemble does not support GCS. As such you need to copy NVTabular workflow and HugeCTR model artifacts to a local file system.
End of explanation
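Before wiring this up in Triton, the data flow that the ensemble encodes can be sketched in plain Python. This is purely illustrative — the function names and logic below are stand-ins, not part of the NVTabular or Triton APIs:

```python
# Illustrative "preprocess -> inference" pipeline. A Triton ensemble plays the
# same role server-side, so the intermediate tensors never travel back to the client.

def nvt_preprocess(raw_row):
    # Stand-in for the fitted NVTabular workflow: scale continuous features.
    return {k: float(v) / 100.0 for k, v in raw_row.items()}

def hugectr_predict(features):
    # Stand-in for the HugeCTR ranking model: produce a single CTR-like score.
    return sum(features.values()) / max(len(features), 1)

def ensemble(raw_row):
    # One request in, one prediction out; the intermediate dict stays server-side.
    return hugectr_predict(nvt_preprocess(raw_row))

score = ensemble({"I1": 5, "I2": 110})
```

The client sends raw features and receives only the final score, which is exactly the overhead-saving property the ensemble configuration provides.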
NUM_SLOTS = 26
MAX_NNZ = 2
EMBEDDING_VECTOR_SIZE = 11
MAX_BATCH_SIZE = 64
NUM_OUTPUTS = 1
continuous_columns = ["I" + str(x) for x in range(1, 14)]
categorical_columns = ["C" + str(x) for x in range(1, 27)]
label_columns = ["label"]
local_workflow_path = str(Path(LOCAL_WORKSPACE) / Path(WORKFLOW_MODEL_PATH).parts[-1])
local_saved_model_path = str(Path(LOCAL_WORKSPACE) / Path(HUGECTR_MODEL_PATH).parts[-1])
local_ensemble_path = str(Path(LOCAL_WORKSPACE) / f'triton-ensemble-{time.strftime("%Y%m%d%H%M%S")}')
model_repository_path = '/model'
export.export_ensemble(
model_name=MODEL_NAME,
workflow_path=local_workflow_path,
saved_model_path=local_saved_model_path,
output_path=local_ensemble_path,
categorical_columns=categorical_columns,
continuous_columns=continuous_columns,
label_columns=label_columns,
num_slots=NUM_SLOTS,
max_nnz=MAX_NNZ,
num_outputs=NUM_OUTPUTS,
embedding_vector_size=EMBEDDING_VECTOR_SIZE,
max_batch_size=MAX_BATCH_SIZE,
model_repository_path=model_repository_path
)
Explanation: Export the ensemble model
The src.export.export_ensemble utility function takes a number of arguments that are required to set up a proper flow of tensors between inputs and outputs of the NVTabular workflow and the HugeCTR model.
model_name - The model name that will be used as a prefix for the generated ensemble artifacts.
workflow_path - The local path to the NVTabular workflow
saved_model_path - The local path to the saved HugeCTR model
output_path - The local path to the location where an ensemble will be exported
model_repository_path - The path to use as a root in ps.json and other config files
max_batch_size - The maximum size of a serving batch that will be supported by the ensemble
The following settings should match the settings of the NVTabular workflow
categorical_columns - The list of names of categorical input features to the NVTabular workflow
continuous_columns - The list of names of continuous input features to the NVTabular workflow
The following settings should match the respective settings in the HugeCTR model
num_outputs - The number of outputs from the HugeCTR model
embedding_vector_size - The size of an embedding vector used by the HugeCTR model
num_slots - The number of slots used for sparse features of the HugeCTR model
max_nnz - This value controls how sparse features are coded in the embedding arrays
As noted before, in this notebook we assume that you generated the NVTabular workflow and the HugeCTR model using the 01-dataset-preprocessing.ipynb and 02-model-training-hugectr.ipynb notebooks. The workflow captures the preprocessing logic for the Criteo dataset, and the HugeCTR model is an implementation of the DeepFM CTR model.
End of explanation
! ls -la {local_ensemble_path}
Explanation: The previous cell created the following local folder structure
End of explanation
! cat {local_ensemble_path}/ps.json
Explanation: The deepfm folder contains artifacts and configurations for the HugeCTR model. The deepfm_ens folder contains the configuration for the ensemble model. The deepfm_nvt folder contains artifacts and configurations for the NVTabular preprocessing workflow. The ps.json file contains information required by Triton's HugeCTR backend.
Notice that the file paths in ps.json use the value from model_repository_path.
End of explanation
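The path adjustment that the export function performs can be illustrated generically. The config layout below is a made-up stand-in, not the real ps.json schema — treat it as a sketch of the idea:

```python
import json

def rebase_paths(obj, old_root, new_root):
    # Recursively rewrite any string value that starts with old_root.
    if isinstance(obj, dict):
        return {k: rebase_paths(v, old_root, new_root) for k, v in obj.items()}
    if isinstance(obj, list):
        return [rebase_paths(v, old_root, new_root) for v in obj]
    if isinstance(obj, str) and obj.startswith(old_root):
        return new_root + obj[len(old_root):]
    return obj

# Hypothetical config fragment: rebase from a local export folder to the
# container-local model repository root ("/model" in this notebook).
config = {"models": [{"dense_file": "/tmp/export/deepfm/1/model.dense"}]}
rebased = rebase_paths(config, "/tmp/export", "/model")
print(json.dumps(rebased))
```

This mirrors why model_repository_path must match the location the entrypoint script copies artifacts to inside the serving container.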
gcs_ensemble_path = '{}/{}'.format(MODEL_ARTIFACTS_REPOSITORY, Path(local_ensemble_path).parts[-1])
!gsutil -m cp -r {local_ensemble_path}/* {gcs_ensemble_path}/
Explanation: Upload the ensemble to GCS
In the later steps, you will register the exported ensemble model as a Vertex AI Prediction Model resource. Before doing that, we need to move the ensemble to GCS.
End of explanation
! cat src/Dockerfile.triton
Explanation: 2. Building a custom serving container
The custom serving container is derived from the NVIDIA NGC Merlin inference container. It adds Google Cloud SDK and an entrypoint script that executes the tasks described in detail in the overview.
End of explanation
! cat src/serving/entrypoint.sh
Explanation: As described in detail in the overview, the entry point script copies the ensemble artifacts to the serving container's local file system and starts Triton.
End of explanation
FILE_LOCATION = './src'
! gcloud builds submit --config src/cloudbuild.yaml --substitutions _DOCKERNAME=$DOCKERNAME,_IMAGE_URI=$IMAGE_URI,_FILE_LOCATION=$FILE_LOCATION --timeout=2h --machine-type=e2-highcpu-8
Explanation: You use Cloud Build to build the serving container and push it to your project's Container Registry.
End of explanation
serving_container_args = [model_repository_path]
model = vertex_ai.Model.upload(
display_name=MODEL_DISPLAY_NAME,
description=MODEL_DESCRIPTION,
serving_container_image_uri=IMAGE_URI,
artifact_uri=gcs_ensemble_path,
serving_container_args=serving_container_args,
sync=True
)
model.resource_name
Explanation: 3. Uploading the model and its metadata to Vertex Models.
In the following cell you will register (upload) the ensemble model as a Vertex AI Prediction Model resource.
Refer to Use a custom container for prediction guide for detailed information about creating Vertex AI Prediction Model resources.
Notice that the value of model_repository_path that was used when exporting the ensemble is passed as a command line parameter to the serving container. The entrypoint script in the container will copy the ensemble artifacts to this location when the container starts. This ensures that the locations of the artifacts in the container's local file system and the paths in the ps.json and other configuration files used by Triton match.
End of explanation
endpoint = vertex_ai.Endpoint.create(
display_name=ENDPOINT_DISPLAY_NAME
)
Explanation: 4. Deploying the model to Vertex AI Prediction.
Deploying a Vertex AI Prediction Model is a two-step process. First, you create an endpoint that exposes an external interface to clients consuming the model. Once the endpoint is ready, you can deploy multiple versions of a model to it.
Refer to Deploy a model using the Vertex AI API guide for more information about the APIs used in the following cells.
Create the Vertex Endpoint
Before deploying the ensemble model you need to create a Vertex AI Prediction endpoint.
End of explanation
traffic_percentage = 100
machine_type = "n1-standard-8"
accelerator_type="NVIDIA_TESLA_T4"
accelerator_count = 1
min_replica_count = 1
max_replica_count = 2
model.deploy(
endpoint=endpoint,
deployed_model_display_name=MODEL_DISPLAY_NAME,
machine_type=machine_type,
min_replica_count=min_replica_count,
max_replica_count=max_replica_count,
traffic_percentage=traffic_percentage,
accelerator_type=accelerator_type,
accelerator_count=accelerator_count,
sync=True
)
Explanation: Deploy the model to Vertex Prediction endpoint
After the endpoint is ready, you can deploy your ensemble model to the endpoint. You will run the ensemble on a GPU node equipped with an NVIDIA Tesla T4 GPU.
Refer to Deploy a model using the Vertex AI API guide for more information.
End of explanation
payload = {
'id': '1',
'inputs': [
{'name': 'I1','shape': [3, 1], 'datatype': 'INT32', 'data': [5, 32, 0]},
{'name': 'I2', 'shape': [3, 1], 'datatype': 'INT32', 'data': [110, 3, 233]},
{'name': 'I3', 'shape': [3, 1], 'datatype': 'INT32', 'data': [0, 5, 1]},
{'name': 'I4', 'shape': [3, 1], 'datatype': 'INT32', 'data': [16, 0, 146]},
{'name': 'I5', 'shape': [3, 1], 'datatype': 'INT32', 'data': [0, 1, 1]},
{'name': 'I6', 'shape': [3, 1], 'datatype': 'INT32', 'data': [1, 0, 0]},
{'name': 'I7', 'shape': [3, 1], 'datatype': 'INT32', 'data': [0, 0, 0]},
{'name': 'I8', 'shape': [3, 1], 'datatype': 'INT32', 'data': [14, 61, 99]},
{'name': 'I9', 'shape': [3, 1], 'datatype': 'INT32', 'data': [7, 5, 7]},
{'name': 'I10', 'shape': [3, 1], 'datatype': 'INT32', 'data': [1, 0, 0]},
{'name': 'I11', 'shape': [3, 1], 'datatype': 'INT32', 'data': [0, 1, 1]},
{'name': 'I12', 'shape': [3, 1], 'datatype': 'INT32', 'data': [306, 3157, 3101]},
{'name': 'I13', 'shape': [3, 1], 'datatype': 'INT32', 'data': [0, 5, 1]},
{'name': 'C1', 'shape': [3, 1], 'datatype': 'INT32', 'data': [1651969401, -436994675, 1651969401]},
{'name': 'C2', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-501260968, -1599406170, -1382530557]},
{'name': 'C3', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1343601617, 1873417685, 1656669709]},
{'name': 'C4', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1805877297, -628476895, 946620910]},
{'name': 'C5', 'shape': [3, 1], 'datatype': 'INT32', 'data': [951068488, 1020698403, -413858227]},
{'name': 'C6', 'shape': [3, 1], 'datatype': 'INT32', 'data': [1875733963, 1875733963, 1875733963]},
{'name': 'C7', 'shape': [3, 1], 'datatype': 'INT32', 'data': [897624609, -1424560767, -1242174622]},
{'name': 'C8', 'shape': [3, 1], 'datatype': 'INT32', 'data': [679512323, 1128426537, -772617077]},
{'name': 'C9', 'shape': [3, 1], 'datatype': 'INT32', 'data': [1189011366, 502653268, 776897055]},
{'name': 'C10', 'shape': [3, 1], 'datatype': 'INT32', 'data': [771915201, 2112471209, 771915201]},
{'name': 'C11', 'shape': [3, 1], 'datatype': 'INT32', 'data': [209470001, 1716706404, 209470001]},
{'name': 'C12', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1785193185, -1712632281, 309420420]},
{'name': 'C13', 'shape': [3, 1], 'datatype': 'INT32', 'data': [12976055, 12976055, 12976055]},
{'name': 'C14', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1102125769, -1102125769, -1102125769]},
{'name': 'C15', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1978960692, -205783399, -150008565]},
{'name': 'C16', 'shape': [3, 1], 'datatype': 'INT32', 'data': [1289502458, 1289502458, 1289502458]},
{'name': 'C17', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-771205462, -771205462, -771205462]},
{'name': 'C18', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1206449222, -1578429167, 1653545869]},
{'name': 'C19', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1793932789, -1793932789, -1793932789]},
{'name': 'C20', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-1014091992, -20981661, -1014091992]},
{'name': 'C21', 'shape': [3, 1], 'datatype': 'INT32', 'data': [351689309, -1556988767, 351689309]},
{'name': 'C22', 'shape': [3, 1], 'datatype': 'INT32', 'data': [632402057, -924717482, 632402057]},
{'name': 'C23', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-675152885, 391309800, -675152885]},
{'name': 'C24', 'shape': [3, 1], 'datatype': 'INT32', 'data': [2091868316, 1966410890, 883538181]},
{'name': 'C25', 'shape': [3, 1], 'datatype': 'INT32', 'data': [809724924, -1726799382, -10139646]},
{'name': 'C26', 'shape': [3, 1], 'datatype': 'INT32', 'data': [-317696227, -1218975401, -317696227]}]
}
with open('criteo_payload.json', 'w') as f:
json.dump(payload, f)
Explanation: 5. Invoking the model
To invoke the ensemble through the Vertex AI Prediction endpoint, you need to format your request as a standard Inference Request JSON Object (or an Inference Request JSON Object with a binary extension) and submit it to the Vertex AI Prediction REST rawPredict endpoint. You must use the rawPredict rather than the predict endpoint because the inference request formats used by Triton are not compatible with the Vertex AI Prediction standard input format.
The cell below shows a sample request body formatted as a standard Inference Request JSON Object. The request encapsulates a batch of three records from the Criteo dataset.
End of explanation
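Hand-writing that inputs list is error-prone. A small helper can generate the same style of request body from column arrays — this helper is a sketch and is not part of the notebook's source:

```python
def make_inference_request(request_id, columns):
    # columns: {feature_name: list_of_int_values}; all columns must share one batch size.
    n = len(next(iter(columns.values())))
    inputs = []
    for name, values in columns.items():
        assert len(values) == n, "all columns must have the same batch size"
        inputs.append({"name": name, "shape": [n, 1], "datatype": "INT32", "data": list(values)})
    return {"id": request_id, "inputs": inputs}

# Two of the 39 Criteo columns, batch of three records.
payload = make_inference_request("1", {"I1": [5, 32, 0], "C1": [1651969401, -436994675, 1651969401]})
```

Feeding all 13 continuous and 26 categorical columns through the helper reproduces the full request shown above.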
uri = f'https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint.name}:rawPredict'
! curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
{uri} \
-d @criteo_payload.json
Explanation: You can invoke the Vertex AI Prediction rawPredict endpoint using any HTTP tool or library, including curl.
End of explanation |
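The same call can be made from Python instead of curl. The sketch below assumes the requests and google-auth packages are installed, so treat it as a template rather than tested notebook code:

```python
def rawpredict_uri(region, project, endpoint_id):
    # Mirrors the f-string used with curl above.
    return (f"https://{region}-aiplatform.googleapis.com/v1/projects/{project}"
            f"/locations/{region}/endpoints/{endpoint_id}:rawPredict")

def invoke(region, project, endpoint_id, payload):
    # Network-call sketch; requires `pip install requests google-auth`.
    import requests
    import google.auth
    import google.auth.transport.requests
    creds, _ = google.auth.default()
    creds.refresh(google.auth.transport.requests.Request())
    headers = {"Authorization": f"Bearer {creds.token}",
               "Content-Type": "application/json"}
    return requests.post(rawpredict_uri(region, project, endpoint_id),
                         json=payload, headers=headers).json()

uri = rawpredict_uri("us-central1", "my-project", "123")
```

The placeholder region, project, and endpoint id are examples only; substitute the values defined earlier in the notebook.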
8,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trips in time and space order
Sorts output trips in time and space order, which is useful for disaggregate (individual) dynamic traffic assignment and person time/space visualization. Trips in time and space order means the trip origin, destination, and depart period from one trip to the next makes sense.
Input and output filenames
Step1: Libraries
Step2: Read tables directly from the pipeline
Step3: Add related fields, including joint trip participant ids
Step4: Create additional trips records for other persons on joint trips
Step5: Pull out at-work trips and put back in at the right spot
Step6: Add fields to verify sorting
Step7: Write all trips | Python Code:
pipeline_filename = '../test_example_mtc/output/pipeline.h5'
output_trip_filename = "../test_example_mtc/output/final_trips_time_space_order.csv"
Explanation: Trips in time and space order
Sorts output trips in time and space order, which is useful for disaggregate (individual) dynamic traffic assignment and person time/space visualization. Trips in time and space order means the trip origin, destination, and depart period from one trip to the next makes sense.
Input and output filenames
End of explanation
import pandas as pd
import numpy as np
import itertools
Explanation: Libraries
End of explanation
# get tables (if run as mp then trip_mode_choice is final state of the tables)
pipeline = pd.io.pytables.HDFStore(pipeline_filename)
tours = pipeline['/tours/stop_frequency']
trips = pipeline['/trips/trip_mode_choice']
jtp = pipeline['/joint_tour_participants/joint_tour_participation']
Explanation: Read tables directly from the pipeline
End of explanation
trips["tour_participants"] = trips.tour_id.map(tours.number_of_participants)
trips["tour_category"] = trips.tour_id.map(tours.tour_category)
trips["parent_tour_id"] = trips.tour_id.map(tours.index.to_series()).map(tours.parent_tour_id)
trips["tour_start"] = trips.tour_id.map(tours.start)
trips["tour_end"] = trips.tour_id.map(tours.end)
trips["parent_tour_start"] = trips.parent_tour_id.map(tours.start)
trips["parent_tour_end"] = trips.parent_tour_id.map(tours.end)
trips["inbound"] = ~trips.outbound
Explanation: Add related fields, including joint trip participant ids
End of explanation
tour_person_ids = jtp.groupby("tour_id").apply(lambda x: pd.Series({"person_ids": " ".join(x["person_id"].astype("str"))}))
trips = trips.join(tour_person_ids, "tour_id")
trips["person_ids"] = trips["person_ids"].fillna("")
trips.person_ids = trips.person_ids.where(trips.person_ids!="", trips.person_id)
trips["person_ids"] = trips["person_ids"].astype(str)
person_ids = [*map(lambda x: x.split(" "),trips.person_ids.tolist())]
person_ids = list(itertools.chain.from_iterable(person_ids))
trips_expanded = trips.loc[np.repeat(trips.index, trips['tour_participants'])]
trips_expanded.person_id = person_ids
trips_expanded["trip_id"] = trips_expanded.index
trips_expanded["trip_id"] = trips_expanded["trip_id"].astype('complex128') # cast so small fractional offsets can be added below to make duplicated ids unique
while trips_expanded["trip_id"].duplicated().any():
trips_expanded["trip_id"] = trips_expanded["trip_id"].where(~trips_expanded["trip_id"].duplicated(), trips_expanded["trip_id"] + 0.1)
trips_expanded = trips_expanded.sort_values(['person_id','tour_start','tour_id','inbound','trip_num'])
Explanation: Create additional trips records for other persons on joint trips
End of explanation
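The np.repeat indexing used above to fan a joint tour out to every participant can be seen on a toy frame — an illustrative sketch with made-up values:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"tour_id": [100, 200], "tour_participants": [1, 3]})
# Repeating each index label by its participant count duplicates that row as many times.
expanded = toy.loc[np.repeat(toy.index, toy["tour_participants"])]
print(len(expanded))  # 4 rows: one for tour 100, three copies of tour 200
```

Each duplicated row then receives one of the joint tour's person ids, which is what the person_ids assignment above does.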
atwork_trips = trips_expanded[trips_expanded.tour_category == "atwork"]
trips_expanded_last_trips = trips_expanded[trips_expanded.trip_num == trips_expanded.trip_count]
parent_tour_trips_with_atwork_trips = trips_expanded_last_trips.merge(atwork_trips, left_on="tour_id", right_on="parent_tour_id")
parent_tour_trips_with_atwork_trips["atwork_depart_after"] = parent_tour_trips_with_atwork_trips.eval("depart_y >= depart_x")
parent_trip_id = parent_tour_trips_with_atwork_trips[parent_tour_trips_with_atwork_trips["atwork_depart_after"]]
parent_trip_id.index = parent_trip_id["trip_id_y"]
for person in parent_trip_id["person_id_x"].unique():
person_all_trips = trips_expanded[(trips_expanded["person_id"].astype("str") == person) & (trips_expanded.tour_category != "atwork")]
person_atwork_trips = parent_trip_id[parent_trip_id["person_id_x"].astype("str") == person]
parent_trip_index = person_all_trips.index.astype('complex128').get_loc(person_atwork_trips.trip_id_x[0])
before_trips = person_all_trips.iloc[0:(parent_trip_index+1)]
after_trips = person_all_trips.iloc[(parent_trip_index+1):]
person_actual_atwork_trips = atwork_trips[(atwork_trips["person_id"].astype("str") == person)]
new_person_trips = before_trips.append(person_actual_atwork_trips).append(after_trips)
trips_expanded = trips_expanded[~(trips_expanded["person_id"].astype("str") == person)] #remove and add back due to indexing
trips_expanded = trips_expanded.append(new_person_trips)
Explanation: Pull out at-work trips and put back in at the right spot
End of explanation
trips_expanded["next_person_id"] = trips_expanded["person_id"].shift(-1)
trips_expanded["next_origin"] = trips_expanded["origin"].shift(-1)
trips_expanded["next_depart"] = trips_expanded["depart"].shift(-1)
trips_expanded["spatial_consistent"] = trips_expanded["destination"] == trips_expanded["next_origin"]
trips_expanded["time_consistent"] = trips_expanded["next_depart"] >= trips_expanded["depart"]
trips_expanded.loc[trips_expanded["next_person_id"] != trips_expanded["person_id"], "spatial_consistent"] = True
trips_expanded.loc[trips_expanded["next_person_id"] != trips_expanded["person_id"], "time_consistent"] = True
print("{}\n\n{}".format(trips_expanded["spatial_consistent"].value_counts(), trips_expanded["time_consistent"].value_counts()))
Explanation: Add fields to verify sorting
End of explanation
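The shift(-1) pattern behind these checks can be exercised on a toy trip list — a sketch with made-up zone and time values:

```python
import pandas as pd

toy = pd.DataFrame({"person_id": [1, 1, 2],
                    "origin": [10, 20, 5],
                    "destination": [20, 30, 6],
                    "depart": [8, 9, 7]})
toy["next_origin"] = toy["origin"].shift(-1)
toy["next_person_id"] = toy["person_id"].shift(-1)
# A trip is spatially consistent if its destination equals the next trip's origin;
# comparisons across different persons are treated as vacuously consistent.
toy["spatial_consistent"] = ((toy["destination"] == toy["next_origin"])
                             | (toy["next_person_id"] != toy["person_id"]))
```

Here person 1's first trip ends where the second begins, and the cross-person row is exempted, so every row passes.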
trips_expanded.to_csv(output_trip_filename)
Explanation: Write all trips
End of explanation |
8,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3
Step1: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories
Step2: Implementation
Step3: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint
Step4: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint
Step5: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint
Step6: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
Step7: Implementation
Step8: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer
Step9: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint
Step10: Implementation
Step11: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
Step12: Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer
Step13: pseudo code help from forums to decipher cell 136
scores = list
for n_clusters in number_of_clusters_list
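A concrete version of this pseudo code, sketched with scikit-learn on synthetic data — the variable names are assumptions about the hidden cell, not its actual contents:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the reduced customer data.
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

number_of_clusters_list = [2, 3, 4, 5]
scores = []
for n_clusters in number_of_clusters_list:
    preds = KMeans(n_clusters=n_clusters, random_state=0, n_init=10).fit_predict(X)
    scores.append(silhouette_score(X, preds))  # mean silhouette over all points

best = number_of_clusters_list[scores.index(max(scores))]
```

The silhouette score lies in [-1, 1], and the cluster count with the highest mean score is the usual pick.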
Step14: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer
Step15: Implementation
Step16: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint
Step17: Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
Step18: **Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Display a description of the dataset
display(data.describe())
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [10,18,60]
#[10,2,60]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
import seaborn as sns
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
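One quick way to ground the establishment guesses asked for next is to compare each sample row against the column medians. The sketch below uses a toy frame so it runs standalone; the same one-liner applies to the samples and data frames above:

```python
import pandas as pd

# Toy stand-in for the wholesale data: positive deltas mean above-median spend.
toy = pd.DataFrame({"Fresh": [100, 900, 500], "Milk": [50, 60, 400]})
deltas = toy - toy.median()
```

Large positive values in a category suggest the customer buys unusually much of it, which is the evidence the question asks you to cite.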
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeRegressor
def predict_feature(feature):
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop([feature], axis=1)
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, data[feature], test_size=0.25, random_state=1)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=1).fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print("The score for {:16} is {:+.5f}".format(feature, score))
for feature in data.columns.values:
predict_feature(feature)
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
**Answer: Using the data from the statistical analysis: the Fresh category median (50%) is 8,504; Milk 3,627; Grocery 4,755; Frozen 1,526; Detergents_paper 816.5; Delicatessen 965.5.
Customer 0 has the highest level of Detergents_paper, Frozen, and Grocery with the lowest Fresh, so I would suspect this is a retailer.
Customer 1 has the highest level of Fresh and Milk, so I would suspect this is a market.
Customer 2 has the lowest level of Delicatessen and the lowest Frozen, so I would suspect this is a cafe. I created a basic matrix below as a guide, using the statistics table, showing what I feel would be a match for each type of establishment:
|Category|Cafe|Market|Retailer|
|---|:---:|:---:|:---:|
|Fresh|average|high|low|
|Milk|average|high|high|
|Grocery|low|high|high|
|Frozen|low|high|low|
|Detergents_paper|low|average|high|
|Delicatessen|low|high|low|
**
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
End of explanation
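The `predict_feature` loop referenced at the top of this section can be sketched end-to-end as below. This is a minimal sketch, not the notebook's exact cell: a small synthetic frame stands in for the wholesale data, and it uses the current `sklearn.model_selection` API in place of the deprecated `sklearn.cross_validation` named in the instructions.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the wholesale-customers frame: Detergents_Paper is
# deliberately built to correlate with Grocery, as observed in the real data.
rng = np.random.RandomState(0)
grocery = rng.uniform(1000, 20000, 200)
data = pd.DataFrame({
    'Grocery': grocery,
    'Detergents_Paper': 0.3 * grocery + rng.normal(0, 500, 200),
    'Fresh': rng.uniform(1000, 20000, 200),
})

def predict_feature(feature):
    # Drop the candidate feature and use it as the regression target
    new_data = data.drop([feature], axis=1)
    X_train, X_test, y_train, y_test = train_test_split(
        new_data, data[feature], test_size=0.25, random_state=42)
    regressor = DecisionTreeRegressor(random_state=42).fit(X_train, y_train)
    return regressor.score(X_test, y_test)

for feature in data.columns.values:
    print("The score for {:16} is {:+.5f}".format(feature, predict_feature(feature)))
```

A high score means the dropped feature is largely recoverable from the remaining ones, i.e. it carries redundant information.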
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer: Looking at the scores for all the features, the two highest are Detergents_Paper at an R² of 0.81524, followed by Grocery at 0.79577. Detergents_Paper has the highest coefficient of determination, meaning it can largely be predicted from the other features; it is therefore the most redundant and the best candidate to hold out when identifying customers' spending habits.
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = pd.DataFrame(log_data.loc[indices], columns = data.keys()).reset_index(drop = True)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
from scipy import stats
for col in data.columns:
    # For normally distributed data, the skewness should be about 0.
    # skewness > 0 means the distribution has a longer right tail.
print '"{}" skew: {}'.format(col, stats.skew(data[col], axis=0))
import seaborn as sns
import matplotlib.pyplot as plt
# create "melted" dataframe
df = pd.DataFrame(columns=['variable', 'value'])
for col in log_data.columns:
df = df.append(pd.melt(log_data, value_vars=[col]))
print df.shape
# create the boxplot with data points overlay
plt.figure(figsize=(8,6))
sns.boxplot(x="value", y="variable", data=df, color="c")
sns.stripplot(x="value", y="variable", data=df, jitter=True,
size=4, alpha=.4, color=".3", linewidth=0)
sns.despine(trim=True)
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
Answer: The data points generally show a positive (right) skew; the raw distributions look log-normal-like rather than normally distributed. The strongest linear correlation does appear to be between Detergents_Paper and Grocery, which aligns with the R² calculations above.
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying a logarithm scaling. Use the np.log function for this.
 - Assign a copy of the sample data to log_samples after applying a logarithm scaling. Again, use np.log.
End of explanation
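The Box-Cox alternative mentioned above can be compared with the simple log transform in a few lines. This is a hedged sketch with synthetic right-skewed data standing in for a real spending column (the variable names and parameters are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
spend = rng.lognormal(mean=8, sigma=1, size=500)  # skewed, like the raw features

log_scaled = np.log(spend)
boxcox_scaled, lam = stats.boxcox(spend)  # lam is the fitted power parameter

print("raw skew      {:+.3f}".format(stats.skew(spend)))
print("log skew      {:+.3f}".format(stats.skew(log_scaled)))
print("box-cox skew  {:+.3f}".format(stats.skew(boxcox_scaled)))
print("fitted lambda {:+.3f}".format(lam))
```

For log-normally distributed spending the fitted Box-Cox power lands near 0, which is exactly the log transform, so the simpler `np.log` used in the cell above is a reasonable choice here.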
# Display the log-transformed sample data
display(log_samples)
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = 1.5 * (Q3 - Q1)
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65,66,81,75,154,161,128]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
End of explanation
import sklearn.decomposition
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=len(good_data.columns)).fit(good_data)
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = rs.pca_results(good_data, pca)
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer: I chose to remove the outliers from the highly correlated features Grocery and Detergents_Paper, and also a couple of data points that appear as outliers in more than one feature, such as 65 and 128. My reasoning is that the more accurate the correlations, the better the performance, and removing outliers that may add noise across multiple feature sets should also improve training and prediction accuracy. It would be nice to have a line of code that could collate these outliers into one array to pass to outliers instead of visually searching for them. That would also help when analyzing outliers in other datasets, where removing them may not be the best option: if they are not confounders, they can contain valuable insights about their reason for being outliers that affect the interpretation of the data.
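The answer above wishes for code that collates multi-feature outliers automatically. One possible sketch, assuming the same Tukey fences as the cell above; a small synthetic `log_data` with a planted outlier stands in for the real frame:

```python
from collections import Counter
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
log_data = pd.DataFrame(rng.normal(8, 1, size=(100, 3)),
                        columns=['Fresh', 'Milk', 'Grocery'])
log_data.iloc[5] = [2.0, 2.0, 2.0]   # planted multi-feature outlier

# Count, per data point, how many features flag it under Tukey's method
counts = Counter()
for feature in log_data.keys():
    Q1, Q3 = np.percentile(log_data[feature], [25, 75])
    step = 1.5 * (Q3 - Q1)
    mask = ~log_data[feature].between(Q1 - step, Q3 + step)
    counts.update(log_data.index[mask.values])

# Indices flagged in more than one feature are the strongest removal candidates
multi = sorted(i for i, c in counts.items() if c > 1)
print(multi)
```

On the real `log_data` this would surface indices flagged in several features (the kind of points found by eye above) directly, instead of scanning the printed tables.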
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
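The explained-variance ratios described above can be read straight off a fitted PCA object. A hedged sketch on synthetic six-column data built from two latent factors (all names and numbers are invented, not the wholesale data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
base = rng.normal(size=(300, 2))
# Six columns driven by two latent factors plus small noise, so two
# principal components should dominate the explained variance.
X = np.column_stack([base[:, 0],
                     base[:, 0] + 0.1 * rng.normal(size=300),
                     base[:, 1],
                     base[:, 1] + 0.1 * rng.normal(size=300),
                     0.5 * base[:, 0] + 0.5 * base[:, 1],
                     0.1 * rng.normal(size=300)])

pca = PCA(n_components=X.shape[1]).fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
for k, c in enumerate(cumulative, start=1):
    print("first {} components explain {:.1%} of the variance".format(k, c))
```

With all components kept, the cumulative ratio reaches 1.0; the question is how early it gets close, which is how the 70% / 93% figures in the answer below are computed.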
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
**Answer: The dimensions calculated represent patterns of spending. The first and second principal components together explain (0.4341 + 0.2712) = 0.7053, about 70% of the variance in the data. The first four principal components explain (0.4341 + 0.2712 + 0.1237 + 0.1011) = 0.9301, about 93% of the variance.
The first dimension best represents an establishment whose main spending is on Detergents_Paper, which also purchases Grocery, Milk, and a smaller amount of Delicatessen. It would not be worth marketing Frozen or Fresh goods to this establishment, as noted by the negative weights in those categories. "The first principal component is made up of large positive weights in Detergents_Paper, and lesser but still sizeable positive weights on Grocery and Milk. It also correlates with a decrease in Fresh and Frozen. This pattern might represent spending in household staples products that are purchased together."
The first component shows that we have a lot of variance in customers who purchase Milk, Grocery, and Detergents_Paper: some purchase a lot of these three categories while others purchase very little.
The second dimension represents an establishment that purchases all the goods sold; Fresh is the strongest marketing target, followed by Delicatessen and Frozen goods. Less marketing should be directed towards Detergents_Paper and Grocery for this establishment because they are purchased to a lesser degree.
The third dimension represents an establishment that spends heavily on Fresh and, to a lesser degree, Detergents_Paper. Less marketing should be spent targeting Delicatessen and Frozen to this establishment, for the reasoning discussed above.
The fourth dimension represents an establishment with high Frozen expenditure; less money should be spent marketing Delicatessen and Fresh to it, because those have negative weights in this dimension. Its next highest purchases are Detergents_Paper.
The data can be viewed from two business angles: by identifying establishments that fit the feature-dimension representations, corporate can target marketing dollars at companion sales based upon the main variance feature, and use the data to determine where marketing dollars may be less effective based upon negative variance in that dimensional category of establishment.**
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
 - Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
# Produce a scatter matrix for pca reduced data
pd.scatter_matrix(reduced_data, alpha = 0.8, figsize = (9,5), diagonal = 'kde');
import seaborn as sns
g = sns.JointGrid("Dimension 1", "Dimension 2", reduced_data, xlim=(-6,6), ylim=(-5,5))
g = g.plot_joint(sns.kdeplot, cmap="Blues", shade=True)
g = g.plot_marginals(sns.kdeplot, shade=True)
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
End of explanation
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GMM
import numpy as np
import pandas as pd
from numpy import meshgrid
#quick code to test for clusters implementation
import time
n_clusters = 6
start = time.time()
clusters = KMeans(n_clusters=n_clusters).fit(reduced_data)
end = time.time()
print "You have clusters!\nTraining time (secs): {:.3f}".format(end - start)
print clusters
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GMM
import numpy as np
import pandas as pd
from numpy import meshgrid
Explanation: Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer: Information obtained from the sklearn docs (http://scikit-learn.org/stable/modules/clustering.html#k-means). A K-Means clustering algorithm clusters data by separating samples into n groups of equal variance, seeking to minimize inertia, a measure of how internally coherent clusters are (zero is optimal). The algorithm scales well. Its weaknesses are that it performs poorly on clusters that are elongated or irregularly shaped, and Euclidean distances can become inflated in high-dimensional space; running PCA first helps overcome this weakness because inertia is not normalized within the algorithm.
Gaussian Mixture Models (GMMs) are not as scalable, require many parameters, and are good for density estimation. They can draw confidence ellipsoids for multivariate models. GMMs do not bias the clusters toward expected parameters; they remain true to the data, and GMM is one of the fastest algorithms for learning mixture models. The disadvantages are that the algorithm can diverge and find solutions with infinite likelihood, and that it will use all components it has access to without external clues. Given the observations of the data thus far, and the fact that PCA has already been run, both may perform well. However, because the PCA results suggest K-Means will not have difficulty with arbitrary cluster boundaries, which is the major deciding factor between the two for this dataset, I will choose K-Means clustering: the data is normalized, the clusters are not elongated, and the algorithm is known to perform well and scale.
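A hedged side-by-side of the two candidates discussed above, on synthetic well-separated 2-D data rather than the notebook's reduced_data. Note the notebook imports the old sklearn.mixture.GMM; current sklearn releases expose GaussianMixture instead, which is what this sketch assumes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Two compact, well-separated blobs: the regime where both methods agree
rng = np.random.RandomState(10)
X = np.vstack([rng.normal(loc=[-3, 0], scale=0.7, size=(150, 2)),
               rng.normal(loc=[3, 0], scale=0.7, size=(150, 2))])

km_preds = KMeans(n_clusters=2, random_state=10, n_init=10).fit_predict(X)
gm_preds = GaussianMixture(n_components=2, random_state=10).fit(X).predict(X)

print("K-Means silhouette {:.3f}".format(silhouette_score(X, km_preds)))
print("GMM silhouette     {:.3f}".format(silhouette_score(X, gm_preds)))
```

On data like this the two scores are essentially identical; the methods only diverge materially on elongated or overlapping clusters, which is the trade-off weighed in the answer above.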
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
End of explanation
# TODO: Apply your clustering algorithm of choice to the reduced data
# TODO: Predict the cluster for each data point
# TODO: Predict the cluster for each transformed sample data point
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
#preds = clusters.predict(pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
scores = [2,3,4,5,6]
for n_clusters in scores:
clusterer = KMeans(n_clusters=n_clusters,init='k-means++',random_state=10 ).fit(reduced_data)
preds = clusterer.predict(reduced_data)
silScore = silhouette_score(reduced_data, preds)
print ("cluster number",n_clusters, "the score is",silScore)
# TODO: Find the cluster centers
centroids = clusterer.cluster_centers_  # centers from the last (6-cluster) fit in the loop above
print centroids
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centroids, pca_samples)
# TODO: Apply your clustering algorithm of choice to the reduced data
# TODO: Predict the cluster for each data point
# TODO: Predict the cluster for each transformed sample data point
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
scores = [2]
for n_clusters in scores:
clusterer = KMeans(n_clusters=n_clusters,init='k-means++',random_state=10 ).fit(reduced_data)
preds = clusterer.predict(reduced_data)
silScore = silhouette_score(reduced_data, preds)
print ("cluster number",n_clusters, "the score is",silScore)
# TODO: Find the cluster centers
centroids = clusterer.cluster_centers_  # centers from the 2-cluster fit above
print centroids
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
print sample_preds
Explanation: pseudo code help from forums to decifer cell 136
scores = []
for n_clusters in number_of_clusters_list:
create the clusterer with n_clusters
fit the clusterer to the reduced data
preds = clusterer.predict(reduced_data)
scores.append(silhouette_score(reduced_data, preds))
End of explanation
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centroids, pca_samples)
# added after review to implement the sample preds
scores = [2]
for n_clusters in scores:
clusterer = KMeans(n_clusters=n_clusters,init='k-means++',random_state=10 ).fit(reduced_data)
preds = clusterer.predict(reduced_data)
silScore = silhouette_score(reduced_data, preds)
print ("cluster number",n_clusters, "the score is",silScore)
# define centers of the 2-cluster analysis
centers = clusterer.cluster_centers_
# TODO: Predict the cluster for each of the 3 transformed sample data points
sample_preds = clusterer.predict(pca_samples)
print sample_preds
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer: The best silhouette score comes from using 2 clusters. The scores are:
- 2 clusters: 0.42604
- 3 clusters: 0.39431
- 4 clusters: 0.33179
- 5 clusters: 0.35413
- 6 clusters: 0.36650
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
End of explanation
display(data.describe())
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
**Answer: Updated after review and implementation of the 2-cluster true_centers. Based upon the earlier analysis, it is most likely that the two clusters represented in the visual of cell 64 correspond to Grocery and Detergents_Paper, as we determined earlier using PCA and Box-Cox feature scaling. Interestingly, the highest resulting measurements are, in order, Grocery, Fresh, and Milk (12,025; 8,880; 7,882). Since the centers represent the average customer for each segment, running the inverse log transform recovers the spending in the categories represented inside those averages. If I had run just the two-cluster analysis alone, I would have come to a much different conclusion: viewing only the two clusters, it is not readily apparent where the clusters truly lie. I find viewing and interpreting the 6-cluster result much easier.

Further explanation using 6 clusters: segment 5 has the highest purchase of Fresh compared with the statistical analysis displayed again below, falling between the 75th percentile and the maximum of Fresh spending. Segment 3 has the highest Milk, just above the 75th percentile; segment 3 is also highest in Grocery, just over the 75th percentile of that category's spending. In Frozen, segment 5 has the highest spending, over the 75th percentile. For Detergents_Paper, the highest spending is in segment 2, between the 75th percentile and the maximum; last, for Delicatessen, segment 5 is just over the 75th percentile of purchases. The metric that is missing is spending by enterprise type, which would be needed to scientifically and accurately correlate the segments with enterprise types; any other means of identifying the enterprise is guessing at the correlation.

The best that can be reported is the segment spending data together with the analysis from the other cells, allowing marketing to make projections on targets based upon enterprise names/types, since we are not given those variables. The claim that a customer assigned to cluster X should best identify with establishments represented by the features of segment X is challenging here because the clusters are poorly separated, as reflected in the low silhouette scores (2 clusters: 0.42604; 3: 0.39431; 4: 0.33179; 5: 0.35413; 6: 0.36650) and confirmed by the visual in cell 152. Cluster 1 aligns with segments 2 and 3, and cluster 0 aligns with segments 0, 1, 4, and 5. An example application of this information: marketing can consider that enterprises with high Grocery (segment 2's highest sales) may also benefit from additional marketing of Milk, which is the second highest in segment 3, and so on. Making the analysis in this manner does not depend upon the missing enterprise variable (market, cafe, or retailer).

Addition after review: the centers, here named centroids (the 'X' marks on the visualizations), appear to fall into 2 clusters even in the 6-cluster visualization, and 2 of the 3 sit in one cluster in the 2-cluster visualization, which is consistent with the findings enumerated above: the PCA and Box-Cox analyses were correct in identifying 2 major clusters.**
End of explanation
# Display the predictions added after review looking at the two clusters only
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
for i, segments in enumerate(sample_preds):
print "Sample point", i, "predicted to be in segment", segments
Explanation: Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
End of explanation
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, outliers, pca_samples)
Explanation: **Answer: If my code is correct, then the segments are identical to the cluster numbers by sample number. I suspect there might be an error in my code, because I find this exact match hard to believe in spite of the calculation results.

Additional interpretation after review and implementing the 2-cluster sample-point analysis: for samples 1, 2, and 3, the values for Grocery, Milk, and Detergents_Paper are above average and mimic the Segment 1 center in those categories, so the predicted cluster seems consistent with the samples. In cell 71 we see that all three sample points are predicted to be in cluster 1. I find this a good example of how erroneous final interpretations can be delivered if the data is not analyzed to the level where multiple analyses confirm each other, starting at a high level (6 clusters) and narrowing down, instead of starting in the opposite direction. I can clearly see that the points lie within the predicted clusters and that the data presented is correct, but I feel I need more to develop final conclusions on the dataset in its entirety. Personally, I could not start with 2 clusters and stop there. Great exercise. (I left the previously run code from the 6-cluster analysis for the visual; if re-run, it will revert to the 2-cluster results due to code changes after review.)**
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer: You could use the customer segmentation as the basis of the A/B marketing structure. You could segment the enterprises being marketed to into the 2 clustered segments that had the highest degree of clustering, 'Detergents_Paper' and 'Grocery'. Both categories would receive 5-days-a-week and 3-days-a-week delivery equally, followed by running A/B tests to determine whether there was a statistical difference in preference for the frequency of delivery, and whether that statistical difference can be stratified by cluster. Equally, the same analysis can be run using all 6 clusters, depending on the manpower and degree of effort to be assigned to the task.
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
**Answer: I would consider the target variable to be customer spending. Create a model trained on the current data, and then label and run the new customers' data as the "test" dataset. There was a very interesting discussion on this on Stack Overflow, in a question based upon using R; here is the validated answer:
http://stackoverflow.com/questions/21064315/how-do-i-predict-new-datas-cluster-after-clustering-training-data
"Clustering is not supposed to "classify" new data, as the name suggests - it is the core concept of classification.
Some of the clustering algorithms (like those centroid based - kmeans, kmedians etc.) can "label" new instance based on the model created. Unfortunately hierarchical clustering is not one of them - it does not partition the input space, it just "connects" some of the objects given during clustering, so you cannot assign the new point to this model.
The only "solution" to use the hclust in order to "classify" is to create another classifier on top of the labeled data given by hclust. For example you can now train knn (even with k=1) on the data with labels from hclust and use it to assign labels to new points."
Forum's answer: "if we use the information taken from the clustering as a new feature, we can then use this feature as, say, the target variable (to say that one cluster has a certain delivery schedule or label, or whatnot) to use a supervised learner to successfully predict that target variable for new customers." I think all of the above echoes what I stated in the opening sentence: use the current trained unsupervised model (the customer spending clustering) as the target variable, and create a supervised model to predict the delivery schedule of a new customer based upon their spending, or input the delivery schedule and predict the spending.**
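The quoted advice can be sketched without any particular library: fit one centroid per existing cluster, then assign each new customer to the nearest centroid. All of the spending numbers below are made up purely for illustration:

```python
import numpy as np

# Toy spending vectors for "old" customers, with cluster labels taken from
# the earlier (unsupervised) analysis. Numbers are invented for illustration.
old_spending = np.array([[10., 200.], [12., 180.], [300., 20.], [280., 25.]])
old_labels = np.array([0, 0, 1, 1])

# Fit one centroid per cluster, then label new customers by nearest centroid.
centroids = np.array([old_spending[old_labels == k].mean(axis=0) for k in (0, 1)])

new_spending = np.array([[11., 190.], [290., 30.]])
dists = np.linalg.norm(new_spending[:, None, :] - centroids[None, :, :], axis=2)
new_labels = dists.argmin(axis=1)
print(new_labels)  # [0 1]
```

This is exactly the "train a simple supervised learner on the cluster labels" idea from the quoted answer, with a nearest-centroid rule standing in for knn.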
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation |
8,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Apparent horizons
We're now going to use finite differences to find a black hole apparent horizon.
The spacetime we're going to look at is simplified
Step3: We now need to solve the boundary value problem. We'll do this using shooting.
Shooting
Initial Value Problems
If we knew the initial radius of the horizon, $h(\theta = 0) = H_1(\theta = 0) = h_0$, we could solve the initial value problem
$$
\frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad {\bf H}(\theta = 0) = \begin{pmatrix} h_0 \ 0 \end{pmatrix}.
$$
For example, the simple Schwarzschild black hole will have $h_0 = 1/2$, in this slicing.
To solve the initial value problem we can re-use our finite differencing algorithms. For example, we evaluate the initial value problem equation at $\theta_i$ using forward differencing, to get
\begin{align}
\left. \frac{d}{d \theta} {\bf H} \right|_{\theta = \theta_i} & \approx \frac{1}{\Delta \theta} \left( {\bf H}^{(i+1)} - {\bf H}^{(i)} \right) \
& = {\bf F}({\bf H}^{(i)}, \theta_i),
\end{align}
where we have denoted ${\bf H}(\theta_i) \equiv {\bf H}^{(i)}$. We then re-arrange this to get Euler's method
$$
{\bf H}^{(i+1)} = {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i).
$$
We can use this to solve for the Schwarzschild case
Step5: We see that this has worked nicely. However, Euler's method is very inaccurate on more complex problems, as it's only first order convergent. We would like to use a higher order method.
Runge-Kutta methods
When looking at central differencing earlier we used information from both sides of the point where we took the derivative. This gives higher accuracy, but isn't helpful in the initial value case, where we don't have half the information.
Instead, we use many Euler steps combined. Each one gives an approximation to "future" data, which can be used to approximate the derivative at more locations.
For example, the Euler step above starts from ${\bf H}^{(i)}$ and computes ${\bf F}^{(i)}$ to approximate ${\bf H}^{(i+1)}$. We can use this approximation to give us ${\bf F}^{(i+1)}$.
Now, a more accurate solution would be
$$
{\bf H}^{(i+1)} = {\bf H}^{(i)} + \int_{\theta_i}^{\theta_{i+1}} \text{d} \theta \, {\bf F}({\bf H}, \theta).
$$
In Euler's method we are effectively representing the value of the integral by the value of the integrand at the start, multiplied by the width $\Delta \theta$. We could now approximate it by the average value of the integrand, $({\bf F}^{(i)} + {\bf F}^{(i+1)})/2$, multiplied by the width $\Delta \theta$. This gives the algorithm
\begin{align}
{\bf H}^{(p)} &= {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i), \
{\bf H}^{(i+1)} &= {\bf H}^{(i)} + \frac{\Delta \theta}{2} \left( {\bf F}({\bf H}^{(i)}, \theta_i) + {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right) \
&= \frac{1}{2} \left( {\bf H}^{(i)} + {\bf H}^{(p)} + \Delta \theta \, {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right).
\end{align}
The final re-arrangement ensures we do not have to store or re-compute ${\bf F}^{(i)}$. This is one of the Runge-Kutta methods. This version is second order accurate, and a big improvement over Euler's method.
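To see the claimed second-order accuracy in isolation, the same two-stage scheme can be applied to a scalar test equation with a known solution (a standalone sketch; the function names are illustrative and independent of the horizon problem):

```python
import math

def rk2_step(f, y, t, dt):
    # One step of the two-stage scheme derived above (Heun's method).
    y_p = y + dt * f(y, t)                        # Euler predictor
    return 0.5 * (y + y_p + dt * f(y_p, t + dt))  # corrector

def integrate(f, y0, t_end, n):
    y, t = y0, 0.0
    dt = t_end / n
    for _ in range(n):
        y = rk2_step(f, y, t, dt)
        t += dt
    return y

f = lambda y, t: -y   # test equation dy/dt = -y, exact solution exp(-t)
err_coarse = abs(integrate(f, 1.0, 1.0, 50) - math.exp(-1.0))
err_fine = abs(integrate(f, 1.0, 1.0, 100) - math.exp(-1.0))
print(err_coarse / err_fine)  # close to 4: halving the step quarters the error
```

The error ratio near 4 under step halving is the signature of a second-order method, in contrast to Euler's method, where the ratio would be near 2.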
Step6: Root finding
We still can't find a horizon unless we know the initial radius. However, there is a way around this. Let's see what happens if we compute in the Schwarzschild case, using the wrong initial data
Step7: We see that the the surfaces that start with the radius too small are curving back in; their derivative is negative. The surfaces with radius too large are diverging; their derivative is positive. We know that the true solution has vanishing derivative.
Let's explicitly plot the derivative at the endpoint.
Step9: We see that the derivative vanishes precisely where the horizon should be, exactly as expected.
This also gives us a way of solving for the apparent horizon. We want to solve the equation
$$
R(h_0) = 0.
$$
The function $R$ is given by $R(h_0) = H_2(\pi/2 ; h_0)$. In other words, we
compute the solution ${\bf H}$ given the initial guess $h_0$ for the unknown initial radius $H_1(0)$;
from the solution for ${\bf H}$ at $\theta = \pi/2$, set $R(h_0) = H_2$.
We can code this residual function as
Step11: Finally, we need to find the root of this equation.
Secant method
Problems where we are given an algebraic, nonlinear function ${\bf R}$ and asked to find ${\bf x}$ such that ${\bf R}({\bf x}) = {\bf 0}$ are nonlinear root-finding problems. Many standard solution methods are based on Newton's algorithm
Step12: We apply this to the Schwarzschild case
Step13: And from this we can compute the correct horizon.
What happens if we get the guess wildly wrong? In this simple case it will nearly always converge to the "right" answer, but in general a poor initial guess means the algorithm - or most root-finding algorithms! - won't converge.
Put it together
We can now compute the more interesting binary black hole case, where the singularities are at $z = \pm 0.75$. Using the symmetry, we need
Step14: We can now check what sorts of initial radius $h_0$ will be needed for the horizon
Step15: We see the algorithms are having problems for small radii, but that it suggests that the correct answer is roughly $h_0 \in [1.26, 1.3]$. So we use root-finding
Step16: And finally, we compute and plot the horizon surface. | Python Code:
import numpy
from matplotlib import pyplot
%matplotlib notebook
def horizon_RHS(H, theta, z_singularities):
The RHS function for the apparent horizon problem.
Parameters
----------
H : array
vector [h, dh/dtheta]
theta : double
angle
z_singularities : array
Location of the singularities on the z axis; non-negative
Returns
-------
dHdtheta : array
RHS
assert(numpy.all(numpy.array(z_singularities) >= 0.0)), "Location of singularities cannot be negative"
h = H[0]
dh = H[1]
psi = 1.0
dpsi_dr = 0.0
dpsi_dtheta = 0.0
for z in z_singularities:
distance = numpy.sqrt((h*numpy.sin(theta))**2 + (h*numpy.cos(theta) - z)**2)
psi += 0.5/distance
dpsi_dr -= 0.5*(h-z*numpy.cos(theta))/distance**3
dpsi_dtheta -= 0.5*h*z*numpy.sin(theta)/distance**3
# Apply reflection symmetry
if z > 0.0:
distance = numpy.sqrt((h*numpy.sin(theta))**2 + (h*numpy.cos(theta) + z)**2)
psi += 0.5/distance
dpsi_dr -= 0.5*(h+z*numpy.cos(theta))/distance**3
dpsi_dtheta += 0.5*h*z*numpy.sin(theta)/distance**3
C2 = 1.0 / (1.0 + (dh / h)**2)
# Impose that the term involving cot(theta) vanishes on axis.
if (abs(theta) < 1e-16) or (abs(theta - numpy.pi) < 1e-16):
cot_theta_dh_C2 = 0.0
else:
cot_theta_dh_C2 = dh / (numpy.tan(theta) * C2)
dHdtheta = numpy.zeros_like(H)
dHdtheta[0] = dh
dHdtheta[1] = 2.0*h - cot_theta_dh_C2 + 4.0*h**2/(psi*C2)*(dpsi_dr - dpsi_dtheta*dh/h**2) + 3.0*dh**2/h
return dHdtheta
Explanation: Apparent horizons
We're now going to use finite differences to find a black hole apparent horizon.
The spacetime we're going to look at is simplified:
$3+1$ split (we're looking at one slice, so one instant in "time");
axisymmetric (so we can consider only two dimensions in space, using $r, \theta$);
"bitant" or "reflection" symmetric (so we only consider $\theta \in [0, \pi/2]$);
all singularities have bare mass $1$;
time-symmetric (the extrinsic curvature vanishes).
We then compute the expansion of outgoing null geodesics, and look for where this vanishes. The surface with radius $h(\theta)$ where this occurs is the apparent horizon. With our assumptions, $h$ obeys the boundary value problem
$$
\frac{d^2 h}{d \theta^2} = 2 h + \frac{3}{h} \left( \frac{d h}{d \theta} \right)^2 + f \left( \theta, h, \frac{d h}{d \theta} \right), \qquad \frac{d h}{d \theta} ( \theta = 0 ) = 0 = \frac{d h}{d \theta} ( \theta = \pi/2 ).
$$
The function $f$ encodes the spacetime effects due to the singularities.
To solve this problem we convert to first order form. Introduce the vector
$$
{\bf H} = \begin{pmatrix} h \ \frac{d h}{d \theta} \end{pmatrix}.
$$
Then we have the problem
$$
\frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad H_2(\theta = 0) = 0 = H_2(\theta = \pi/2).
$$
We'll give the entire right-hand-side as code:
End of explanation
def euler_step(Hi, theta_i, dtheta, z_singularity):
Euler's method - one step
return Hi + dtheta*horizon_RHS(Hi, theta_i, z_singularity)
Ntheta = 100
z_singularity = [0.0]
theta = numpy.linspace(0.0, numpy.pi/2.0, Ntheta)
dtheta = theta[1] - theta[0]
H = numpy.zeros((2, Ntheta))
H[:, 0] = [0.5, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = euler_step(H[:, i], theta[i], dtheta, z_singularity)
pyplot.figure()
pyplot.polar(theta, H[0,:])
pyplot.show()
Explanation: We now need to solve the boundary value problem. We'll do this using shooting.
Shooting
Initial Value Problems
If we knew the initial radius of the horizon, $h(\theta = 0) = H_1(\theta = 0) = h_0$, we could solve the initial value problem
$$
\frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad {\bf H}(\theta = 0) = \begin{pmatrix} h_0 \ 0 \end{pmatrix}.
$$
For example, the simple Schwarzschild black hole will have $h_0 = 1/2$, in this slicing.
To solve the initial value problem we can re-use our finite differencing algorithms. For example, we evaluate the initial value problem equation at $\theta_i$ using forward differencing, to get
\begin{align}
\left. \frac{d}{d \theta} {\bf H} \right|_{\theta = \theta_i} & \approx \frac{1}{\Delta \theta} \left( {\bf H}^{(i+1)} - {\bf H}^{(i)} \right) \
& = {\bf F}({\bf H}^{(i)}, \theta_i),
\end{align}
where we have denoted ${\bf H}(\theta_i) \equiv {\bf H}^{(i)}$. We then re-arrange this to get Euler's method
$$
{\bf H}^{(i+1)} = {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i).
$$
We can use this to solve for the Schwarzschild case:
End of explanation
def rk2_step(Hi, theta_i, dtheta, z_singularity):
RK2 method - one step
Hp = Hi + dtheta * horizon_RHS(Hi, theta_i, z_singularity)
return 0.5*(Hi + Hp + dtheta*horizon_RHS(Hp, theta_i+dtheta, z_singularity))
H = numpy.zeros((2, Ntheta))
H[:, 0] = [0.5, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
pyplot.figure()
pyplot.polar(theta, H[0,:])
pyplot.show()
Explanation: We see that this has worked nicely. However, Euler's method is very inaccurate on more complex problems, as it's only first order convergent. We would like to use a higher order method.
Runge-Kutta methods
When looking at central differencing earlier we used information from both sides of the point where we took the derivative. This gives higher accuracy, but isn't helpful in the initial value case, where we don't have half the information.
Instead, we use many Euler steps combined. Each one gives an approximation to "future" data, which can be used to approximate the derivative at more locations.
For example, the Euler step above starts from ${\bf H}^{(i)}$ and computes ${\bf F}^{(i)}$ to approximate ${\bf H}^{(i+1)}$. We can use this approximation to give us ${\bf F}^{(i+1)}$.
Now, a more accurate solution would be
$$
{\bf H}^{(i+1)} = {\bf H}^{(i)} + \int_{\theta_i}^{\theta_{i+1}} \text{d} \theta \, {\bf F}({\bf H}, \theta).
$$
In Euler's method we are effectively representing the value of the integral by the value of the integrand at the start, multiplied by the width $\Delta \theta$. We could now approximate it by the average value of the integrand, $({\bf F}^{(i)} + {\bf F}^{(i+1)})/2$, multiplied by the width $\Delta \theta$. This gives the algorithm
\begin{align}
{\bf H}^{(p)} &= {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i), \
{\bf H}^{(i+1)} &= {\bf H}^{(i)} + \frac{\Delta \theta}{2} \left( {\bf F}({\bf H}^{(i)}, \theta_i) + {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right) \
&= \frac{1}{2} \left( {\bf H}^{(i)} + {\bf H}^{(p)} + \Delta \theta \, {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right).
\end{align}
The final re-arrangement ensures we do not have to store or re-compute ${\bf F}^{(i)}$. This is one of the Runge-Kutta methods. This version is second order accurate, and a big improvement over Euler's method.
End of explanation
initial_guesses = numpy.linspace(0.4, 0.6, 10)
solutions = []
z_singularity = [0.0]
for h0 in initial_guesses:
H = numpy.zeros((2,Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
solutions.append(H[0,:])
pyplot.figure()
for r in solutions:
pyplot.polar(theta, r)
pyplot.show()
Explanation: Root finding
We still can't find a horizon unless we know the initial radius. However, there is a way around this. Let's see what happens if we compute in the Schwarzschild case, using the wrong initial data:
End of explanation
initial_guesses = numpy.linspace(0.4, 0.6, 100)
dhdtheta_end = numpy.zeros_like(initial_guesses)
z_singularity = [0.0]
for guess, h0 in enumerate(initial_guesses):
H = numpy.zeros((2,Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
dhdtheta_end[guess] = H[1, -1]
pyplot.figure()
pyplot.plot(initial_guesses, dhdtheta_end)
pyplot.xlabel(r"$h_0$")
pyplot.ylabel(r"$dh/d\theta(\pi/2)$")
pyplot.show()
Explanation: We see that the surfaces that start with the radius too small are curving back in; their derivative is negative. The surfaces with radius too large are diverging; their derivative is positive. We know that the true solution has vanishing derivative.
Let's explicitly plot the derivative at the endpoint.
End of explanation
def residual(h0, z_singularities):
The residual function for the shooting method.
H = numpy.zeros((2, Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
return H[1, -1]
Explanation: We see that the derivative vanishes precisely where the horizon should be, exactly as expected.
This also gives us a way of solving for the apparent horizon. We want to solve the equation
$$
R(h_0) = 0.
$$
The function $R$ is given by $R(h_0) = H_2(\pi/2 ; h_0)$. In other words, we
compute the solution ${\bf H}$ given the initial guess $h_0$ for the unknown initial radius $H_1(0)$;
from the solution for ${\bf H}$ at $\theta = \pi/2$, set $R(h_0) = H_2$.
We can code this residual function as
End of explanation
def secant(R, x0, x1, args, tolerance = 1e-10):
Secant method
x = x1
x_p = x0
residual = abs(R(x, args))
while residual > tolerance:
x_new = x - (R(x, args) * (x - x_p)) / (R(x, args) - R(x_p, args))
x_p = x
x = x_new
residual = abs(R(x, args))
return x
Explanation: Finally, we need to find the root of this equation.
Secant method
Problems where we are given an algebraic, nonlinear function ${\bf R}$ and asked to find ${\bf x}$ such that ${\bf R}({\bf x}) = {\bf 0}$ are nonlinear root-finding problems. Many standard solution methods are based on Newton's algorithm:
Guess the root to be ${\bf x}^{(0)}$, and set $n=0$;
Compute the tangent planes to ${\bf R}$ at ${\bf x}^{(n)}$;
Find where these planes intersect zero, and set this to be ${\bf x}^{(n+1)}$;
If not converged to root, go to 2.
Computing the derivative for the tangent in step 2 is slow; instead we use finite differencing again.
In one dimension, Newton's method is
$$
x^{(n+1)} = x^{(n)} - \frac{R(x^{(n)})}{R'(x^{(n)})}.
$$
Replacing the derivative with a finite difference gives
$$
x^{(n+1)} = x^{(n)} - \frac{R(x^{(n)}) \left( x^{(n)} - x^{(n-1)} \right)}{R(x^{(n)}) - R(x^{(n-1)})}.
$$
This is the secant method. It's much easier to implement, but requires two initial guesses.
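As a standalone sketch of the iteration (independent of the horizon residual above; the helper name is made up), applied to a problem with a known root:

```python
def secant_root(f, x0, x1, tol=1e-12, max_iter=50):
    # Generic secant iteration: each update replaces the derivative in
    # Newton's method with a finite difference of the two latest iterates.
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant_root(lambda x: x**2 - 2.0, 1.0, 2.0)
print(root)  # converges to sqrt(2)
```

The only difference in the horizon problem is that each residual evaluation is itself a full initial value solve, so the secant method's small number of function evaluations matters.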
End of explanation
h0 = secant(residual, 0.4, 0.6, z_singularity)
print("Computed initial radius is {}".format(h0))
Explanation: We apply this to the Schwarzschild case:
End of explanation
z_singularity = [0.75]
Explanation: And from this we can compute the correct horizon.
What happens if we get the guess wildly wrong? In this simple case it will nearly always converge to the "right" answer, but in general a poor initial guess means the algorithm - or most root-finding algorithms! - won't converge.
Put it together
We can now compute the more interesting binary black hole case, where the singularities are at $z = \pm 0.75$. Using the symmetry, we need:
End of explanation
initial_guesses = numpy.linspace(1.2, 1.4, 100)
dhdtheta_end = numpy.zeros_like(initial_guesses)
z_singularity = [0.75]
interval = [0.0, numpy.pi/2.0]
for guess, h0 in enumerate(initial_guesses):
H = numpy.zeros((2,Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
dhdtheta_end[guess] = H[1, -1]
pyplot.figure()
pyplot.plot(initial_guesses, dhdtheta_end)
pyplot.xlabel(r"$h_0$")
pyplot.ylabel(r"$dh/d\theta(\pi/2)$")
pyplot.show()
Explanation: We can now check what sorts of initial radius $h_0$ will be needed for the horizon:
End of explanation
h0 = secant(residual, 1.26, 1.3, z_singularity)
Explanation: We see the algorithms are having problems for small radii, but that it suggests that the correct answer is roughly $h_0 \in [1.26, 1.3]$. So we use root-finding:
End of explanation
z_singularity = [0.75]
H0 = [h0, 0.0]
H = numpy.zeros((2,Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
pyplot.figure()
pyplot.polar(theta, H[0,:])
pyplot.show()
Explanation: And finally, we compute and plot the horizon surface.
End of explanation |
8,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Session 4
Step2: <a name="part-1---pretrained-networks"></a>
Part 1 - Pretrained Networks
In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include
Step3: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step4: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels.
Step5: <a name="preprocessdeprocessing"></a>
Preprocess/Deprocessing
Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocess we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later).
Whenever we preprocess the image, and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. We'll confirm this is performing the inverse operation, let's try to preprocess the image, then I'll have you try to deprocess it.
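A minimal sketch of what such a preprocess/deprocess pair might look like, assuming a simple per-channel mean/std normalization; the mean and std values here are invented, since each real network ships its own:

```python
import numpy as np

# Invented per-channel statistics; each real network defines its own values.
MEAN = np.array([120.0, 115.0, 100.0])
STD = np.array([60.0, 55.0, 50.0])

def preprocess(img):
    # Normalize: subtract the mean and divide by the standard deviation.
    return (img.astype(np.float64) - MEAN) / STD

def deprocess(norm):
    # Invert the normalization and return to displayable 8-bit values.
    out = norm * STD + MEAN
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
roundtrip = deprocess(preprocess(img))
print((roundtrip == img).all())  # True: deprocess undoes preprocess
```

The round-trip check is the same confirmation asked for in the text: deprocessing a preprocessed image should give back the original pixels.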
Step6: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function.
Step7: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step8: <a name="tensorboard"></a>
Tensorboard
I've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you.
Be sure to interact with the graph and click on the various modules.
For instance, if you've loaded the inception v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. this uses the code
Step9: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.
<a name="a-note-on-1x1-convolutions"></a>
A Note on 1x1 Convolutions
The 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filters size is
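The channel-mixing interpretation can be sketched in plain NumPy: with $\text{K}_H = \text{K}_W = 1$, the convolution reduces to a $\text{C}_I \to \text{C}_O$ matrix multiply applied independently at every pixel (an illustrative sketch, not the library's implementation):

```python
import numpy as np

N, H, W, C_I, C_O = 2, 5, 5, 64, 16
x = np.random.randn(N, H, W, C_I)
kernel = np.random.randn(1, 1, C_I, C_O)   # K_H = K_W = 1

# A 1x1 convolution only mixes channels; spatial extent is untouched,
# so it is a per-pixel (C_I -> C_O) matrix multiply.
out = np.einsum('nhwi,io->nhwo', x, kernel[0, 0])
print(out.shape)  # (2, 5, 5, 16): same H x W, channels changed from 64 to 16
```

This is why 1x1 convolutions are used inside the inception modules to cheaply reduce the channel dimension before the more expensive 3x3 and 5x5 convolutions.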
Step10: <a name="using-context-managers"></a>
Using Context Managers
Up until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers.
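The pattern itself is plain Python; here is a toy stand-in (FakeSession is invented purely for illustration) showing that the resource is released when the with block exits:

```python
class FakeSession:
    # A stand-in for a heavyweight resource like a session: __exit__ runs
    # when the `with` block ends, even if an exception was raised inside it.
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True
        return False  # do not swallow exceptions

    def run(self, op):
        return op

with FakeSession() as sess:
    result = sess.run(42)

print(result, sess.closed)  # 42 True: the resource was released on exit
```

With large models, this guaranteed cleanup is what prevents stale sessions from holding on to memory between cells.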
Let's see how this works w/ VGG
Step11: <a name="part-2---visualizing-gradients"></a>
Part 2 - Visualizing Gradients
Now that we know how to load a network and extract layers from it, let's grab only the pooling layers
Step12: Let's also grab the input layer
Step14: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x.
Step15: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the utils.normalize function.
Step16: <a name="part-3---basic-deep-dream"></a>
Part 3 - Basic Deep Dream
In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations.
Have a look here for inspiration
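The basic loop can be sketched with a hand-written objective standing in for a network layer's mean activation (everything below is a toy illustration, not the course's dream function):

```python
import numpy as np

# Toy objective standing in for a layer's activation: ascending the gradient
# of -sum((img - target)**2) pulls the image toward `target`.
target = np.full((8, 8), 0.5)
img = np.random.rand(8, 8)
step = 0.1

def objective_grad(x):
    return -2.0 * (x - target)

for _ in range(100):
    img = img + step * objective_grad(img)   # add the gradient, then repeat

print(np.abs(img - target).max() < 1e-6)  # True: the image converged to the objective
```

Deep Dream is this same loop with the gradient coming from a real network layer instead of a quadratic, plus the zooming and jitter tricks mentioned above.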
Step17: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take awhile (about 10 minutes using the CPU on my Macbook Pro).
Step18: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to
Step19: We'll do the same thing as before, now w/ our noise image
Step20: <a name="part-4---deep-dream-extensions"></a>
Part 4 - Deep Dream Extensions
As we saw in the lecture, we can also use the final softmax layer of a network to use during deep dream. This allows us to be explicit about the object we want hallucinated in an image.
<a name="using-the-softmax-layer"></a>
Using the Softmax Layer
Let's get another image to play with, preprocess it, and then make it 4-dimensional.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step21: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step22: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step23: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.
Step24: <a name="fractal"></a>
Fractal
During the lecture we also saw a simple trick for creating an infinite fractal
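The crop-and-zoom step can be sketched with plain array indexing (nearest-neighbour resizing; the sizes are made up for illustration):

```python
import numpy as np

def zoom_step(img, crop=2):
    # Crop a small border, then resize back to the original shape with
    # nearest-neighbour sampling: one step of the infinite zoom.
    h, w = img.shape[:2]
    inner = img[crop:h - crop, crop:w - crop]
    rows = np.arange(h) * inner.shape[0] // h
    cols = np.arange(w) * inner.shape[1] // w
    return inner[rows][:, cols]

frame = np.random.rand(32, 32, 3)
for _ in range(10):   # in a real dream, a gradient step would go in this loop
    frame = zoom_step(frame)
print(frame.shape)  # (32, 32, 3): shape preserved while the view zooms in
```

Because the frame shape never changes, the gradient step and the zoom step can alternate indefinitely, which is what produces the fractal-like animation.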
Step25: <a name="guided-hallucinations"></a>
Guided Hallucinations
Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes its own layers activations look like the guide image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step26: Preprocess both images
Step27: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream!
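A sketch of the dot-product objective on toy feature arrays (shapes invented for illustration): with $L(x) = \sum x \cdot g$, the gradient with respect to $x$ is simply the guide features $g$, so an ascent step provably increases the similarity:

```python
import numpy as np

guide_features = np.random.randn(1, 7, 7, 32)   # features of the guide image
dream_features = np.random.randn(1, 7, 7, 32)   # features of the synthesis image

# Objective L(x) = sum(x * g); its gradient with respect to x is g itself.
similarity = np.sum(dream_features * guide_features)
grad = guide_features

updated = dream_features + 0.1 * grad           # one gradient-ascent step
print(np.sum(updated * guide_features) > similarity)  # True: features moved toward the guide
```

An l2 loss would instead give a gradient proportional to the feature difference, which tends to pull the dream features to exactly match the guide rather than merely align with it.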
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step28: We'll now use another measure that we saw when developing Style Net during the lecture. This measures the pixel-to-pixel difference of neighboring pixels. When we optimize a gradient that makes the mean of these differences small, we are saying we want the difference to be low. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image.
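A sketch of such a neighboring-pixel difference measure in NumPy (one plausible formulation; the lecture's exact definition may differ):

```python
import numpy as np

def tv_loss(img):
    # Mean squared difference between horizontally and vertically
    # neighbouring pixels: small when the image is locally smooth.
    dx = img[:, 1:] - img[:, :-1]
    dy = img[1:, :] - img[:-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()

noisy = np.random.rand(16, 16)
flat = np.full((16, 16), 0.5)
print(tv_loss(flat))                  # 0.0 for a constant image
print(tv_loss(noisy) > tv_loss(flat))  # True: noise has large local differences
```

Minimizing this term alongside maximizing the feature loss is what keeps the hallucinations from dissolving into high-frequency noise.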
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
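A minimal numpy sketch of such a total variation measure (the mean-of-absolute-differences form is one common choice; the notebook's version uses TensorFlow ops so it can be differentiated):

```python
import numpy as np

def total_variation(img):
    # Mean absolute difference between horizontally and vertically
    # neighboring pixels: small when the image is smooth, large when noisy.
    dx = np.abs(img[:, 1:] - img[:, :-1])
    dy = np.abs(img[1:, :] - img[:-1, :])
    return dx.mean() + dy.mean()

flat = np.ones((8, 8))
noisy = np.random.RandomState(0).rand(8, 8)
print(total_variation(flat))       # 0.0 -- perfectly smooth
print(total_variation(noisy) > 0)  # True
```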
Step29: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss, by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try optimizing just the tv_loss and removing the feature_loss from the tf.gradients call. What happens?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step30: <a name="further-explorations"></a>
Further Explorations
In the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams.
<a name="part-5---style-net"></a>
Part 5 - Style Net
We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory, so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead.
Have a look here for inspiration
Step31: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.
Step32: Let's then grab the names of every operation in our network
Step33: Now we need an image for our content image and another one for our style image.
Step34: Let's see what the network classifies these images as just for fun
Step35: <a name="content-features"></a>
Content Features
We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer.
Step36: Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step37: <a name="style-features"></a>
Style Features
Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step38: Now we find the gram matrix which we'll use to optimize our features.
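The computation itself is small enough to sketch in numpy (the reshape-to-(H·W, C) layout and the normalization by the number of elements are one common choice; implementations vary):

```python
import numpy as np

def gram_matrix(features):
    # Reshape an H x W x C activation tensor to (H*W, C) and take F^T F:
    # entry (i, j) measures how strongly filters i and j co-activate,
    # throwing away *where* in the image they activated.
    h, w, c = features.shape
    F = features.reshape(h * w, c)
    return F.T @ F / F.size

feats = np.random.RandomState(42).rand(4, 4, 3)
G = gram_matrix(feats)
print(G.shape)              # (3, 3)
print(np.allclose(G, G.T))  # True -- gram matrices are symmetric
```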
Step39: <a name="remapping-the-input"></a>
Remapping the Input
We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step40: <a name="content-loss"></a>
Content Loss
In the lecture we saw that we'll simply find the l2 loss between our content layer features.
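As a toy illustration of that content loss (a numpy stand-in; the exact scaling of the real tf.nn.l2_loss-based version differs by implementation):

```python
import numpy as np

def content_loss(synth_features, content_features):
    # Squared difference between the synthesis image's activations and the
    # content image's activations at the chosen layer, averaged over elements.
    return np.mean((synth_features - content_features) ** 2)

content = np.array([1.0, 2.0, 3.0])
print(content_loss(content, content))        # 0.0 -- identical features
print(content_loss(content + 1.0, content))  # 1.0 -- off by one everywhere
```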
Step41: <a name="style-loss"></a>
Style Loss
Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. Intuitively, this is finding what is common across all convolution filters, and trying to enforce that commonality between the synthesis and style images' gram matrices.
Step42: <a name="total-variation-loss"></a>
Total Variation Loss
And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss.
Step43: <a name="training"></a>
Training
We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step44: And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network than VGG, then you will not need to feed in the dropout parameters like I've done here.
Step45: <a name="assignment-submission"></a>
Assignment Submission
After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as | Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda'
'and then restart `jupyter notebook`:\n',
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, vae, dft, vgg16, nb_utils
except ImportError:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
          "the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
Explanation: Session 4: Visualizing Representations
Assignment: Deep Dream and Style Net
<p class='lead'>
Creative Applications of Deep Learning with Google's Tensorflow
Parag K. Mital
Kadenze, Inc.
</p>
Overview
In this homework, we'll first walk through visualizing the gradients of a trained convolutional network. Recall from the last session that we had trained a variational convolutional autoencoder. We also trained a deep convolutional network. In both of these networks, we learned only a few tools for understanding how the model performs. These included measuring the loss of the network and visualizing the W weight matrices and/or convolutional filters of the network.
During the lecture we saw how to visualize the gradients of Inception, Google's state of the art network for object recognition. This resulted in a much more powerful technique for understanding how a network's activations transform or accentuate the representations in the input space. We'll explore this more in Part 1.
We also explored how to use the gradients of a particular layer or neuron within a network with respect to its input for performing "gradient ascent". This resulted in Deep Dream. We'll explore this more in Parts 2-4.
We also saw how the gradients at different layers of a convolutional network could be optimized for another image, resulting in the separation of content and style losses, depending on the chosen layers. This allowed us to synthesize new images that shared another image's content and/or style, even if they came from separate images. We'll explore this more in Part 5.
Finally, you'll package all the GIFs you create throughout this notebook and upload them to Kadenze.
<a name="learning-goals"></a>
Learning Goals
Learn how to inspect deep networks by visualizing their gradients
Learn how to "deep dream" with different objective functions and regularization techniques
Learn how to "stylize" an image using content and style losses from different images
Table of Contents
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Part 1 - Pretrained Networks
Graph Definition
Preprocess/Deprocessing
Tensorboard
A Note on 1x1 Convolutions
Network Labels
Using Context Managers
Part 2 - Visualizing Gradients
Part 3 - Basic Deep Dream
Part 4 - Deep Dream Extensions
Using the Softmax Layer
Fractal
Guided Hallucinations
Further Explorations
Part 5 - Style Net
Network
Content Features
Style Features
Remapping the Input
Content Loss
Style Loss
Total Variation Loss
Training
Assignment Submission
<!-- /MarkdownTOC -->
End of explanation
from libs import vgg16, inception, i2v
Explanation: <a name="part-1---pretrained-networks"></a>
Part 1 - Pretrained Networks
In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include:
Inception v3
This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is only about 50MB!
Inception v5
This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is only about 50MB! It presents a few extensions to v5 which are not documented anywhere that I've found, as of yet...
Visual Group Geometry @ Oxford's 16 layer
This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects. This model is nearly half a gigabyte, about 10x larger in size than the inception network. The trade-off is that it is very slow.
Visual Group Geometry @ Oxford's Face Recognition
This network has been trained on the VGG Face Dataset and its final output layer is a softmax layer denoting 1 of 2622 different possible people.
Illustration2Vec
This network has been trained on illustrations and manga and its final output layer is 4096 features.
Illustration2Vec Tag
Please do not use this network if you are under the age of 18 (seriously!)
This network has been trained on manga and its final output layer is one of 1539 labels.
When we use a pre-trained network, we load a network's definition and its weights which have already been trained. The network's definition includes a set of operations such as convolutions, and adding biases, but all of their values, i.e. the weights, have already been trained.
<a name="graph-definition"></a>
Graph Definition
In the libs folder, you will see a few new modules for loading the above pre-trained networks. Each module is structured similarly to help you understand how they are loaded and include example code for using them. Each module includes a preprocess function for using before sending the image to the network. And when using deep dream techniques, we'll be using the deprocess function to undo the preprocess function's manipulations.
Let's take a look at loading one of these. Every network except for i2v includes a key 'labels' denoting what labels the network has been trained on. If you are under the age of 18, please do not use the i2v_tag model, as its labels are unsuitable for minors.
Let's load the libraries for the different pre-trained networks:
End of explanation
# Stick w/ Inception for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
Explanation: Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
print(net.keys())
Explanation: Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels.
End of explanation
# First, let's get an image:
og = plt.imread('clinton.png')[..., :3]
plt.imshow(og)
print(og.min(), og.max())
Explanation: <a name="preprocessdeprocessing"></a>
Preprocess/Deprocessing
Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocessing we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later).
Whenever we preprocess the image, and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. To confirm these perform inverse operations, we'll preprocess the image, and then I'll have you try to deprocess it.
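To make the inverse relationship concrete, here is a toy preprocess/deprocess pair in numpy. The mean/std values below are placeholders, not any network's actual constants (VGG, for instance, subtracts its own ImageNet channel means); the point is only that deprocess undoes preprocess:

```python
import numpy as np

# Placeholder per-channel statistics -- each real network uses its own.
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def preprocess(img):
    # Normalize a float image in [0, 1] per channel.
    return (img - MEAN) / STD

def deprocess(img):
    # Invert the normalization and clip back into displayable range.
    return np.clip(img * STD + MEAN, 0.0, 1.0)

og = np.random.RandomState(0).rand(4, 4, 3)
roundtrip = deprocess(preprocess(og))
print(np.allclose(og, roundtrip))  # True -- deprocess inverts preprocess
```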
End of explanation
# Now call the preprocess function. This will preprocess our
# image ready for being input to the network, except for changes
# to the dimensions. I.e., we will still need to convert this
# to a 4-dimensional Tensor once we input it to the network.
# We'll see how that works later.
img = net['preprocess'](og)
print(img.min(), img.max())
Explanation: Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function.
End of explanation
deprocessed = ...
plt.imshow(deprocessed)
plt.show()
Explanation: Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
nb_utils.show_graph(net['graph_def'])
Explanation: <a name="tensorboard"></a>
Tensorboard
I've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you.
Be sure to interact with the graph and click on the various modules.
For instance, if you've loaded the inception v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. this uses the code: with tf.variable_scope("conv2d0") to create a set of operations with the prefix "conv2d0/". If you expand this scope, you'll see another scope, "pre_relu". This is created using another tf.variable_scope("pre_relu"), so that any new variables will have the prefix "conv2d0/pre_relu". Finally, inside here, you'll see the convolution operation (tf.nn.conv2d) and the 4d weight tensor, "w" (e.g. created using tf.get_variable), used for convolution (and so has the name, "conv2d0/pre_relu/w". Just after the convolution is the addition of the bias, b. And finally after exiting the "pre_relu" scope, you should be able to see the "conv2d0" operation which applies the relu nonlinearity. In summary, that region of the graph can be created in Tensorflow like so:
python
input = tf.placeholder(...)
with tf.variable_scope('conv2d0'):
with tf.variable_scope('pre_relu'):
w = tf.get_variable(...)
h = tf.nn.conv2d(input, h, ...)
b = tf.get_variable(...)
h = tf.nn.bias_add(h, b)
h = tf.nn.relu(h)
End of explanation
net['labels']
label_i = 851
print(net['labels'][label_i])
Explanation: If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.
<a name="a-note-on-1x1-convolutions"></a>
A Note on 1x1 Convolutions
The 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filter's size is: $1\ x\ 1\ x\ \text{C}_I$ and this is performed for each output channel $\text{C}_O$. What this is doing is filtering the information only in the channels dimension, not the spatial dimensions. The output of this convolution will be a $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_O$ output tensor. The only thing that changes in the output is the number of output filters.
The 1x1 convolution operation is essentially reducing the amount of information in the channels dimensions before performing a much more expensive operation, e.g. a 3x3 or 5x5 convolution. Effectively, it is a very clever trick for dimensionality reduction used in many state of the art convolutional networks. Another way to look at it is that it is preserving the spatial information, but at each location, there is a fully connected network taking all the information from every input channel, $\text{C}_I$, and reducing it down to $\text{C}_O$ channels (or could easily also be up, but that is not the typical use case for this). So it's not really a convolution, but we can use the convolution operation to perform it at every location in our image.
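The per-location fully connected view can be sketched directly in numpy (assuming NHWC layout; a real tf.nn.conv2d with a $1\ x\ 1\ x\ \text{C}_I\ x\ \text{C}_O$ kernel computes the same thing):

```python
import numpy as np

def conv1x1(x, w):
    # x: (N, H, W, C_I) input; w: (C_I, C_O) weights.
    # A 1x1 convolution is just a matrix multiply over the channel
    # dimension, applied independently at every spatial location.
    n, h, width, ci = x.shape
    out = x.reshape(-1, ci) @ w
    return out.reshape(n, h, width, -1)

x = np.random.RandomState(0).rand(1, 8, 8, 64)
w = np.random.RandomState(1).rand(64, 16)
y = conv1x1(x, w)
print(x.shape, '->', y.shape)  # (1, 8, 8, 64) -> (1, 8, 8, 16)
```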
If you are interested in reading more about this architecture, I highly encourage you to read Network in Network, Christian Szegedy's work on the Inception network, Highway Networks, Residual Networks, and Ladder Networks.
In this course, we'll stick to focusing on the applications of these, while trying to delve as much into the code as possible.
<a name="network-labels"></a>
Network Labels
Let's now look at the labels:
End of explanation
# Load the VGG network. Scroll back up to where we loaded the inception
# network if you are unsure. It is inside the "vgg16" module...
net = ...
assert(net['labels'][0] == (0, 'n01440764 tench, Tinca tinca'))
# Let's explicity use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
# Now we can get all the operations that belong to the graph `g`:
names = [op.name for op in g.get_operations()]
print(names)
Explanation: <a name="using-context-managers"></a>
Using Context Managers
Up until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers.
Let's see how this works w/ VGG:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# First find all the pooling layers in the network. You can
# use list comprehension to iterate over all the "names" we just
# created, finding whichever ones have the name "pool" in them.
# Then be sure to append a ":0" to the names
features = ...
# Let's print them
print(features)
# This is what we want to have at the end. You could just copy this list
# if you are stuck!
assert(features == ['net/pool1:0', 'net/pool2:0', 'net/pool3:0', 'net/pool4:0', 'net/pool5:0'])
Explanation: <a name="part-2---visualizing-gradients"></a>
Part 2 - Visualizing Gradients
Now that we know how to load a network and extract layers from it, let's grab only the pooling layers:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Use the function 'get_tensor_by_name' and the 'names' array to help you
# get the first tensor in the network. Remember you have to add ":0" to the
# name to get the output of an operation which is the tensor.
x = ...
assert(x.name == 'net/images:0')
Explanation: Let's also grab the input layer:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
def plot_gradient(img, x, feature, g, device='/cpu:0'):
    """Visualize the network's gradient activation
    when backpropagated to the original input image. This
    is effectively telling us which pixels contribute to the
    predicted layer, class, or given neuron within the layer.
    """
# We'll be explicit about the graph and the device
# by using a context manager:
with tf.Session(graph=g) as sess, g.device(device):
saliency = tf.gradients(tf.reduce_mean(feature), x)
this_res = sess.run(saliency[0], feed_dict={x: img})
grad = this_res[0] / np.max(np.abs(this_res))
return grad
Explanation: We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x.
End of explanation
og = plt.imread('clinton.png')[..., :3]
img = net['preprocess'](og)[np.newaxis]
fig, axs = plt.subplots(1, len(features), figsize=(20, 10))
for i in range(len(features)):
axs[i].set_title(features[i])
grad = plot_gradient(img, x, g.get_tensor_by_name(features[i]), g)
axs[i].imshow(utils.normalize(grad))
Explanation: Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the utils.normalize function.
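For reference, a minimal stand-in for that kind of normalization (my own sketch, not necessarily identical to utils.normalize) just shifts and scales the gradient into [0, 1] for display:

```python
import numpy as np

def normalize(grad):
    # Shift so the minimum is 0, then scale so the maximum is ~1;
    # the epsilon guards against dividing by zero for an all-equal input.
    g = grad - grad.min()
    return g / (g.max() + 1e-10)

g = np.random.RandomState(0).randn(4, 4)
n = normalize(g)
print(n.min(), round(n.max(), 6))  # 0.0 and ~1.0
```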
End of explanation
def dream(img, gradient, step, net, x, n_iterations=50, plot_step=10):
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
fig, axs = plt.subplots(1, n_iterations // plot_step, figsize=(20, 10))
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
            this_res /= (np.max(np.abs(this_res)) + 1e-8)
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Or we could use the `utils.normalize function:
# this_res = utils.normalize(this_res)
# Experiment with all of the above options. They will drastically
# effect the resulting dream, and really depend on the network
# you use, and the way the network handles normalization of the
# input image, and the step size you choose! Lots to explore!
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
axs[it_i // plot_step].imshow(m)
# We'll run it for 3 iterations
n_iterations = 3
# Think of this as our learning rate. This is how much of
# the gradient we'll add back to the input image
step = 1.0
# Every 1 iterations, we'll plot the current deep dream
plot_step = 1
Explanation: <a name="part-3---basic-deep-dream"></a>
Part 3 - Basic Deep Dream
In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations.
Have a look here for inspiration:
https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB
https://mtyka.github.io/deepdream/2016/02/05/bilateral-class-vis.html
Let's stick the necessary bits in a function and try exploring how deep dream amplifies the representations of the chosen layers:
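Before diving into images, the core loop is worth seeing on a toy objective. Here gradient ascent climbs $f(x) = -\lVert x - \text{target} \rVert^2$, whose gradient $2(\text{target} - x)$ we can write by hand; the normalize-then-step pattern is the same one the dream function uses:

```python
import numpy as np

target = np.array([1.0, -2.0, 3.0])  # stands in for "what the layer likes"
x = np.zeros(3)                      # stands in for the input image
step = 0.1

for _ in range(100):
    grad = 2 * (target - x)                # gradient of the objective
    grad /= (np.max(np.abs(grad)) + 1e-8)  # same normalization trick
    x += grad * step                       # ascend the gradient

print(np.round(x, 1))  # x has climbed to within about a step of the target
```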
End of explanation
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = g.get_tensor_by_name(features[feature_i])
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = tf.gradients(tf.reduce_mean(layer), x)
# Dream w/ our image
dream(img, gradient, step, net, x, n_iterations=n_iterations, plot_step=plot_step)
Explanation: Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take awhile (about 10 minutes using the CPU on my Macbook Pro).
End of explanation
noise = net['preprocess'](
np.random.rand(256, 256, 3) * 0.1 + 0.45)[np.newaxis]
Explanation: Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to:
End of explanation
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = ...
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = ...
# Dream w/ the noise image. Complete this!
dream(...)
Explanation: We'll do the same thing as before, now w/ our noise image:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Load your own image here
og = ...
plt.imshow(og)
# Preprocess the image and make sure it is 4-dimensional by adding a new axis to the 0th dimension:
img = ...
assert(img.ndim == 4)
# Let's get the softmax layer
print(names[-2])
layer = g.get_tensor_by_name(names[-2] + ":0")
# And find its shape
with tf.Session(graph=g) as sess, g.device(device):
layer_shape = tf.shape(layer).eval(feed_dict={x:img})
# We can find out how many neurons it has by feeding it an image and
# calculating the shape. The number of output channels is the last dimension.
n_els = layer_shape[-1]
# Let's pick a label. First let's print out every label and then find one we like:
print(net['labels'])
Explanation: <a name="part-4---deep-dream-extensions"></a>
Part 4 - Deep Dream Extensions
As we saw in the lecture, we can also use the final softmax layer of a network to use during deep dream. This allows us to be explicit about the object we want hallucinated in an image.
<a name="using-the-softmax-layer"></a>
Using the Softmax Layer
Let's get another image to play with, preprocess it, and then make it 4-dimensional.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Pick a neuron. Or pick a random one. This should be 0-n_els
neuron_i = ...
print(net['labels'][neuron_i])
assert(neuron_i >= 0 and neuron_i < n_els)
# And we'll create an activation of this layer which is very close to 0
layer_vec = np.ones(layer_shape) / 100.0
# Except for the randomly chosen neuron which will be very close to 1
layer_vec[..., neuron_i] = 0.99
Explanation: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Explore different parameters for this section.
n_iterations = 51
plot_step = 5
# If you use a different network, you will definitely need to experiment
# with the step size, as each network normalizes the input image differently.
step = 0.2
Explanation: Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# Let's normalize it by the maximum activation
        this_res /= (np.max(np.abs(this_res)) + 1e-8)
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Save the gif
gif.build_gif(imgs, saveto='softmax.gif')
ipyd.Image(url='softmax.gif?i={}'.format(
np.random.rand()), height=300, width=300)
Explanation: Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.
End of explanation
n_iterations = 101
plot_step = 10
step = 0.1
crop = 1
imgs = []
n_imgs, height, width, *ch = img.shape
with tf.Session(graph=g) as sess, g.device(device):
# Explore changing the gradient here from max to mean
# or even try using different concepts we learned about
# when creating style net, such as using a total variational
# loss on `x`.
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer
# we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# This is just one way we could normalize the
# gradient. It helps to look at the range of your image's
# values, e.g. if it is 0 - 1, or -115 to +115,
# and then consider the best way to normalize the gradient.
# For some networks, it might not even be necessary
# to perform this normalization, especially if you
# leave the dream to run for enough iterations.
# this_res = this_res / (np.std(this_res) + 1e-10)
this_res = this_res / (np.max(np.abs(this_res)) + 1e-10)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Optionally, we could apply any number of regularization
# techniques... Try exploring different ways of regularizing
# the gradient ascent process. If you are adventurous, you can
# also explore changing the gradient above using a
# total variational loss, as we used in the style net
# implementation during the lecture. I leave that to you
# as an exercise!
# Crop a 1 pixel border from height and width
img_copy = img_copy[:, crop:-crop, crop:-crop, :]
# Resize (Note: in the lecture, we used scipy's resize which
# could not resize images outside of 0-1 range, and so we had
# to store the image ranges. This is a much simpler resize
# method that allows us to `preserve_range`.)
img_copy = resize(img_copy[0], (height, width), order=3,
clip=False, preserve_range=True
)[np.newaxis].astype(np.float32)
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Create a GIF
gif.build_gif(imgs, saveto='fractal.gif')
ipyd.Image(url='fractal.gif?i=2', height=300, width=300)
Explanation: <a name="fractal"></a>
Fractal
During the lecture we also saw a simple trick for creating an infinite fractal: crop the image and then resize it. This can produce some lovely aesthetics and really show some strong object hallucinations if left long enough and with the right parameters for step size/normalization/regularization. Feel free to experiment with the code below, adding your own regularizations as shown in the lecture to produce different results!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
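The crop-then-resize trick is independent of the network: each iteration throws away a one-pixel border and stretches what remains back to the original size, so the dream slowly zooms in. Here is a dependency-free sketch of just that geometry (nearest-neighbor resampling is a crude stand-in for the cubic resize used above):

```python
import numpy as np

def crop_and_zoom(img, crop=1):
    # Crop a `crop`-pixel border, then resize back to the original shape
    # with nearest-neighbor sampling.
    h, w = img.shape[:2]
    cropped = img[crop:h - crop, crop:w - crop]
    rows = np.linspace(0, cropped.shape[0] - 1, h).round().astype(int)
    cols = np.linspace(0, cropped.shape[1] - 1, w).round().astype(int)
    return cropped[rows][:, cols]

img = np.arange(64, dtype=np.float32).reshape(8, 8)
zoomed = crop_and_zoom(img)
```

Applied repeatedly, this maps the image's interior onto the full frame, which is exactly the infinite-zoom effect in the fractal loop above.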
# Replace these with your own images!
guide_og = plt.imread(...)[..., :3]
dream_og = plt.imread(...)[..., :3]
assert(guide_og.ndim == 3 and guide_og.shape[-1] == 3)
assert(dream_og.ndim == 3 and dream_og.shape[-1] == 3)
Explanation: <a name="guided-hallucinations"></a>
Guided Hallucinations
Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes its own layers' activations look like the guide image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
guide_img = net['preprocess'](guide_og)[np.newaxis]
dream_img = net['preprocess'](dream_og)[np.newaxis]
fig, axs = plt.subplots(1, 2, figsize=(7, 4))
axs[0].imshow(guide_og)
axs[1].imshow(dream_og)
Explanation: Preprocess both images:
End of explanation
x = g.get_tensor_by_name(names[0] + ":0")
# Experiment with the weighting
feature_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
feature_loss = tf.Variable(0.0)
# Explore different layers/subsets of layers. This is just an example.
for feature_i in features[3:5]:
# Get the activation of the feature
layer = g.get_tensor_by_name(feature_i)
# Do the same for our guide image
guide_layer = sess.run(layer, feed_dict={x: guide_img})
# Now we need to measure how similar they are!
# We'll use the dot product, which requires us to first reshape both
# features to a 2D vector. But you should experiment with other ways
# of measuring similarity such as l1 or l2 loss.
# Reshape each layer to 2D vector
layer = tf.reshape(layer, [-1, 1])
guide_layer = guide_layer.reshape(-1, 1)
# Now calculate their dot product
correlation = tf.matmul(guide_layer.T, layer)
# And weight the loss by a factor so we can control its influence
feature_loss += feature_loss_weight * correlation
Explanation: Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
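One thing to keep in mind when experimenting: the two kinds of measures point in opposite directions. A dot product grows when activations co-occur, while an l1/l2 loss shrinks when they match, so the sign of your objective changes with your choice. A quick NumPy illustration on toy vectors (not real network activations):

```python
import numpy as np

def dot_similarity(a, b):
    # Larger when the two feature maps activate in the same places.
    return float(a.ravel() @ b.ravel())

def l2_distance(a, b):
    # Smaller when the two feature maps match element-wise.
    return float(np.sum((a.ravel() - b.ravel()) ** 2))

guide = np.array([1.0, 0.0, 2.0, 0.0])
close = np.array([0.9, 0.1, 1.8, 0.0])   # resembles the guide
far = np.array([0.0, 1.0, 0.0, 2.0])     # activates in different places
```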
n_img, height, width, ch = dream_img.shape
# We'll weight the overall contribution of the total variational loss
# Experiment with this weighting
tv_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
# Penalize variations in neighboring pixels, enforcing smoothness
dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])
dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])
# We will calculate their difference raised to a power to push smaller
# differences closer to 0 and larger differences higher.
# Experiment w/ the power you raise this to and see how it affects the result
tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2))
Explanation: We'll now use another measure that we saw when developing Style Net during the lecture. This measures the pixel-to-pixel difference of neighboring pixels. When we optimize a gradient that makes the mean of these differences small, we are saying that we want the differences to be low. This allows us to smooth our image in the same way that we did when using the Gaussian to blur the image.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
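In plain NumPy, that loss is the squared difference between each pixel and its right and lower neighbors, raised to a power and averaged: a perfectly flat image scores zero and a noisy one scores high. A sketch mirroring the TensorFlow version above (single-channel image for brevity):

```python
import numpy as np

def tv_loss_np(img, power=1.2):
    # Squared differences with the right neighbor (dx) and the one below (dy),
    # mirroring the slicing used in the TensorFlow version above.
    dx = (img[:-1, :-1] - img[:-1, 1:]) ** 2
    dy = (img[:-1, :-1] - img[1:, :-1]) ** 2
    return float(np.mean((dx + dy) ** power))

flat = np.ones((8, 8))
noisy = flat + 0.5 * np.random.RandomState(0).randn(8, 8)
```

Minimizing this quantity pushes neighboring pixels toward each other, which is why it acts like a smoothing regularizer.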
# Experiment with the step size!
step = 0.1
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
# Experiment with just optimizing the tv_loss or negative tv_loss to understand what it is doing!
gradient = tf.gradients(-tv_loss + feature_loss, x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = dream_img.copy()
with tf.Session(graph=g) as sess, g.device(device):
sess.run(tf.global_variables_initializer())
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
this_res /= (np.max(np.abs(this_res)) + 1e-8)
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
# Think about what this gradient represents?
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
gif.build_gif(imgs, saveto='guided.gif')
ipyd.Image(url='guided.gif?i=0', height=300, width=300)
Explanation: Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
sess.close()
tf.reset_default_graph()
# Stick w/ VGG for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
# Let's explicity use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
Explanation: <a name="further-explorations"></a>
Further Explorations
In the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams.
<a name="part-5---style-net"></a>
Part 5 - Style Net
We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead.
Have a look here for inspiration:
https://mtyka.github.io/code/2015/10/02/experiments-with-style-transfer.html
http://kylemcdonald.net/stylestudies/
<a name="network"></a>
Network
Let's reset the graph and load up a network. I'll include code here for loading up any of our pretrained networks so you can explore each of them!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
Explanation: Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.
End of explanation
names = [op.name for op in g.get_operations()]
Explanation: Let's then grab the names of every operation in our network:
End of explanation
content_og = plt.imread('arles.png')[..., :3]
style_og = plt.imread('clinton.png')[..., :3]
fig, axs = plt.subplots(1, 2)
axs[0].imshow(content_og)
axs[0].set_title('Content Image')
axs[0].grid('off')
axs[1].imshow(style_og)
axs[1].set_title('Style Image')
axs[1].grid('off')
# We'll save these with a specific name to include in your submission
plt.imsave(arr=content_og, fname='content.png')
plt.imsave(arr=style_og, fname='style.png')
content_img = net['preprocess'](content_og)[np.newaxis]
style_img = net['preprocess'](style_og)[np.newaxis]
Explanation: Now we need an image for our content image and another one for our style image.
End of explanation
# Grab the tensor defining the input to the network
x = ...
# And grab the tensor defining the softmax layer of the network
softmax = ...
# Remember from the lecture that we have to set the dropout
# "keep probability" to 1.0.
keep_probability = np.ones([1, 4096])
for img in [content_img, style_img]:
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
res = softmax.eval(feed_dict={x: img,
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})[0]
print([(res[idx], net['labels'][idx])
for idx in res.argsort()[-5:][::-1]])
Explanation: Let's see what the network classifies these images as just for fun:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
print(names)
Explanation: <a name="content-features"></a>
Content Features
We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer.
End of explanation
# Experiment w/ different layers here. You'll need to change this if you
# use another network!
content_layer = 'net/conv3_2/conv3_2:0'
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_features = g.get_tensor_by_name(content_layer).eval(
session=sess,
feed_dict={x: content_img,
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})
Explanation: Pick a layer to use for the content features. If you aren't using VGG, remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Experiment with different layers and layer subsets. You'll need to change these
# if you use a different network!
style_layers = ['net/conv1_1/conv1_1:0',
'net/conv2_1/conv2_1:0',
'net/conv3_1/conv3_1:0',
'net/conv4_1/conv4_1:0',
'net/conv5_1/conv5_1:0']
style_activations = []
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
for style_i in style_layers:
style_activation_i = g.get_tensor_by_name(style_i).eval(
feed_dict={x: style_img,
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})
style_activations.append(style_activation_i)
Explanation: <a name="style-features"></a>
Style Features
Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
style_features = []
for style_activation_i in style_activations:
s_i = np.reshape(style_activation_i, [-1, style_activation_i.shape[-1]])
gram_matrix = np.matmul(s_i.T, s_i) / s_i.size
style_features.append(gram_matrix.astype(np.float32))
Explanation: Now we find the gram matrix which we'll use to optimize our features.
End of explanation
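It's worth seeing why the gram matrix captures "style" rather than content: it accumulates channel-to-channel correlations over every spatial position, so it is symmetric and completely insensitive to where activations occur. A quick NumPy check on a fake activation map:

```python
import numpy as np

# A fake conv activation: height x width x channels.
act = np.random.RandomState(0).randn(4, 4, 3).astype(np.float32)
flat = act.reshape(-1, act.shape[-1])      # (H*W, C)
gram = flat.T @ flat / flat.size           # (C, C), same formula as above

# Shuffling the spatial positions leaves the gram matrix unchanged:
perm = np.random.RandomState(1).permutation(flat.shape[0])
gram_shuffled = flat[perm].T @ flat[perm] / flat.size
```

That spatial invariance is what lets the synthesis image rearrange content freely while still matching the style image's texture statistics.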
tf.reset_default_graph()
g = tf.Graph()
# Get the network again
net = vgg16.get_vgg_model()
# Load up a session which we'll use to import the graph into.
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# We can set the `net_input` to our content image
# or perhaps another image
# or an image of noise
# net_input = tf.Variable(content_img / 255.0)
net_input = tf.get_variable(
name='input',
shape=content_img.shape,
dtype=tf.float32,
initializer=tf.random_normal_initializer(
mean=np.mean(content_img), stddev=np.std(content_img)))
# Now we load the network again, but this time replacing our placeholder
# with the trainable tf.Variable
tf.import_graph_def(
net['graph_def'],
name='net',
input_map={'images:0': net_input})
Explanation: <a name="remapping-the-input"></a>
Remapping the Input
We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_loss = tf.nn.l2_loss((g.get_tensor_by_name(content_layer) -
content_features) /
content_features.size)
Explanation: <a name="content-loss"></a>
Content Loss
In the lecture we saw that we'll simply find the l2 loss between our content layer features.
End of explanation
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
style_loss = np.float32(0.0)
for style_layer_i, style_gram_i in zip(style_layers, style_features):
layer_i = g.get_tensor_by_name(style_layer_i)
layer_shape = layer_i.get_shape().as_list()
layer_size = layer_shape[1] * layer_shape[2] * layer_shape[3]
layer_flat = tf.reshape(layer_i, [-1, layer_shape[3]])
gram_matrix = tf.matmul(tf.transpose(layer_flat), layer_flat) / layer_size
style_loss = tf.add(style_loss, tf.nn.l2_loss((gram_matrix - style_gram_i) / np.float32(style_gram_i.size)))
Explanation: <a name="style-loss"></a>
Style Loss
Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. Intuitively, this is finding what is common across all convolution filters, and trying to enforce the commonality between the synthesis and style image's gram matrix.
End of explanation
def total_variation_loss(x):
h, w = x.get_shape().as_list()[1], x.get_shape().as_list()[2]
dx = tf.square(x[:, :h-1, :w-1, :] - x[:, :h-1, 1:, :])
dy = tf.square(x[:, :h-1, :w-1, :] - x[:, 1:, :w-1, :])
return tf.reduce_sum(tf.pow(dx + dy, 1.25))
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
tv_loss = total_variation_loss(net_input)
Explanation: <a name="total-variation-loss"></a>
Total Variation Loss
And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss.
End of explanation
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# Experiment w/ the weighting of these! They produce WILDLY different
# results.
loss = 5.0 * content_loss + 1.0 * style_loss + 0.001 * tv_loss
optimizer = tf.train.AdamOptimizer(0.05).minimize(loss)
Explanation: <a name="training"></a>
Training
We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
imgs = []
n_iterations = 100
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
sess.run(tf.global_variables_initializer())
# map input to noise
og_img = net_input.eval()
for it_i in range(n_iterations):
_, this_loss, synth = sess.run([optimizer, loss, net_input], feed_dict={
'net/dropout_1/random_uniform:0': keep_probability,
'net/dropout/random_uniform:0': keep_probability})
print("%d: %f, (%f - %f)" %
(it_i, this_loss, np.min(synth), np.max(synth)))
if it_i % 5 == 0:
m = vgg16.deprocess(synth[0])
imgs.append(m)
plt.imshow(m)
plt.show()
gif.build_gif(imgs, saveto='stylenet.gif')
ipyd.Image(url='stylenet.gif?i=0', height=300, width=300)
Explanation: And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here.
End of explanation
utils.build_submission('session-4.zip',
('softmax.gif',
'fractal.gif',
'guided.gif',
'content.png',
'style.png',
'stylenet.gif',
'session-4.ipynb'))
Explanation: <a name="assignment-submission"></a>
Assignment Submission
After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:
<pre>
session-4/
session-4.ipynb
softmax.gif
fractal.gif
guided.gif
content.png
style.png
stylenet.gif
</pre>
You'll then submit this zip file for your third assignment on Kadenze for "Assignment 4: Deep Dream and Style Net"! Remember to complete the rest of the assignment, gallery commenting on your peers work, to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me.
To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!
End of explanation |
8,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artificial Intelligence Nanodegree
Convolutional Neural Networks
In your upcoming project, you will download pre-computed bottleneck features. In this notebook, we'll show you how to calculate VGG-16 bottleneck features on a toy dataset. Note that unless you have a powerful GPU, computing the bottleneck features takes a significant amount of time.
1. Load and Preprocess Sample Images
Before supplying an image to a pre-trained network in Keras, there are some required preprocessing steps. You will learn more about this in the project; for now, we have implemented this functionality for you in the first code cell of the notebook. We have imported a very small dataset of 8 images and stored the preprocessed image input as img_input. Note that the dimensionality of this array is (8, 224, 224, 3). In this case, each of the 8 images is a 3D tensor, with shape (224, 224, 3).
Step1: 2. Recap How to Import VGG-16
Recall how we import the VGG-16 network (including the final classification layer) that has been pre-trained on ImageNet.
Step2: For this network, model.predict returns a 1000-dimensional probability vector containing the predicted probability that an image belongs to each of the 1000 ImageNet categories. The dimensionality of the obtained output from passing img_input through the model is (8, 1000). The first value of 8 merely denotes that 8 images were passed through the network.
Step3: 3. Import the VGG-16 Model, with the Final Fully-Connected Layers Removed
When performing transfer learning, we need to remove the final layers of the network, as they are too specific to the ImageNet database. This is accomplished in the code cell below.
Step4: 4. Extract Output of Final Max Pooling Layer
Now, the network stored in model is a truncated version of the VGG-16 network, where the final three fully-connected layers have been removed. In this case, model.predict returns a 3D array (with dimensions $7\times 7\times 512$) corresponding to the final max pooling layer of VGG-16. The dimensionality of the obtained output from passing img_input through the model is (8, 7, 7, 512). The first value of 8 merely denotes that 8 images were passed through the network. | Python Code:
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
import numpy as np
import glob
img_paths = glob.glob("images/*.jpg")
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in img_paths]
return np.vstack(list_of_tensors)
# calculate the image input. you will learn more about how this works in the project!
img_input = preprocess_input(paths_to_tensor(img_paths))
print(img_input.shape)
Explanation: Artificial Intelligence Nanodegree
Convolutional Neural Networks
In your upcoming project, you will download pre-computed bottleneck features. In this notebook, we'll show you how to calculate VGG-16 bottleneck features on a toy dataset. Note that unless you have a powerful GPU, computing the bottleneck features takes a significant amount of time.
1. Load and Preprocess Sample Images
Before supplying an image to a pre-trained network in Keras, there are some required preprocessing steps. You will learn more about this in the project; for now, we have implemented this functionality for you in the first code cell of the notebook. We have imported a very small dataset of 8 images and stored the preprocessed image input as img_input. Note that the dimensionality of this array is (8, 224, 224, 3). In this case, each of the 8 images is a 3D tensor, with shape (224, 224, 3).
End of explanation
from keras.applications.vgg16 import VGG16
model = VGG16()
model.summary()
Explanation: 2. Recap How to Import VGG-16
Recall how we import the VGG-16 network (including the final classification layer) that has been pre-trained on ImageNet.
End of explanation
model.predict(img_input).shape
Explanation: For this network, model.predict returns a 1000-dimensional probability vector containing the predicted probability that an image belongs to each of the 1000 ImageNet categories. The dimensionality of the obtained output from passing img_input through the model is (8, 1000). The first value of 8 merely denotes that 8 images were passed through the network.
End of explanation
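Each of those 8 rows is a softmax distribution over the 1000 classes, and a typical next step is pulling out the top-k predictions per row. A toy sketch of the argsort idiom used for that later in these notebooks (6 made-up classes instead of 1000):

```python
import numpy as np

def top_k(probs, labels, k=5):
    # argsort is ascending, so take the last k indices and reverse them.
    idx = np.argsort(probs)[-k:][::-1]
    return [(labels[i], float(probs[i])) for i in idx]

# Toy stand-in for one softmax row (a real VGG-16 row has 1000 entries).
probs = np.array([0.05, 0.40, 0.10, 0.25, 0.15, 0.05])
labels = ['cat', 'dog', 'fox', 'owl', 'bee', 'ant']
best = top_k(probs, labels, k=3)
```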
from keras.applications.vgg16 import VGG16
model = VGG16(include_top=False)
model.summary()
Explanation: 3. Import the VGG-16 Model, with the Final Fully-Connected Layers Removed
When performing transfer learning, we need to remove the final layers of the network, as they are too specific to the ImageNet database. This is accomplished in the code cell below.
End of explanation
print(model.predict(img_input).shape)
Explanation: 4. Extract Output of Final Max Pooling Layer
Now, the network stored in model is a truncated version of the VGG-16 network, where the final three fully-connected layers have been removed. In this case, model.predict returns a 3D array (with dimensions $7\times 7\times 512$) corresponding to the final max pooling layer of VGG-16. The dimensionality of the obtained output from passing img_input through the model is (8, 7, 7, 512). The first value of 8 merely denotes that 8 images were passed through the network.
End of explanation |
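These (8, 7, 7, 512) outputs are the bottleneck features themselves. Before training a small classifier on top of them, each image's block is usually either flattened or global-average-pooled; a sketch of both options on random stand-in data:

```python
import numpy as np

# Stand-in for the (8, 7, 7, 512) bottleneck tensor computed above.
bottleneck = np.random.RandomState(0).rand(8, 7, 7, 512).astype(np.float32)

# For transfer learning, each image's 7x7x512 block is typically flattened
# (or global-average-pooled) before feeding a small classifier on top.
flattened = bottleneck.reshape(len(bottleneck), -1)   # one long vector per image
pooled = bottleneck.mean(axis=(1, 2))                 # average over the 7x7 grid
```

Either form turns the convolutional output into fixed-length per-image vectors that a dense layer or logistic regression can consume directly.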
8,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
With an equal number of origins and destinations (n=16)
Step1: # With non-equal number of origins (n=9) and destinations (m=25) | Python Code:
origins = ps.weights.lat2W(4,4)
dests = ps.weights.lat2W(4,4)
origins.n
dests.n
ODw = ODW(origins, dests)
print(ODw.n, 16*16)
ODw.full()[0].shape
Explanation: With an equal number of origins and destinations (n=16)
End of explanation
origins = ps.weights.lat2W(3,3)
dests = ps.weights.lat2W(5,5)
origins.n
dests.n
ODw = ODW(origins, dests)
print(ODw.n, 9*25)
ODw.full()[0].shape
Explanation: # With non-equal number of origins (n=9) and destinations (m=25)
End of explanation |
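An origin-destination weight like this can be thought of as a Kronecker product of the origin and destination matrices: there is one row per origin-destination pair, and two flows are neighbors when their origins are neighbors and their destinations are neighbors. A NumPy sketch of just that shape arithmetic (the rook lattice below is a tiny stand-in for ps.weights.lat2W, and np.kron illustrates the construction rather than reproducing the pysal implementation):

```python
import numpy as np

def lattice_adjacency(rows, cols):
    # Rook adjacency for a rows x cols lattice: cells sharing an edge are neighbors.
    n = rows * cols
    w = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                w[i, i + 1] = w[i + 1, i] = 1
            if r + 1 < rows:
                w[i, i + cols] = w[i + cols, i] = 1
    return w

w_o = lattice_adjacency(3, 3)     # 9 origins
w_d = lattice_adjacency(5, 5)     # 25 destinations
w_od = np.kron(w_o, w_d)          # one row per origin-destination pair
```

This is why ODw.n came out as 9*25 = 225 above: the flow space is the product of the two location spaces.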
8,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook we will add diffusion in addition to reactions. We will first study the simplest possible chemical reaction set
Step1: The diffusion follows Fick's law of diffusion
Step2: We will solve the partial differential equation (PDE) using method of lines. We discretize space into a series of bins (lines), in each of these bins we calculate the contribution of chemical reactions to the rate of change, and then add the diffusion contribution based on a finite difference estimate of the second derivative.
SymPy contains an algorithm to calculate finite difference weights
Step4: In this case, we are dealing with an equidistant grid and you may very well recognize this result from standard text books (it is actually quite easy to derive from the definition of the derivative).
The number of dependent variables in our ODE system is then the number of species multiplied by the number of bins. There is no need to create that many symbols, instead we rely on writing an outer loop calculating the local reactions rates. We create a new subclass of our ODEsys class from earlier to do this | Python Code:
reactions = [
('k', {'A': 1}, {'B': 1, 'A': -1}),
]
names, params = 'A B'.split(), ['k']
Explanation: In this notebook we will add diffusion in addition to reactions. We will first study the simplest possible chemical reaction set:
$$
A \overset{k}{\rightarrow} B
$$
we will consider a flat geometry where we assume the concentration is constant in all directions except one (giving us a one dimensional system with respect to space). Following the serialization format we introduced earlier:
End of explanation
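To make the serialization concrete: each tuple names a rate parameter, maps reactants to their stoichiometric orders, and maps species to their net change. Under mass-action kinetics those pieces are enough to evaluate each reaction's contribution to the concentrations' rates of change (a sketch of the convention only, not the actual code-generation machinery used in these notebooks):

```python
def rates(conc, reactions, params):
    # Mass-action kinetics: each reaction proceeds at rate k * prod(conc**order),
    # and every species changes by its net coefficient times that rate.
    dcdt = {species: 0.0 for species in conc}
    for (name, reactants, net), k in zip(reactions, params):
        rate = k
        for species, order in reactants.items():
            rate *= conc[species] ** order
        for species, change in net.items():
            dcdt[species] += change * rate
    return dcdt

# For A -> B with k = 0.5 and [A] = 2.0: d[A]/dt = -1.0, d[B]/dt = +1.0
d = rates({'A': 2.0, 'B': 0.0}, [('k', {'A': 1}, {'B': 1, 'A': -1})], [0.5])
```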
D = [8e-9, 8e-9] # He diffusion constant in water at room temperature
Explanation: The diffusion follows Fick's law of diffusion:
$$
\frac{\partial c_i}{\partial t} = D \frac{\partial^2 c_i}{\partial x^2}
$$
where $t$ is time, $c_i$ is the local concentration of species $i$, $x$ is the spatial variable and $D$ the diffusion constant. Note that a pure diffusion process is identical to the perhaps more well known heat equation. We will, however, also consider contributions ($r_i$) from chemical reactions:
$$
\frac{\partial c_i}{\partial t} = D \frac{\partial^2 c_i}{\partial x^2} + r_i
$$
We will set the diffusion constant ($m^2/s$ in SI units) equal for our two species in this example:
End of explanation
import sympy as sym
x, h = sym.symbols('x h')
d2fdx2 = sym.Function('f')(x).diff(x, 2)
d2fdx2.as_finite_difference([x-h, x, x+h], x).factor()
Explanation: We will solve the partial differential equation (PDE) using method of lines. We discretize space into a series of bins (lines), in each of these bins we calculate the contribution of chemical reactions to the rate of change, and then add the diffusion contribution based on a finite difference estimate of the second derivative.
SymPy contains an algorithm to calculate finite difference weights:
End of explanation
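The weights SymPy returns, $(f(x-h) - 2f(x) + f(x+h))/h^2$, are easy to sanity-check numerically against a function whose second derivative we know:

```python
import math

# f(x) = sin(x) has a known second derivative f''(x) = -sin(x).
f, x0, h = math.sin, 0.7, 1e-4
approx = (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2
exact = -math.sin(x0)
```

The agreement is to many digits for a modest h, which is why this stencil is the standard choice for the method of lines below.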
# %load ../scipy2017codegen/odesys_diffusion.py
from itertools import chain
import numpy as np
import matplotlib.pyplot as plt
from scipy2017codegen.odesys import ODEsys
class MOLsys(ODEsys):
"""System of ODEs based on method of lines on the interval x = [0, x_end]"""
def __init__(self, *args, **kwargs):
self.x_end = kwargs.pop('x_end')
self.n_lines = kwargs.pop('n_lines')
self.D = kwargs.pop('D')
self.dx = self.x_end / self.n_lines
super(MOLsys, self).__init__(*args, **kwargs)
def f_eval(self, y, t, *params):
f_out = np.empty(self.ny*self.n_lines)
for i in range(self.n_lines):
slc = slice(i*self.ny, (i+1)*self.ny)
y_bis = self.second_derivatives_spatial(i, y, f_out[slc])
f_out[slc] *= self.D
f_out[slc] += self.lambdified_f(*chain(y[slc], params))
return f_out
def central_reference_bin(self, i):
return np.clip(i, 1, self.ny - 2)
def j_eval(self, y, t, *params):
j_out = np.zeros((self.ny*self.n_lines, self.ny*self.n_lines)) # dense matrix
for i in range(self.n_lines):
slc = slice(i*self.ny, (i+1)*self.ny)
j_out[slc, slc] = self.lambdified_j(*chain(y[slc], params))
k = self.central_reference_bin(i)
for j in range(self.ny):
j_out[i*self.ny + j, (k-1)*self.ny + j] += self.D[j]/self.dx**2
j_out[i*self.ny + j, (k )*self.ny + j] += -2*self.D[j]/self.dx**2
j_out[i*self.ny + j, (k+1)*self.ny + j] += self.D[j]/self.dx**2
return j_out
def second_derivatives_spatial(self, i, y, out):
k = self.central_reference_bin(i)
for j in range(self.ny):
left = y[(k-1)*self.ny + j]
cent = y[(k )*self.ny + j]
rght = y[(k+1)*self.ny + j]
out[j] = (left - 2*cent + rght)/self.dx**2
def integrate(self, tout, y0, params=(), **kwargs):
y0 = np.array(np.vstack(y0).T.flat)
yout, info = super(MOLsys, self).integrate(tout, y0, params, **kwargs)
return yout.reshape((tout.size, self.n_lines, self.ny)).transpose((0, 2, 1)), info
def x_centers(self):
return np.linspace(self.dx/2, self.x_end - self.dx/2, self.n_lines)
def plot_result(self, tout, yout, info=None, ax=None):
ax = ax or plt.subplot(1, 1, 1)
x_lines = self.x_centers()
for i, t in enumerate(tout):
for j in range(self.ny):
c = [0.0, 0.0, 0.0]
c[j] = t/tout[-1]
plt.plot(x_lines, yout[i, j, :], color=c)
self.print_info(info)
from scipy2017codegen.chem import mk_rsys
molsys = mk_rsys(MOLsys, reactions, names, params, x_end=0.01, n_lines=50, D=D)
xc = molsys.x_centers()
xm = molsys.x_end/2
A0 = np.exp(-1e6*(xc-xm)**2)
B0 = np.zeros_like(A0)
tout = np.linspace(0, 30, 40)
yout, info = molsys.integrate(tout, [A0, B0], [0.00123])
yout.shape
%matplotlib inline
molsys.plot_result(tout[::10], yout[::10, ...], info)
Explanation: In this case, we are dealing with an equidistant grid and you may very well recognize this result from standard text books (it is actually quite easy to derive from the definition of the derivative).
The number of dependent variables in our ODE system is then the number of species multiplied by the number of bins. There is no need to create that many symbols; instead, we rely on writing an outer loop calculating the local reaction rates. We create a new subclass of our ODEsys class from earlier to do this:
End of explanation |
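The central second-difference stencil used in `second_derivatives_spatial` above can be tried out on its own. A minimal standalone sketch (plain Python, independent of the `MOLsys` class; the grid spacing and test function are made up for illustration):

```python
# Central second difference on an equidistant grid:
# d2y/dx2 at bin k ~= (y[k-1] - 2*y[k] + y[k+1]) / dx**2
def second_derivative(y, dx):
    # only interior bins have both neighbours, as in central_reference_bin
    return [(y[k - 1] - 2 * y[k] + y[k + 1]) / dx**2
            for k in range(1, len(y) - 1)]

dx = 0.1
xs = [k * dx for k in range(6)]
ys = [x**2 for x in xs]  # second derivative of x**2 is exactly 2
print(second_derivative(ys, dx))
```

For a quadratic the stencil is exact up to floating-point round-off, which makes it a convenient sanity check.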
8,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be
Step1: Import the dataset
Import the dataset and store it to a variable called iris. This dataset is similar to a python dictionary, with the keys
Step2: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of sepal length (x-axis) vs sepal width (y-axis). The colors of the datapoints correspond to the labeled species of iris for that example.
After plotting, look at the data. What do you notice about the way it is arranged?
Step3: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, showing the petal width vs the petal length. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
How is the data arranged differently? Do you think these additional features would be helpful in determining the species to which a new iris plant should be assigned?
What about plotting other feature combinations, like petal length vs sepal length?
Once you've plotted the data several different ways, think about how you would predict the species of a new iris plant, given only the length and width of its sepals and petals.
Step4: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
* Training Set - Portion of the data used to train a machine learning algorithm. These are the examples that the computer will learn from in order to try to predict data labels.
* Testing Set - Portion of the data (usually 10-30%) not used in training, used to evaluate performance. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
* Validation Set - (Optional) A third section of data used for parameter tuning or classifier selection. When selecting among many classifiers, or when a classifier parameter must be adjusted (tuned), this data is used like a test set to select the best parameter value(s). The final performance is then evaluated on the remaining, previously unused, testing set.
Creating training and testing sets
Below, we create a training and testing set from the iris dataset using the train_test_split() function.
Step5: Create validation set using crossvalidation
Crossvalidation allows us to use as much of our data as possible for training without training on our test data. We use it to split our training set into training and validation sets.
* Divide data into multiple equal sections (called folds)
* Hold one fold out for validation and train on the other folds
* Repeat using each fold as validation
The KFold() function returns an iterable with pairs of indices for training and testing data. | Python Code:
# Print figures in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets # Import datasets from scikit-learn
# Import patch for drawing rectangles in the legend
from matplotlib.patches import Rectangle
# Create color maps
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# Create a legend for the colors, using rectangles for the corresponding colormap colors
labelList = []
for color in cmap_bold.colors:
labelList.append(Rectangle((0, 0), 1, 1, fc=color))
Explanation: What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be: the square footage, the number of bedrooms, or the number of bathrooms. Some features are more useful than others. When predicting the list price of a house the number of bedrooms is a useful feature while the color of the walls is not, even though they both describe the house.
Features are sometimes specified as a single element of an example, $x_i$
What is a label?
A label identifies a piece of information about an example that is of particular interest. In machine learning, the label is the information we want the computer to learn to predict. In our housing example, the label would be the list price of the house.
Labels can be continuous (e.g. price, length, width) or they can be a category label (e.g. color, species of plant/animal). They are typically specified by the letter $y$.
The Iris Dataset
Here, we use the Iris dataset, available through scikit-learn. Scikit-learn's explanation of the dataset is here.
This dataset contains information on three species of iris flowers: Setosa, Versicolour, and Virginica.
|<img src="Images/Setosa.jpg" width=200>|<img src="Images/Versicolor.jpg" width=200>|<img src="Images/Virginica.jpg" width=200>|
|:-------------------------------------:|:-----------------------------------------:|:----------------------------------------:|
| Iris Setosa source | Iris Versicolour source | Iris Virginica source |
Each example has four features (or measurements): sepal length, sepal width, petal length, and petal width. All measurements are in cm.
|<img src="Images/Petal-sepal.jpg" width=200>|
|:------------------------------------------:|
|Petal and sepal of a primrose plant. From wikipedia|
Examples
The dataset consists of 150 examples, 50 examples from each species of iris.
Features
The features are the columns of the dataset. In order from left to right (or 0-3) they are: sepal length, sepal width, petal length, and petal width.
Target
The target value is the species of Iris. The three species are Setosa, Versicolour, and Virginica.
Our goal
The goal, for this dataset, is to train a computer to predict the species of a new iris plant, given only the measured length and width of its sepal and petal.
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), ListedColormap (for plotting colors), and datasets (to download the iris dataset from scikit-learn).
Also create the color maps to use to color the plotted data, and "labelList", which is a list of colored rectangles to use in plotted legends
End of explanation
# Import some data to play with
iris = datasets.load_iris()
# List the data keys
print('Keys: ' + str(iris.keys()))
print('Label names: ' + str(iris.target_names))
print('Feature names: ' + str(iris.feature_names))
print('')
# Store the labels (y), label names, features (X), and feature names
y = iris.target # Labels are stored in y as numbers
labelNames = iris.target_names # Species names corresponding to labels 0, 1, and 2
X = iris.data
featureNames = iris.feature_names
# Show the first five examples
print(X[:5,:])
Explanation: Import the dataset
Import the dataset and store it to a variable called iris. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target_names', 'target', 'data', 'feature_names']
The data features are stored in iris.data, where each row is an example from a single flower, and each column is a single feature. The feature names are stored in iris.feature_names. Labels are stored as the numbers 0, 1, or 2 in iris.target, and the names of these labels are in iris.target_names.
End of explanation
# Plot the data
# Sepal length and width
X_sepal = X[:,:2]
# Get the minimum and maximum values with an additional 0.5 border
x_min, x_max = X_sepal[:, 0].min() - .5, X_sepal[:, 0].max() + .5
y_min, y_max = X_sepal[:, 1].min() - .5, X_sepal[:, 1].max() + .5
plt.figure(figsize=(8, 6))
# Plot the training points
plt.scatter(X_sepal[:, 0], X_sepal[:, 1], c=y, cmap=cmap_bold)
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
plt.title('Sepal width vs length')
# Set the plot limits
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(labelList, labelNames)
plt.show()
Explanation: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of sepal length (x-axis) vs sepal width (y-axis). The colors of the datapoints correspond to the labeled species of iris for that example.
After plotting, look at the data. What do you notice about the way it is arranged?
End of explanation
# Put your code here!
Explanation: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, showing the petal width vs the petal length. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
How is the data arranged differently? Do you think these additional features would be helpful in determining the species to which a new iris plant should be assigned?
What about plotting other feature combinations, like petal length vs sepal length?
Once you've plotted the data several different ways, think about how you would predict the species of a new iris plant, given only the length and width of its sepals and petals.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print('Original dataset size: ' + str(X.shape))
print('Training dataset size: ' + str(X_train.shape))
print('Test dataset size: ' + str(X_test.shape))
Explanation: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
* Training Set - Portion of the data used to train a machine learning algorithm. These are the examples that the computer will learn from in order to try to predict data labels.
* Testing Set - Portion of the data (usually 10-30%) not used in training, used to evaluate performance. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
* Validation Set - (Optional) A third section of data used for parameter tuning or classifier selection. When selecting among many classifiers, or when a classifier parameter must be adjusted (tuned), this data is used like a test set to select the best parameter value(s). The final performance is then evaluated on the remaining, previously unused, testing set.
Creating training and testing sets
Below, we create a training and testing set from the iris dataset using the train_test_split() function.
End of explanation
from sklearn.model_selection import KFold
# Older versions of scikit learn used n_folds instead of n_splits
kf = KFold(n_splits=5)
for trainInd, valInd in kf.split(X_train):
X_tr = X_train[trainInd,:]
y_tr = y_train[trainInd]
X_val = X_train[valInd,:]
y_val = y_train[valInd]
print("%s %s" % (X_tr.shape, X_val.shape))
Explanation: Create validation set using crossvalidation
Crossvalidation allows us to use as much of our data as possible for training without training on our test data. We use it to split our training set into training and validation sets.
* Divide data into multiple equal sections (called folds)
* Hold one fold out for validation and train on the other folds
* Repeat using each fold as validation
The KFold() function returns an iterable with pairs of indices for training and testing data.
End of explanation |
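To make the fold mechanics described above concrete, here is a minimal pure-Python sketch of the index pairs a 5-fold split produces (no scikit-learn required; `kfold_indices` is a hypothetical helper written just for illustration):

```python
def kfold_indices(n, n_splits):
    # Split range(n) into n_splits contiguous folds; yield (train, validation)
    # index lists, with each fold used exactly once for validation.
    fold_sizes = [n // n_splits + (1 if i < n % n_splits else 0)
                  for i in range(n_splits)]
    indices, start = list(range(n)), 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

for train, val in kfold_indices(10, 5):
    print(len(train), len(val))  # 8 2 on every iteration
```

Every example appears in exactly one validation fold, and the train/validation lists are always disjoint, which is the property that lets cross-validation use all of the training data without evaluating on seen examples.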
8,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
R-CNN is a state-of-the-art detector that classifies region proposals by a finetuned Caffe model. For the full details of the R-CNN system and model, refer to its project site and the paper
Step1: This run was in GPU mode. For CPU mode detection, call detect.py without the --gpu argument.
Running this outputs a DataFrame with the filenames, selected windows, and their detection scores to an HDF5 file.
(We only ran on one image, so the filenames will all be the same.)
Step2: 1570 regions were proposed with the R-CNN configuration of selective search. The number of proposals will vary from image to image based on its contents and size -- selective search isn't scale invariant.
In general, detect.py is most efficient when running on a lot of images
Step3: Let's look at the activations.
Step4: Now let's take max across all windows and plot the top classes.
Step5: The top detections are in fact a person and bicycle.
Picking good localizations is a work in progress; we pick the top-scoring person and bicycle detections.
Step7: That's cool. Let's take all 'bicycle' detections and NMS them to get rid of overlapping windows.
Step8: Show top 3 NMS'd detections for 'bicycle' in the image and note the gap between the top scoring box (red) and the remaining boxes.
Step9: This was an easy instance for bicycle as it was in the class's training set. However, the person result is a true detection since this was not in the set for that class.
You should try out detection on an image of your own next!
(Remove the temp directory to clean up, and we're done.) | Python Code:
!mkdir -p _temp
!echo `pwd`/images/fish-bike.jpg > _temp/det_input.txt
!../python/detect.py --crop_mode=selective_search --pretrained_model=../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel --model_def=../models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt --gpu --raw_scale=255 _temp/det_input.txt _temp/det_output.h5
Explanation: R-CNN is a state-of-the-art detector that classifies region proposals by a finetuned Caffe model. For the full details of the R-CNN system and model, refer to its project site and the paper:
Rich feature hierarchies for accurate object detection and semantic segmentation. Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik. CVPR 2014. Arxiv 2013.
In this example, we do detection by a pure Caffe edition of the R-CNN model for ImageNet. The R-CNN detector outputs class scores for the 200 detection classes of ILSVRC13. Keep in mind that these are raw one vs. all SVM scores, so they are not probabilistically calibrated or exactly comparable across classes. Note that this off-the-shelf model is simply for convenience, and is not the full R-CNN model.
Let's run detection on an image of a bicyclist riding a fish bike in the desert (from the ImageNet challenge—no joke).
First, we'll need region proposals and the Caffe R-CNN ImageNet model:
Selective Search is the region proposer used by R-CNN. The selective_search_ijcv_with_python Python module takes care of extracting proposals through the selective search MATLAB implementation. To install it, download the module and name its directory selective_search_ijcv_with_python, run the demo in MATLAB to compile the necessary functions, then add it to your PYTHONPATH for importing. (If you have your own region proposals prepared, or would rather not bother with this step, detect.py accepts a list of images and bounding boxes as CSV.)
Run ./scripts/download_model_binary.py models/bvlc_reference_rcnn_ilsvrc13 to get the Caffe R-CNN ImageNet model.
With that done, we'll call the bundled detect.py to generate the region proposals and run the network. For an explanation of the arguments, do ./detect.py --help.
End of explanation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_hdf('_temp/det_output.h5', 'df')
print(df.shape)
print(df.iloc[0])
Explanation: This run was in GPU mode. For CPU mode detection, call detect.py without the --gpu argument.
Running this outputs a DataFrame with the filenames, selected windows, and their detection scores to an HDF5 file.
(We only ran on one image, so the filenames will all be the same.)
End of explanation
with open('../data/ilsvrc12/det_synset_words.txt') as f:
labels_df = pd.DataFrame([
{
'synset_id': l.strip().split(' ')[0],
'name': ' '.join(l.strip().split(' ')[1:]).split(',')[0]
}
for l in f.readlines()
])
labels_df = labels_df.sort_values('synset_id')  # sort_values replaces the long-deprecated DataFrame.sort
predictions_df = pd.DataFrame(np.vstack(df.prediction.values), columns=labels_df['name'])
print(predictions_df.iloc[0])
Explanation: 1570 regions were proposed with the R-CNN configuration of selective search. The number of proposals will vary from image to image based on its contents and size -- selective search isn't scale invariant.
In general, detect.py is most efficient when running on a lot of images: it first extracts window proposals for all of them, batches the windows for efficient GPU processing, and then outputs the results.
Simply list an image per line in the images_file, and it will process all of them.
Although this guide gives an example of R-CNN ImageNet detection, detect.py is clever enough to adapt to different Caffe models’ input dimensions, batch size, and output categories. You can switch the model definition and pretrained model as desired. Refer to python detect.py --help for the parameters to describe your data set. There's no need for hardcoding.
Anyway, let's now load the ILSVRC13 detection class names and make a DataFrame of the predictions. Note you'll need the auxiliary ilsvrc2012 data fetched by data/ilsvrc12/get_ilsvrc12_aux.sh.
End of explanation
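Since detect.py simply reads one image path per line, batching a whole directory is just a matter of writing all the paths into the input file. A small helper sketch (hypothetical convenience code, not part of Caffe):

```python
from pathlib import Path

def write_image_list(image_dir, list_file):
    # One image path per line -- the format detect.py's images_file expects.
    paths = sorted(str(p) for p in Path(image_dir).glob("*.jpg"))
    Path(list_file).write_text("".join(p + "\n" for p in paths))
    return len(paths)
```

The returned count is handy for checking that the glob matched what you expected before kicking off a long detection run.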
plt.gray()
plt.matshow(predictions_df.values)
plt.xlabel('Classes')
plt.ylabel('Windows')
Explanation: Let's look at the activations.
End of explanation
max_s = predictions_df.max(0)
max_s = max_s.sort_values(ascending=False)  # in-place Series.sort was removed in modern pandas
print(max_s[:10])
Explanation: Now let's take max across all windows and plot the top classes.
End of explanation
# Find, print, and display the top detections: person and bicycle.
i = predictions_df['person'].argmax()
j = predictions_df['bicycle'].argmax()
# Show top predictions for top detection.
f = pd.Series(df['prediction'].iloc[i], index=labels_df['name'])
print('Top detection:')
print(f.sort_values(ascending=False)[:5])
print('')
# Show top predictions for second-best detection.
f = pd.Series(df['prediction'].iloc[j], index=labels_df['name'])
print('Second-best detection:')
print(f.sort_values(ascending=False)[:5])
# Show top detection in red, second-best top detection in blue.
im = plt.imread('images/fish-bike.jpg')
plt.imshow(im)
currentAxis = plt.gca()
det = df.iloc[i]
coords = (det['xmin'], det['ymin']), det['xmax'] - det['xmin'], det['ymax'] - det['ymin']
currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor='r', linewidth=5))
det = df.iloc[j]
coords = (det['xmin'], det['ymin']), det['xmax'] - det['xmin'], det['ymax'] - det['ymin']
currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor='b', linewidth=5))
Explanation: The top detections are in fact a person and bicycle.
Picking good localizations is a work in progress; we pick the top-scoring person and bicycle detections.
End of explanation
def nms_detections(dets, overlap=0.3):
"""Non-maximum suppression: Greedily select high-scoring detections and
skip detections that are significantly covered by a previously
selected detection.
This version is translated from Matlab code by Tomasz Malisiewicz,
who sped up Pedro Felzenszwalb's code.
Parameters
----------
dets: ndarray
    each row is ['xmin', 'ymin', 'xmax', 'ymax', 'score']
overlap: float
    minimum overlap ratio (0.3 default)
Output
------
dets: ndarray
    remaining after suppression.
"""
x1 = dets[:, 0]
y1 = dets[:, 1]
x2 = dets[:, 2]
y2 = dets[:, 3]
ind = np.argsort(dets[:, 4])
w = x2 - x1
h = y2 - y1
area = (w * h).astype(float)
pick = []
while len(ind) > 0:
i = ind[-1]
pick.append(i)
ind = ind[:-1]
xx1 = np.maximum(x1[i], x1[ind])
yy1 = np.maximum(y1[i], y1[ind])
xx2 = np.minimum(x2[i], x2[ind])
yy2 = np.minimum(y2[i], y2[ind])
w = np.maximum(0., xx2 - xx1)
h = np.maximum(0., yy2 - yy1)
wh = w * h
o = wh / (area[i] + area[ind] - wh)
ind = ind[np.nonzero(o <= overlap)[0]]
return dets[pick, :]
scores = predictions_df['bicycle']
windows = df[['xmin', 'ymin', 'xmax', 'ymax']].values
dets = np.hstack((windows, scores[:, np.newaxis]))
nms_dets = nms_detections(dets)
Explanation: That's cool. Let's take all 'bicycle' detections and NMS them to get rid of overlapping windows.
End of explanation
plt.imshow(im)
currentAxis = plt.gca()
colors = ['r', 'b', 'y']
for c, det in zip(colors, nms_dets[:3]):
currentAxis.add_patch(
plt.Rectangle((det[0], det[1]), det[2]-det[0], det[3]-det[1],
fill=False, edgecolor=c, linewidth=5)
)
print('scores:', nms_dets[:3, 4])
Explanation: Show top 3 NMS'd detections for 'bicycle' in the image and note the gap between the top scoring box (red) and the remaining boxes.
End of explanation
!rm -rf _temp
Explanation: This was an easy instance for bicycle as it was in the class's training set. However, the person result is a true detection since this was not in the set for that class.
You should try out detection on an image of your own next!
(Remove the temp directory to clean up, and we're done.)
End of explanation |
8,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inductors
Jupyter Notebook developed by Gustavo S.S.
An inductor consists of a coil of conducting wire.
Any conductor of electric current has inductive properties and can be considered an inductor. But to enhance the inductive effect, a practical inductor is usually formed as a cylindrical coil with many turns of conducting wire, as illustrated in Figure 6.21.
When a current passes through an inductor, the voltage across it is found to be directly proportional to the rate of change of the current:
\begin{align}
{\Large v = L \frac{di}{dt}}
\end{align}
where L is the constant of proportionality, called the inductance of the inductor.
Inductance is the property whereby an inductor opposes a change in the current flowing through it, measured in henrys (H).
The inductance of an inductor depends on its physical dimensions and construction:
\begin{align}
{\Large L = \frac{N^2 µ A}{l}}
\end{align}
where N is the number of turns, l is the length, A is the cross-sectional area, and µ is the magnetic permeability of the core.
Voltage-Current Relationship
Step1: Practice Problem 6.8
If the current through a 1 mH inductor is i(t) = 60 cos(100t) mA, find the terminal voltage and the energy stored.
Step2: Example 6.9
Find the current through a 5 H inductor if the voltage across it is
v(t)
Step3: Practice Problem 6.9
The terminal voltage of a 2 H inductor is v = 10(1 – t) V. Find the current
flowing through it at t = 4 s and the energy stored in it at t = 4 s.
Assume i(0) = 2 A.
Step4: Example 6.10
Consider the circuit in Figure 6.27a. Under DC conditions, find
Step5: Practice Problem 6.10
Determine vC, iL, and the energy stored in the capacitor and the inductor in the circuit
of Figure 6.28 under DC conditions.
Step6: Inductors in Series and Parallel
The equivalent inductance of series-connected inductors is the sum of the
individual inductances.
\begin{align}
L_{eq} = L_1 + L_2 + ... + L_N = \sum_{i = 1}^{N}L_i
\end{align}
The equivalent inductance of parallel inductors is the reciprocal of the sum of the
reciprocals of the individual inductances.
\begin{align}
\frac{1}{L_{eq}} = \frac{1}{L_1} + \frac{1}{L_2} + ... + \frac{1}{L_N}, \quad L_{eq} = (\sum_{i = 1}^{N} \frac{1}{L_i})^{-1}
\end{align}
Or, for two inductances
Step7: Practice Problem 6.11
Calculate the equivalent inductance for the ladder inductive circuit in Figure 6.32.
Step8: Example 6.12
For the circuit in Figure 6.33,
i(t) = 4(2 – e–10t) mA.
If i2(0) = –1 mA, determine
Step9: Practice Problem 6.12
In the circuit of Figure 6.34,
i1(t) = 0.6e–2t A.
If i(0) = 1.4 A, determine | Python Code:
print("Exemplo 6.8")
import numpy as np
from sympy import *
L = 0.1
t = symbols('t')
i = 10*t*exp(-5*t)
v = L*diff(i,t)
w = (L*i**2)/2
print("Tensão no indutor:",v,"V")
print("Energia:",w,"J")
Explanation: Inductors
Jupyter Notebook developed by Gustavo S.S.
An inductor consists of a coil of conducting wire.
Any conductor of electric current has inductive properties and can be considered an inductor. But to enhance the inductive effect, a practical inductor is usually formed as a cylindrical coil with many turns of conducting wire, as illustrated in Figure 6.21.
When a current passes through an inductor, the voltage across it is found to be directly proportional to the rate of change of the current:
\begin{align}
{\Large v = L \frac{di}{dt}}
\end{align}
where L is the constant of proportionality, called the inductance of the inductor.
Inductance is the property whereby an inductor opposes a change in the current flowing through it, measured in henrys (H).
The inductance of an inductor depends on its physical dimensions and construction:
\begin{align}
{\Large L = \frac{N^2 µ A}{l}}
\end{align}
where N is the number of turns, l is the length, A is the cross-sectional area, and µ is the magnetic permeability of the core.
Voltage-Current Relationship:
\begin{align}
{\Large i = \frac{1}{L} \int_{t_0}^{t} v(τ)dτ + i(t_0)}
\end{align}
Power Released by the Inductor:
\begin{align}
{\Large p = vi = (L \frac{di}{dt})i}
\end{align}
Stored Energy:
\begin{align}
{\Large w = \int_{-∞}^{t} p(τ)dτ = L \int_{-∞}^{t} \frac{di}{dτ} i \, dτ = L \int_{i(-∞)}^{i(t)} i \, di}
\end{align}
\begin{align}
{\Large w = \frac{1}{2} Li^2}
\end{align}
An inductor acts like a short circuit to DC.
The current through an inductor cannot change instantaneously.
Like the ideal capacitor, the ideal inductor does not dissipate energy; the energy stored in it can be recovered later. The inductor absorbs power from the circuit when it is storing energy and delivers power to the circuit when it returns the previously stored energy.
A real, non-ideal inductor has a significant resistive component, as shown in Figure 6.26. This is because the inductor is made of a conducting material such as copper, which has some resistance, called the winding resistance Rw, which appears in series with the inductance of the inductor. The presence of Rw makes it both an energy-storage device and an energy-dissipating device. Since Rw is usually very small, it is ignored in most cases. The non-ideal inductor also has a winding capacitance Cw due to the capacitive coupling between the conducting coils. Cw is very small and can be ignored in most cases, except at high frequencies.
Example 6.8
The current through a 0.1 H inductor is i(t) = 10te–5t A. Find the voltage across the
inductor and the energy stored in it.
End of explanation
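As a quick numeric illustration of the stored-energy formula above, the integral of p = L·(di/dt)·i can be approximated with a Riemann sum and compared against (1/2)·L·i² (plain Python; the ramp current and values are chosen arbitrarily for the check):

```python
# Energy check: integrate p = L*(di/dt)*i numerically for a current ramp
# i(t) = 2t on 0 <= t <= 1 s with L = 0.1 H, and compare with (1/2)*L*i**2.
L = 0.1
dt = 1e-5
w_numeric = sum(L * 2.0 * (2.0 * k * dt) * dt for k in range(int(1 / dt)))
w_formula = 0.5 * L * (2.0 * 1.0) ** 2  # i(1) = 2 A
print(w_numeric, w_formula)  # both close to 0.2 J
```

The two agree to within the discretization error of the sum, as the formula w = (1/2)Li² predicts.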
print("Problema Prático 6.8")
m = 10**-3 #definition of milli
L = 1*m
i = 60*cos(100*t)*m
v = L*diff(i,t)
w = (L*i**2)/2
print("Tensão:",v,"V")
print("Energia:",w,"J")
Explanation: Practice Problem 6.8
If the current through a 1 mH inductor is i(t) = 60 cos(100t) mA, find the terminal voltage and the energy stored.
End of explanation
print("Exemplo 6.9")
L = 5
v = 30*t**2
i = integrate(v,t)/L
print("Corrente:",i,"A")
w = L*(i.subs(t,5)**2)/2
print("Energia:",w,"J")
Explanation: Example 6.9
Find the current through a 5 H inductor if the voltage across it is
v(t):
30t^2, t>0
0, t<0
Also, find the energy stored at t = 5 s. Assume i(0) = 0.
End of explanation
print("Problema Prático 6.9")
L = 2
v = 10*(1 - t)
i0 = 2
i = integrate(v,t)/L + i0
i4 = i.subs(t,4)
print("Corrente no instante t = 4s:",i4,"A")
p = v*i
w = integrate(p,(t,0,4))
print("Energia no instante t = 4s:",w,"J")
Explanation: Practice Problem 6.9
The terminal voltage of a 2 H inductor is v = 10(1 – t) V. Find the current
flowing through it at t = 4 s and the energy stored in it at t = 4 s.
Assume i(0) = 2 A.
End of explanation
print("Exemplo 6.10")
Req = 1 + 5
Vf = 12
C = 1
L = 2
i = Vf/Req
print("Corrente i:",i,"A")
#vc = voltage across the capacitor = voltage across the 5 ohm resistor
vc = 5*i
print("Tensão Vc:",vc,"V")
print("Corrente il:",i,"A")
wl = (L*i**2)/2
wc = (C*vc**2)/2
print("Energia no Indutor:",wl,"J")
print("Energia no Capacitor:",wc,"J")
Explanation: Example 6.10
Consider the circuit in Figure 6.27a. Under DC conditions, find:
(a) i, vC and iL;
(b) the energy stored in the capacitor and the inductor.
End of explanation
print("Problema Prático 6.10")
Cf = 10
C = 4
L = 6
il = 10*6/(6 + 2) #current divider
vc = 2*il
wl = (L*il**2)/2
wc = (C*vc**2)/2
print("Corrente il:",il,"A")
print("Tensão vC:",vc,"V")
print("Energia no Capacitor:",wc,"J")
print("Energia no Indutor:",wl,"J")
Explanation: Practice Problem 6.10
Determine vC, iL, and the energy stored in the capacitor and the inductor in the circuit
of Figure 6.28 under DC conditions.
End of explanation
print("Exemplo 6.11")
Leq1 = 20 + 12 + 10
Leq2 = Leq1*7/(Leq1 + 7)
Leq3 = 4 + Leq2 + 8
print("Indutância Equivalente:",Leq3,"H")
Explanation: Inductors in Series and Parallel
The equivalent inductance of series-connected inductors is the sum of the
individual inductances.
\begin{align}
L_{eq} = L_1 + L_2 + ... + L_N = \sum_{i = 1}^{N}L_i
\end{align}
The equivalent inductance of parallel inductors is the reciprocal of the sum of the
reciprocals of the individual inductances.
\begin{align}
\frac{1}{L_{eq}} = \frac{1}{L_1} + \frac{1}{L_2} + ... + \frac{1}{L_N}, \quad L_{eq} = (\sum_{i = 1}^{N} \frac{1}{L_i})^{-1}
\end{align}
Or, for two inductances:
\begin{align}
L_{eq} = \frac{L_1 L_2}{L_1 + L_2}
\end{align}
Example 6.11
Determine the equivalent inductance of the circuit shown in Figure 6.31.
End of explanation
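A quick numerical sanity check of the two-inductance formula above (plain Python; the values 42 H and 7 H mirror the 20+12+10 H series branch in parallel with the 7 H inductor from Example 6.11):

```python
# For two parallel inductances the product-over-sum form and the
# reciprocal-of-sum-of-reciprocals form must agree.
L1, L2 = 42.0, 7.0
product_over_sum = L1 * L2 / (L1 + L2)
reciprocal_form = 1 / (1 / L1 + 1 / L2)
print(product_over_sum, reciprocal_form)  # ~6.0 for both
```

Adding the series 4 H and 8 H inductors to this 6 H then reproduces the 18 H equivalent computed by the code above.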
print("Problema Prático 6.11")
def Leq(x,y): #function to compute the equivalent of two parallel inductances
L = x*y/(x + y)
return L
Leq1 = 40*m + 20*m
Leq2 = Leq(30*m,Leq1)
Leq3 = Leq2 + 100*m
Leq4 = Leq(40*m,Leq3)
Leq5 = 20*m + Leq4
Leq6 = Leq(Leq5,50*m)
print("Indutância Equivalente:",Leq6,"H")
Explanation: Practice Problem 6.11
Calculate the equivalent inductance for the ladder inductive circuit in Figure 6.32.
End of explanation
print("Exemplo 6.12")
i = 4*(2 - exp(-10*t))*m
i2_0 = -1*m
i1_0 = i.subs(t,0) - i2_0
print("Corrente i1(0):",i1_0,"A")
Leq1 = Leq(4,12)
Leq2 = Leq1 + 2
v = Leq2*diff(i,t)
v1 = 2*diff(i,t)
v2 = v - v1
print("Tensão v(t):",v,"V")
print("Tensão v1(t):",v1,"V")
print("Tensão v2(t):",v2,"V")
i1 = integrate(v1,(t,0,t))/4 + i1_0
i2 = integrate(v2,(t,0,t))/12 + i2_0
print("Corrente i1(t):",i1,"A")
print("Corrente i2(t):",i2,"A")
Explanation: Example 6.12
For the circuit of Figure 6.33,
i(t) = 4(2 - e^(-10t)) mA.
If i2(0) = -1 mA, determine:
(a) i1(0);
(b) v(t), v1(t) and v2(t);
(c) i1(t) and i2(t).
End of explanation
print("Problema Prático 6.12")
i1 = 0.6*exp(-2*t)
i_0 = 1.4
i2_0 = i_0 - i1.subs(t,0)
print("Corrente i2(0):",i2_0,"A")
v1 = 6*diff(i1,t)
i2 = integrate(v1,(t,0,t))/3 + i2_0
i = i1 + i2
print("Corrente i2(t):",i2,"A")
print("Corrente i(t):",i,"A")
Leq1 = Leq(3,6)
Leq2 = Leq1 + 8
v = Leq2*diff(i,t)
v2 = v - v1
print("Tensão v1(t):",v1,"V")
print("Tensão v2(t):",v2,"V")
print("Tensão v(t):",v,"V")
Explanation: Practice Problem 6.12
In the circuit of Figure 6.34,
i1(t) = 0.6e^(-2t) A.
If i(0) = 1.4 A, determine:
(a) i2(0);
(b) i2(t) and i(t);
(c) v1(t), v2(t) and v(t).
End of explanation |
8,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Basic-Data-Structure" data-toc-modified-id="Basic-Data-Structure-1"><span class="toc-item-num">1 </span>Basic Data Structure</a></span><ul class="toc-item"><li><span><a href="#Stack" data-toc-modified-id="Stack-1.1"><span class="toc-item-num">1.1 </span>Stack</a></span></li><li><span><a href="#Queue" data-toc-modified-id="Queue-1.2"><span class="toc-item-num">1.2 </span>Queue</a></span></li><li><span><a href="#Unordered-(Linked)List" data-toc-modified-id="Unordered-(Linked)List-1.3"><span class="toc-item-num">1.3 </span>Unordered (Linked)List</a></span></li><li><span><a href="#Ordered-(Linked)List" data-toc-modified-id="Ordered-(Linked)List-1.4"><span class="toc-item-num">1.4 </span>Ordered (Linked)List</a></span></li></ul></li></ul></div>
Step6: Basic Data Structure
Following the online book Problem Solving with Algorithms and Data Structures, Chapter 4 works through basic data structures and some of their use cases.
Stack
Step7: Write a function rev_string(mystr) that uses a stack to reverse the characters in a string.
Step9: Check for balanced parentheses.
Step11: Convert numbers into binary representation.
Step13: Convert operators from infix to postfix.
Step15: Queue
Step16: From Python Documentation
Step23: Unordered (Linked)List
Blog
Step26: Ordered (Linked)List
As the name suggests, compared to the unordered linked list, the elements of an ordered linked list are always kept in sorted order. The is_empty and size methods are exactly the same as in the unordered linked list, since they don't depend on the actual items in the list.
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[1])
%load_ext watermark
%watermark -a 'Ethen' -d -t -v -p jupyterthemes
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Basic-Data-Structure" data-toc-modified-id="Basic-Data-Structure-1"><span class="toc-item-num">1 </span>Basic Data Structure</a></span><ul class="toc-item"><li><span><a href="#Stack" data-toc-modified-id="Stack-1.1"><span class="toc-item-num">1.1 </span>Stack</a></span></li><li><span><a href="#Queue" data-toc-modified-id="Queue-1.2"><span class="toc-item-num">1.2 </span>Queue</a></span></li><li><span><a href="#Unordered-(Linked)List" data-toc-modified-id="Unordered-(Linked)List-1.3"><span class="toc-item-num">1.3 </span>Unordered (Linked)List</a></span></li><li><span><a href="#Ordered-(Linked)List" data-toc-modified-id="Ordered-(Linked)List-1.4"><span class="toc-item-num">1.4 </span>Ordered (Linked)List</a></span></li></ul></li></ul></div>
End of explanation
class Stack:
Last In First Out (LIFO)
creates a new stack that is empty,
assumes that the end of the list will hold
the top element of the stack
References
----------
https://docs.python.org/3/tutorial/datastructures.html#using-lists-as-stacks
def __init__(self):
self.items = []
def is_empty(self):
For sequences (strings, lists, tuples),
use the fact that empty sequences are falsy
http://stackoverflow.com/questions/53513/best-way-to-check-if-a-list-is-empty
return not self.items
def push(self, item):
adds a new item to the top of the stack
self.items.append(item)
def pop(self):
removes the top item from the stack,
popping an empty stack (list) will result in an error
return self.items.pop()
def peek(self):
returns the top item from the stack but does not remove it
return self.items[len(self.items) - 1]
def size(self):
return len(self.items)
s = Stack()
print(s.is_empty())
s.push(4)
s.push(5)
print(s.is_empty())
print(s.pop())
Explanation: Basic Data Structure
Following the online book Problem Solving with Algorithms and Data Structures, Chapter 4 works through basic data structures and some of their use cases.
Stack
End of explanation
def rev_string(string):
s = Stack()
for char in string:
s.push(char)
rev_str = ''
while not s.is_empty():
rev_str += s.pop()
return rev_str
test = 'apple'
rev_string(test)
Explanation: Write a function rev_string(mystr) that uses a stack to reverse the characters in a string.
End of explanation
def match(top, char):
if top == '[' and char == ']':
return True
if top == '{' and char == '}':
return True
if top == '(' and char == ')':
return True
return False
def check_paren(text):
check for balanced parentheses in an expression
# define the set of opening
# and closing brackets
opens = '([{'
close = ')]}'
s = Stack()
balanced = True
for char in text:
if char in opens:
s.push(char)
if char in close:
# if a closing bracket appeared
# without a opening bracket
if s.is_empty():
balanced = False
break
else:
# if there is a mismatch between the
# closing and opening bracket
top = s.pop()
if not match(top, char):
balanced = False
break
if balanced and s.is_empty():
return True
else:
return False
test1 = '{{([][])}()}'
balanced = check_paren(test1)
print(balanced)
test2 = '{test}'
balanced = check_paren(test2)
print(balanced)
test3 = ']'
balanced = check_paren(test3)
print(balanced)
Explanation: Check for balanced parentheses.
End of explanation
def convert_binary(num):
assumes positive number is given
s = Stack()
while num > 0:
remainder = num % 2
s.push(remainder)
num = num // 2
binary_str = ''
while not s.is_empty():
binary_str += str(s.pop())
return binary_str
num = 42
binary_str = convert_binary(num)
binary_str
Explanation: Convert numbers into binary representation.
End of explanation
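Python's built-in base conversion offers a quick cross-check of the stack-based version (a small aside, not from the book):

```python
num = 42
binary_str = format(num, 'b')  # built-in binary formatting
print(binary_str)              # '101010', same as convert_binary(42)
print(int(binary_str, 2))      # 42, round-trips back to the original
```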
import string
def infix_to_postfix(formula):
assume input formula is space-delimited
s = Stack()
output = []
prec = {'*': 3, '/': 3, '+': 2, '-': 2, '(': 1}
operand = string.digits + string.ascii_uppercase
for token in formula.split():
if token in operand:
output.append(token)
elif token == '(':
s.push(token)
elif token == ')':
top = s.pop()
while top != '(':
output.append(top)
top = s.pop()
else:
while (not s.is_empty()) and prec[s.peek()] >= prec[token]:  # >= keeps left-to-right order for equal-precedence operators
top = s.pop()
output.append(top)
s.push(token)
while not s.is_empty():
top = s.pop()
output.append(top)
postfix = ' '.join(output)
return postfix
formula = 'A + B * C'
output = infix_to_postfix(formula)
output
formula = '( A + B ) * C'
output = infix_to_postfix(formula)
output
Explanation: Convert operators from infix to postfix.
End of explanation
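A natural companion to the converter (sketched here, not part of the book excerpt above) is evaluating the resulting postfix expression with a stack:

```python
def eval_postfix(postfix):
    # Operands are pushed; an operator pops two operands and pushes the result
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for token in postfix.split():
        if token in ops:
            b = stack.pop()   # right operand is on top
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(int(token))
    return stack.pop()

print(eval_postfix('7 8 + 3 2 + /'))  # 3.0
```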
class Queue:
First In First Out (FIFO)
assume rear is at position 0 in the list,
so the last element of the list is the front
def __init__(self):
self.items = []
def is_empty(self):
return not self.items
def enqueue(self, item):
self.items.insert(0, item)
def dequeue(self):
return self.items.pop()
def size(self):
return len(self.items)
q = Queue()
q.enqueue(4)
q.enqueue(3)
q.enqueue(10)
# 4 is the first one added, hence it's the first
# one that gets popped
print(q.dequeue())
print(q.size())
Explanation: Queue
End of explanation
from collections import deque
def check_palindrome(word):
equal = True
    queue = deque(word)  # a string is already an iterable of characters
while len(queue) > 1 and equal:
first = queue.popleft()
last = queue.pop()
if first != last:
equal = False
return equal
test = 'radar'
check_palindrome(test)
Explanation: From Python Documentation: Using Lists as Queues
Although the queue functionality can be implemented using a list, it is not efficient for this purpose, because inserts and pops at the beginning of a list are slow (all of the other elements have to be shifted by one).
We instead use deque. It is designed to have fast appends and pops from both ends.
Implement the palindrome checker to check if a given word is a palindrome. A palindrome is a string that reads the same forward and backward, e.g. radar.
End of explanation
class Node:
node must contain the list item itself (data)
node must hold a reference that points to the next node
def __init__(self, initdata):
# None will denote the fact that there is no next node
self._data = initdata
self._next = None
def get_data(self):
return self._data
def get_next(self):
return self._next
def set_data(self, newdata):
self._data = newdata
def set_next(self, newnext):
self._next = newnext
class UnorderedList:
def __init__(self):
we need identify the position for the first node,
the head, When the list is first initialized
it has no nodes
self.head = None
def is_empty(self):
return self.head is None
def add(self, item):
takes data, initializes a new node with the given data
and add it to the list, the easiest way is to
place it at the head of the list and
point the new node at the old head
node = Node(item)
node.set_next(self.head)
self.head = node
return self
def size(self):
traverse the linked list and keep a count
of the number of nodes that occurred
count = 0
current = self.head
while current is not None:
count += 1
current = current.get_next()
return count
def search(self, item):
goes through the entire list to check
and see there's a matched value
found = False
current = self.head
while current is not None and not found:
if current.get_data() == item:
found = True
else:
current = current.get_next()
return found
def delete(self, item):
traverses the list in the same way that search does,
but this time we keep track of the current node and the previous node.
When delete finally arrives at the node it wants to delete,
it looks at the previous node and resets that previous node’s pointer
so that, rather than pointing to the soon-to-be-deleted node,
it will point to the next node in line;
this assumes item is in the list
found = False
previous = None
current = self.head
while current is not None and not found:
if current.get_data() == item:
found = True
else:
previous = current
current = current.get_next()
# note this assumes the item is in the list,
# if not we'll have to write another if else statement
# here to handle it
if previous is None:
# handle cases where the first item is deleted,
# i.e. where we are modifying the head
self.head = current.get_next()
else:
previous.set_next(current.get_next())
return self
def __repr__(self):
value = []
current = self.head
while current is not None:
data = str(current.get_data())
value.append(data)
current = current.get_next()
return '[' + ', '.join(value) + ']'
mylist = UnorderedList()
mylist.add(31)
mylist.add(17)
mylist.add(91)
mylist.delete(17)
print(mylist.search(35))
mylist
Explanation: Unordered (Linked)List
Blog: Implementing a Singly Linked List in Python
End of explanation
class OrderedList:
def __init__(self):
self.head = None
def add(self, item):
takes data, initializes a new node with the given data
and add it to the list while maintaining relative order
# stop when the current node is larger than the item
stop = False
previous = None
current = self.head
while current is not None and not stop:
if current.get_data() > item:
stop = True
else:
previous = current
current = current.get_next()
# check whether it will be added to the head
# or somewhether in the middle and handle
# both cases
node = Node(item)
if previous is None:
node.set_next(self.head)
self.head = node
else:
previous.set_next(node)
node.set_next(current)
return self
def search(self, item):
goes through the entire list to check
and see there's a matched value, but
we can stop if the current node's value
is greater than item since the list is sorted
stop = False
found = False
current = self.head
while current is not None and not found and not stop:
if current.get_data() == item:
found = True
else:
if current.get_data() > item:
stop = True
else:
current = current.get_next()
return found
def __repr__(self):
value = []
current = self.head
while current is not None:
data = str(current.get_data())
value.append(data)
current = current.get_next()
return '[' + ', '.join(value) + ']'
mylist = OrderedList()
mylist.add(17)
mylist.add(77)
mylist.add(31)
print(mylist.search(31))
mylist
Explanation: Ordered (Linked)List
As the name suggests, compared to the unordered linked list, the elements of an ordered linked list are always kept in sorted order. The is_empty and size methods are exactly the same as in the unordered linked list, since they don't depend on the actual items in the list.
End of explanation |
8,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Intro" data-toc-modified-id="Intro-1"><span class="toc-item-num">1 </span>Intro</a></span></li><li><span><a href="#1D-Automata" data-toc-modified-id="1D-Automata-2"><span class="toc-item-num">2 </span>1D Automata</a></span><ul class="toc-item"><li><span><a href="#Animation" data-toc-modified-id="Animation-2.1"><span class="toc-item-num">2.1 </span>Animation</a></span></li></ul></li><li><span><a href="#Conway’s-Game-Of-Life" data-toc-modified-id="Conway’s-Game-Of-Life-3"><span class="toc-item-num">3 </span>Conway’s Game Of Life</a></span><ul class="toc-item"><li><span><a href="#Animation-in-Matplotlib" data-toc-modified-id="Animation-in-Matplotlib-3.1"><span class="toc-item-num">3.1 </span>Animation in Matplotlib</a></span><ul class="toc-item"><li><span><a href="#Interactive-Animation" data-toc-modified-id="Interactive-Animation-3.1.1"><span class="toc-item-num">3.1.1 </span>Interactive Animation</a></span></li></ul></li><li><span><a href="#Game-of-Life-3D" data-toc-modified-id="Game-of-Life-3D-3.2"><span class="toc-item-num">3.2 </span>Game of Life 3D</a></span></li><li><span><a href="#Performances-Profiling" data-toc-modified-id="Performances-Profiling-3.3"><span class="toc-item-num">3.3 </span>Performances Profiling</a></span></li></ul></li><li><span><a href="#Multiple-Neighborhood-CA" data-toc-modified-id="Multiple-Neighborhood-CA-4"><span class="toc-item-num">4 </span>Multiple Neighborhood CA</a></span><ul class="toc-item"><li><span><a href="#Performances-Profiling" data-toc-modified-id="Performances-Profiling-4.1"><span class="toc-item-num">4.1 </span>Performances Profiling</a></span></li><li><span><a href="#Generate-Video" data-toc-modified-id="Generate-Video-4.2"><span class="toc-item-num">4.2 </span>Generate Video</a></span></li></ul></li></ul></div>
Intro
Cellular Automata are discrete mathematical models of Artificial Life.
Discrete because they exist in a discrete space, for example a 2D cell grid for 2-Dimensional automata.
Other primary properties of a cellular automaton
Step3: 1D Automata
Step4: Animation
Step7: Conway’s Game Of Life
Game Of Life (GOL) is possibly one of the most notorious examples of a cellular automata.
Defined by mathematician John Horton Conway, it plays out on a two dimensional grid for which each cell can be in one of two possible states. Starting from an initial grid configuration the system evolves at each unit step taking into account only the immediate preceding configuration. If for each cell we consider the eight surrounding cells as neighbors, the system transition can be defined by four simple rules.
Step8: Animation in Matplotlib
Step9: Interactive Animation
Step12: Game of Life 3D
Regarding the grid structure, neighbor counting is purely a matter of using a 3-dimensional numpy array and the corresponding indexing.
As for the rules, the original GOL ones are not very stable in a 3D setting.
Step13: Performances Profiling
Relying on the utility code for generic CA
Step14: Multiple Neighborhood CA
Expands further on CA like GOL by considering more neighbors or multiple combinations of neighbors.
See Multiple Neighborhood Cellular Automata (MNCA)
Step15: Performances Profiling
Step16: Generate Video | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from PIL import Image, ImageDraw
import tqdm
from pathlib import Path
%matplotlib notebook
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Intro" data-toc-modified-id="Intro-1"><span class="toc-item-num">1 </span>Intro</a></span></li><li><span><a href="#1D-Automata" data-toc-modified-id="1D-Automata-2"><span class="toc-item-num">2 </span>1D Automata</a></span><ul class="toc-item"><li><span><a href="#Animation" data-toc-modified-id="Animation-2.1"><span class="toc-item-num">2.1 </span>Animation</a></span></li></ul></li><li><span><a href="#Conway’s-Game-Of-Life" data-toc-modified-id="Conway’s-Game-Of-Life-3"><span class="toc-item-num">3 </span>Conway’s Game Of Life</a></span><ul class="toc-item"><li><span><a href="#Animation-in-Matplotlib" data-toc-modified-id="Animation-in-Matplotlib-3.1"><span class="toc-item-num">3.1 </span>Animation in Matplotlib</a></span><ul class="toc-item"><li><span><a href="#Interactive-Animation" data-toc-modified-id="Interactive-Animation-3.1.1"><span class="toc-item-num">3.1.1 </span>Interactive Animation</a></span></li></ul></li><li><span><a href="#Game-of-Life-3D" data-toc-modified-id="Game-of-Life-3D-3.2"><span class="toc-item-num">3.2 </span>Game of Life 3D</a></span></li><li><span><a href="#Performances-Profiling" data-toc-modified-id="Performances-Profiling-3.3"><span class="toc-item-num">3.3 </span>Performances Profiling</a></span></li></ul></li><li><span><a href="#Multiple-Neighborhood-CA" data-toc-modified-id="Multiple-Neighborhood-CA-4"><span class="toc-item-num">4 </span>Multiple Neighborhood CA</a></span><ul class="toc-item"><li><span><a href="#Performances-Profiling" data-toc-modified-id="Performances-Profiling-4.1"><span class="toc-item-num">4.1 </span>Performances Profiling</a></span></li><li><span><a href="#Generate-Video" data-toc-modified-id="Generate-Video-4.2"><span class="toc-item-num">4.2 </span>Generate Video</a></span></li></ul></li></ul></div>
Intro
Cellular Automata are discrete mathematical models of Artificial Life.
Discrete because they exist in a discrete space, for example a 2D cell grid for 2-Dimensional automata.
Other primary properties of a cellular automaton:
* dimensionality of the space/world it lives in
* evolutionary rules
* neighborhood. For example in a 2D setting Moore Neighboorhood consists of the 8 surrounding cells.
* finite number of states
Also, updates are in general applied simultaneously (synchronously) to all cells.
End of explanation
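For a 2D automaton with the Moore neighborhood mentioned above, the neighbor count for every cell can be computed at once by summing shifted copies of the grid — a sketch assuming wrap-around (toroidal) boundaries:

```python
import numpy as np

def moore_neighbour_counts(grid):
    # Sum the 8 shifted copies of the grid (wrap-around boundaries)
    return sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

grid = np.ones((3, 3), dtype=int)
print(moore_neighbour_counts(grid)[1, 1])  # 8: all surrounding cells alive
```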
class Automaton_1D:
def __init__(self, n: int, states: int=2):
1D Automaton
:param n: number of cells
self.n = n
self.space = np.zeros(n, dtype=np.uint8)
self.space[n//2] = 1
#np.array([0,0,0,0,1,0,0,0,0,0])#np.random.choice(2, n)
def update(self, rule: dict):
Update automaton state
tmp_space = self.space.copy()
for i in range(self.n):
neighbours = self.get_neighbours(i)
tmp_space[i] = rule["".join([str(s) for s in neighbours])]
self.space = tmp_space
def get_neighbours(self, i: int):
if i == 0:
return np.insert(self.space[:2], 0, self.space[-1])
elif i == self.n - 1:
return np.insert(self.space[-2:], 2, self.space[0])
else:
return self.space[max(0, i-1):i+2]
rule_0 = {'111': 1, '110': 1, '101': 1, '100': 1, '011': 1, '010': 1, '001': 1, '000': 0}
rule_sierpinski = {'111': 0, '110': 1, '101': 0, '100': 1, '011': 1, '010': 0, '001': 1, '000': 0}
rule_x = {'111': 0, '110': 0, '101': 0, '100': 1, '011': 1, '010': 1, '001': 1, '000': 0}
Explanation: 1D Automata
End of explanation
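The rule dictionaries above follow Wolfram's numbering for elementary cellular automata: each of the eight neighborhood patterns maps to one bit of an 8-bit rule number. A small helper (a sketch, not part of the original notebook) makes this explicit — for instance, rule_sierpinski above is Wolfram rule 90 and rule_x is rule 30:

```python
def rule_number_to_dict(rule_number):
    # Patterns '111'..'000' map to bits 7..0 of the 8-bit rule number
    patterns = ['111', '110', '101', '100', '011', '010', '001', '000']
    bits = format(rule_number, '08b')
    return {p: int(b) for p, b in zip(patterns, bits)}

# Rule 90 reproduces the rule_sierpinski dictionary defined above
print(rule_number_to_dict(90))
```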
automaton_size = 100
automaton_1d = Automaton_1D(automaton_size)
nb_frames = 100
img = Image.new('RGB', (automaton_size, nb_frames), 'white')
draw = ImageDraw.Draw(img)
fig, ax = plt.subplots(dpi=50, figsize=(5, 5))
#im = ax.imshow(img)
plt.axis('off')
def animate(i, automaton, draw, img):
space_img = Image.fromarray(automaton_1d.space.reshape(1, automaton_size)*255)
img.paste(space_img, (0, i)) #mask=space_img
ax.imshow(img)
automaton.update(rule_x)
ani = animation.FuncAnimation(fig, animate, frames=nb_frames, interval=1,
fargs=[automaton_1d, draw, img])
Explanation: Animation
End of explanation
class ConwayGOL_2D:
def __init__(self, N):
2D Conway Game of Life
:param N: grid side size (resulting grid will be a NxN matrix)
self.N = N
self.grid = np.random.choice(2, (N,N))
def update(self):
Update status of the grid
tmpGrid = self.grid.copy()
for i in range(self.N):
for j in range(self.N):
neighbours = self.grid[max(0, i-1):min(i+2,self.N), max(0, j-1):min(j+2,self.N)].sum()
neighbours -= self.grid[i, j]
if self.grid[i, j] == 1:
if neighbours > 3 or neighbours < 2:
tmpGrid[i, j] = 0
elif neighbours == 3:
tmpGrid[i, j] = 1
self.grid = tmpGrid
Explanation: Conway’s Game Of Life
Game Of Life (GOL) is possibly one of the most notorious examples of a cellular automata.
Defined by mathematician John Horton Conway, it plays out on a two dimensional grid for which each cell can be in one of two possible states. Starting from an initial grid configuration the system evolves at each unit step taking into account only the immediate preceding configuration. If for each cell we consider the eight surrounding cells as neighbors, the system transition can be defined by four simple rules.
End of explanation
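The four rules can also be applied to the whole grid at once with array operations instead of the per-cell loops used above — a vectorized sketch (note it assumes wrap-around boundaries, unlike the edge-clipping ConwayGOL_2D class):

```python
import numpy as np

def gol_step(grid):
    # Neighbor counts via the 8 shifted copies of the grid (toroidal wrap)
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # Survive with 2-3 neighbours, be born with exactly 3
    return ((grid == 1) & ((n == 2) | (n == 3)) |
            (grid == 0) & (n == 3)).astype(int)

# A "blinker" oscillates with period 2
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
print(gol_step(gol_step(blinker)))  # back to the original pattern
```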
gol = ConwayGOL_2D(100)
fig, ax = plt.subplots(dpi=100, figsize=(5, 4))
im = ax.imshow(gol.grid, cmap='Greys', interpolation='nearest')
plt.axis('off')
def animate(i):
gol.update()
im.set_data(gol.grid)
#ani = animation.FuncAnimation(fig, animate, frames=1000, interval=100).save('basic_animation.mp4', writer=animation.FFMpegFileWriter(fps=30))
animation.FuncAnimation(fig, animate, frames=1000, interval=100)
#plt.show()
Explanation: Animation in Matplotlib
End of explanation
from ipywidgets import interact, widgets
def run_conwayGOL_2D(size):
gol = ConwayGOL_2D(size)
fig, ax = plt.subplots(dpi=100, figsize=(5, 4))
im = ax.imshow(gol.grid, cmap='Greys', interpolation='nearest')
plt.axis('off')
def animate(i):
gol.update()
im.set_data(gol.grid)
return animation.FuncAnimation(fig, animate, frames=1000, interval=100)
from ipywidgets import interact, widgets
interact(run_conwayGOL_2D, size=(10,100))
Explanation: Interactive Animation
End of explanation
class ConwayGOL_3D:
def __init__(self, N):
3D Conway Game of Life
:param N: 3D grid side size (resulting grid will be a NxNxN matrix)
self.N = N
self.grid = np.random.choice(2, (N,N,N))
def update(self):
Update status of the grid
tmpGrid = self.grid.copy()
for z in range(self.N):
for y in range(self.N):
for x in range(self.N):
neighbours = self.grid[max(0, z-1):min(z+2,self.N),
max(0, y-1):min(y+2,self.N),
max(0, x-1):min(x+2,self.N)].sum()
neighbours -= self.grid[z, y, x]
if self.grid[z, y, x] == 1:
if neighbours > 3 or neighbours < 2:
tmpGrid[z, y, x] = 0
elif neighbours == 3:
tmpGrid[z, y, x] = 1
self.grid = tmpGrid
Explanation: Game of Life 3D
Regarding the grid structure, neighbor counting is purely a matter of using a 3-dimensional numpy array and the corresponding indexing.
As for the rules, the original GOL ones are not very stable in a 3D setting.
End of explanation
%load_ext autoreload
%autoreload 2
from Automaton import AutomatonND
rule = {'neighbours_count_born': 3, # count required to make a cell alive
'neighbours_maxcount_survive': 3, # max number (inclusive) of neighbours that a cell can handle before dying
'neighbours_mincount_survive': 2, # min number (inclusive) of neighbours that a cell needs in order to stay alive
}
nb_rows = nb_cols = 400
%%prun -s cumulative -l 30 -r
# We profile the cell, sort the report by "cumulative
# time", limit it to 30 lines
ca_2d = AutomatonND((nb_rows, nb_cols), rule, seed=11)
simulation_steps = 100
for step in tqdm.tqdm(range(simulation_steps)):
ca_2d.update()
plt.imshow(ca_2d.grid)
Explanation: Performances Profiling
Relying on the utility code for generic CA
End of explanation
%load_ext autoreload
%autoreload 2
import cv2
from PIL import Image as IMG
from Automaton import AutomatonND, MultipleNeighborhoodAutomaton, get_kernel_2d_square
from mnca_utils import *
from ds_utils.video_utils import generate_video
configs = [
{'neighbours_count_born': [0.300, 0.350],
'neighbours_maxcount_survive': [0.350, 0.400],
'neighbours_mincount_survive': [0.750, 0.850],
},
]
kernels = [
get_circle_grid(17, 17, radius_minmax=[2,10]),
]
nb_rows = nb_cols = 200
mnca = MultipleNeighborhoodAutomaton((nb_rows, nb_cols), configs=configs, kernels=kernels, seed=11)
grid = get_circle_grid(mnca.shape[0], mnca.shape[1], radius_minmax=[0,50])
mnca.set_init_grid(grid)
simulation_steps = 40
fig, ax = plt.subplots(dpi=100, figsize=(5, 4))
im = ax.imshow(mnca.grid, cmap='Greys', interpolation='nearest')
plt.axis('off')
def animate(i):
mnca.update()
im.set_data(mnca.grid)
animation.FuncAnimation(fig, animate, frames=simulation_steps, interval=10)
Explanation: Multiple Neighborhood CA
Expands further on CA like GOL by considering more neighbors or multiple combinations of neighbors.
See Multiple Neighborhood Cellular Automata (MNCA)
End of explanation
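The core MNCA operation — counting live neighbours under an arbitrary kernel and thresholding the live fraction — can be sketched in pure numpy (a simplified illustration with wrap-around boundaries; the MultipleNeighborhoodAutomaton utility class itself is not shown in this notebook):

```python
import numpy as np

def neighbour_fraction(grid, kernel):
    # Fraction of live cells under the kernel, centred on each cell (wrap-around)
    ky, kx = kernel.shape
    cy, cx = ky // 2, kx // 2
    total = np.zeros(grid.shape, dtype=float)
    for (dy, dx), w in np.ndenumerate(kernel):
        if w:
            total += np.roll(np.roll(grid, dy - cy, 0), dx - cx, 1)
    return total / kernel.sum()

# A ring-shaped neighborhood, analogous to the circle kernels used below
ring = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])
grid = np.ones((4, 4), dtype=int)
print(neighbour_fraction(grid, ring))  # all 1.0 on an all-live grid
```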
%%prun -s cumulative -l 30 -r
# We profile the cell, sort the report by "cumulative
# time", limit it to 30 lines
configs = [
{'neighbours_count_born': [0.300, 0.350],
'neighbours_maxcount_survive': [0.350, 0.400],
'neighbours_mincount_survive': [0.750, 0.850],
},
{'neighbours_count_born': [0.430, 0.550],
'neighbours_maxcount_survive': [0.100, 0.280],
'neighbours_mincount_survive': [0.120, 0.150],
},
]
kernels = [
get_circle_grid(17, 17, radius_minmax=[2,10]),
get_circle_grid(9, 9, radius_minmax=[1,3]),
]
nb_rows = nb_cols = 200
simulation_steps = 40
mnca = MultipleNeighborhoodAutomaton((nb_rows, nb_cols), configs=configs, kernels=kernels, seed=11)
grid = get_circle_grid(mnca.shape[0], mnca.shape[1], radius_minmax=[0,50])
mnca.set_init_grid(grid)
for _ in range(simulation_steps):
mnca.update()
Explanation: Performances Profiling
End of explanation
def base_frame_gen(frame_count, automaton):
automaton.update()
img = cv2.normalize(automaton.grid, None, 255, 0, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
return img
nb_rows = nb_cols = 300
simulation_steps = 120
automaton_name = 'mca_6polygon_kernel_fill_radinc'
out_path = Path.home() / f'Documents/graphics/generative_output/mnca/{automaton_name}/{nb_rows}x{nb_cols}_{simulation_steps}'
out_path.mkdir(exist_ok=False, parents=True)
img_num = [25,3,6,9,10,11,12,15,16,17]
with open(str(out_path / 'logs.txt'), 'w+') as f:
for i in range(10):
configs = [
{'neighbours_count_born': [0.300, 0.350],
'neighbours_maxcount_survive': [0.350, 0.400],
'neighbours_mincount_survive': [0.750, 0.850],
},
# {'neighbours_count_born': [0.430, 0.550],
# 'neighbours_maxcount_survive': [0.100, 0.280],
# 'neighbours_mincount_survive': [0.120, 0.150],
# },
]
# grid1 = get_polygon_mask(17, 17, 4, 4, fill=0)
# grid2 = get_polygon_mask(17, 17, 4, 1, fill=0)
# grid3 = get_polygon_mask(17, 17, 4, 10, fill=0)
# hexa_grid = (grid1 | grid2 | grid3)
# img_path = Path.home() / 'Documents/graphics/generative_output/flat_hexa_logo/9/run_{img_num[i]}.png'
# hexa_grid = get_image_init_grid(img_path, (17, 17))
kernels = [
#hexa_grid,
get_polygon_mask(17, 17, segments=6, radius=i+1, fill=1)
#get_circle_grid(17, 17, radius_minmax=[1+i,5+i]),
#get_circle_grid(9, 9, radius_minmax=[1,3]),
]
automaton = MultipleNeighborhoodAutomaton((nb_rows, nb_cols), configs=configs, kernels=kernels, seed=i)
img_path = Path.home() / f'Documents/graphics/generative_output/flat_hexa_logo/9/run_{img_num[i]}.png'
#grid = get_image_init_grid(img_path, automaton.shape)
#grid = get_perlin_grid(automaton.shape, 50, seed=i)
grid = get_circle_grid(automaton.shape[0], automaton.shape[1], radius_minmax=[0,80])
automaton.set_init_grid(grid)
generate_video(str(out_path/f'run_{i}.mp4'), (automaton.shape[1], automaton.shape[0]),
frame_gen_fun = lambda i: base_frame_gen(i, automaton),
nb_frames = simulation_steps, is_color=False)
f.write(str(configs) + '\n')
grid = get_polygon_mask(17, 17, segments=120, radius=6, fill=0)
plt.imshow(grid)
Explanation: Generate Video
End of explanation |
8,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step4: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step6: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
Step7: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step8: Question
Step9: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
Step10: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step11: Problem 4
Convince yourself that the data is still good after shuffling!
Step12: Finally, let's save the data for later reuse
Step14: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions
Step15: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint
Step16: Question | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
print(type(train_filename))
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19000. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
#Problem 1, Tong's solution -> done
def display_image(folder_index = 0, image_index = 0):
try:
sample_folder = train_folders[folder_index]
image_files = os.listdir(sample_folder)
sample_image = os.path.join(sample_folder, image_files[image_index])
print('Displaying image: ', sample_image)
display(Image(filename = sample_image ))
except:
print('Indices out of bound.')
display_image(1, 5)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
try:
      # note: scipy.ndimage.imread was deprecated and later removed from SciPy;
      # on newer installs, imageio.imread is the usual drop-in replacement.
      image_data = (ndimage.imread(image_file).astype(float) -
                    pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
Explanation: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
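The normalization is easy to sanity-check in isolation: subtracting half the pixel depth and dividing by the depth maps [0, 255] onto [-0.5, 0.5]. A standalone sketch:

```python
import numpy as np

# Standalone check of the normalization used in load_letter:
# pixel values in [0, 255] map onto [-0.5, 0.5] with mean near zero.
pixel_depth = 255.0
raw = np.array([0.0, 127.5, 255.0])
normalized = (raw - pixel_depth / 2) / pixel_depth
assert normalized.min() == -0.5 and normalized.max() == 0.5
```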
#Problem 2, Tong's solution -> done
def display_nd_image(folder_index = 0, image_index = 0):
try:
folder = train_datasets[folder_index]
print("Display image in folder: ", folder)
with open(folder, 'rb') as f:
sample_dataset = pickle.load(f)
img = sample_dataset[image_index, :, :]
plt.imshow(img, cmap = "Greys")
plt.show()
except:
print('Something is wrong.')
display_nd_image(1, 5)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
#Problem 3, Tong's solution -> done
print(train_datasets)
sizes = []
for dataset in train_datasets:
with open(dataset, 'rb') as f:
data = pickle.load(f)
sizes.append(data.shape[0])
print("The samples sizes for each class are: ")
print(sizes)
print("Average: ", np.average(sizes))
print("Stdev: ", np.std(sizes))
print("Sum: ", np.sum(sizes))
#Very balanced
Explanation: Question: why does the image look weird?
Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
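The per-class bookkeeping in merge_datasets reduces to integer division; a standalone check of the strides implied by the default sizes used below:

```python
# With 10 classes, each class contributes valid_size // num_classes
# validation rows and train_size // num_classes training rows; the
# combined slice taken from each class's pickle has length end_l.
valid_size, train_size, num_classes = 10000, 200000, 10
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
end_l = vsize_per_class + tsize_per_class
assert (vsize_per_class, tsize_per_class, end_l) == (1000, 20000, 21000)
```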
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
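Why indexing both arrays with a single permutation is safe: each row keeps its label. A tiny standalone demonstration:

```python
import numpy as np

# Rows [0,1], [2,3], [4,5] carry labels 10, 11, 12 respectively;
# shuffling both with one permutation preserves the pairing.
data = np.arange(6).reshape(3, 2)
labels = np.array([10, 11, 12])
perm = np.random.permutation(3)
shuffled_data, shuffled_labels = data[perm], labels[perm]
for row, lab in zip(shuffled_data, shuffled_labels):
    assert row[0] // 2 == lab - 10  # pairing intact
```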
#Problem 4, Tong's solution -> done
#Print some random images and lables from each set, see if they match
def check_data(dataset, lables, index=0):
labelset = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I','J']
img = dataset[index, :, :]
label = labelset[lables[index]]
print("Image:")
plt.imshow(img, cmap = "Greys")
plt.show()
print('Lable: ', label)
check_data(train_dataset, train_labels, index = 1001)
check_data(valid_dataset, valid_labels, index = 11)
check_data(test_dataset, test_labels, index = 9)
#LGTM
print(train_labels[1:100])
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
Explanation: Finally, let's save the data for later reuse:
End of explanation
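Loading the file back later mirrors the save; a self-contained round-trip sketch on a throwaway file (the real dict uses the keys saved above):

```python
import os
import pickle
import tempfile

# Round-trip a small stand-in for the 'save' dict through pickle.
save = {'train_labels': [0, 1, 0], 'test_labels': [1, 1]}
path = os.path.join(tempfile.mkdtemp(), 'notMNIST_demo.pickle')
with open(path, 'wb') as f:
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
with open(path, 'rb') as f:
    restored = pickle.load(f)
assert restored == save
```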
#Problem 5, Tong's solution -> done
#Why is there overlap??!
# Quick sanity checks on the merged training array
print(train_dataset.shape[0])
print(train_dataset.item(100, 27, 6))
#Brute-force checking how many rows are identical between train and valid
def overlap_rate(a_dataset, b_dataset, sample_size = 1000):
identical_count = 0
test_size = min(a_dataset.shape[0], sample_size)
for i in range(test_size):
a_record = a_dataset[i, :, :]
for j in range(b_dataset.shape[0]):
b_record = b_dataset[j, :, :]
if np.array_equal(a_record, b_record):
identical_count += 1
print('Sample size:', str(test_size))
    print('Fraction of a_dataset samples that also appear in b_dataset:', identical_count*1.0/test_size)
overlap_rate(train_dataset, valid_dataset) #39%, surprisingly high!
overlap_rate(train_dataset, test_dataset) #58%, even higher
# Optional questions:
# - consider using np.allclose for near duplicates
# - sanitized validation and test set: leave for later...
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
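The nested-loop scan above is O(n·m); a common faster alternative (my suggestion, not part of the assignment) hashes each image's raw bytes and intersects the sets — this catches exact duplicates only:

```python
import hashlib
import numpy as np

def image_hashes(dataset):
    # one hex digest per image; identical pixel bytes -> identical hash
    return {hashlib.sha1(img.tobytes()).hexdigest() for img in dataset}

a = np.zeros((3, 2, 2), dtype=np.float32)
b = np.ones((4, 2, 2), dtype=np.float32)
b[0] = 0.0  # plant one exact duplicate of a's (all-zero) images
overlap = image_hashes(a) & image_hashes(b)
assert len(overlap) == 1
```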
%%capture
#Learn reshape
#l = np.ndarray(range(27), shape=(3, 3, 3))
a = np.arange(27).reshape((3, 3, 3))
b = a.reshape(3, 9)
print(a);
print(b);
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation
#Problem 6, Tong's solution Version 1: no tuning of hyperparameters
#Take subset of training data, reshape for regression
train_size = 1000
train = train_dataset[:train_size, :, :]
test = test_dataset.reshape(test_dataset.shape[0], image_size * image_size)
X = train.reshape(train_size, image_size * image_size)
Y = train_labels[:train_size]
#Build regression graph
logreg = LogisticRegression(C=1.0)
#Fit the model
logreg.fit(X, Y)
#Test predictions on test set
Z = logreg.predict(test)
#Evaluate
np.mean(Z == test_labels)  # Accuracy ~85%
#V2: tune hyperparameters with the validation set. First do this 'by hand'
valid = valid_dataset.reshape(valid_dataset.shape[0], image_size * image_size)
Cs = np.logspace(0.001, 10, num=50)  # note: logspace takes exponents, so C spans ~10**0 up to 10**10
Accuracys = []
for C in Cs:
logregC = LogisticRegression(C=C)
logregC.fit(X, Y)
pred = logregC.predict(valid)
acc = np.mean(pred == valid_labels)
Accuracys.append(acc)
Accuracys = np.array(Accuracys)
plt.plot(Cs, Accuracys)
#Looks like changing C doesn't matter all that much. Why?
Explanation: Question: is there a more elegant way to do the reshaping?
End of explanation |
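One answer to the question above: pass -1 and let NumPy infer the leading dimension, which drops the explicit shape[0] bookkeeping:

```python
import numpy as np

# reshape(-1, ...) infers the row count, so the same call works for the
# train, valid and test stacks without reading .shape[0] first.
stack = np.zeros((5, 28, 28))
flat = stack.reshape(-1, 28 * 28)
assert flat.shape == (5, 784)
```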
8,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with Symbulate
Section 3. Multiple Random Variables and Joint Distributions
<Random variables | Contents | Conditioning>
Every time you start Symbulate, you must first run (SHIFT-ENTER) the following commands.
Step1: This section provides an introduction to the Symbulate commands for simulating and summarizing values of multiple random variables.
We are often interested in several RVs defined on the same probability space. Of interest are properties of the joint distribution which describe the relationship between the random variables.
<a id='coin_heads_alt'></a>
Example 3.1
Step2: Each RV X and Y can be simulated individually as in Section 2. Within the context of several random variables, the distribution of a single random variable is called its marginal distribution. The plot below approximates the marginal distribution of Y, the number of switches between Heads and Tails in five coin flips. Note that Y can take values 0, 1, 2, 3, 4.
Step3: However, simulating values of X and Y individually does not provide any information about the relationship between the variables. Joining X and Y with an ampersand & and calling .sim() simultaneously simulates the pair of (X, Y) values for each simulated outcome. The simulated results can be used to approximate the joint distribution of X and Y which describes the possible pairs of values and their relative likelihoods.
Step4: Simulating (X, Y) pairs in this way, we can examine the relationship between the random variables X and Y. For example, for an outcome with X = 5 all five flips are Heads so no switches occur and it must be true that Y = 0. For outcomes with X = 4, either Y = 1 or Y = 2, with a value of 2 occuring more frequently. Thus while Y can marginally take values 0, 1, 2, 3, 4, only Y = 0 is possible when X = 5, only Y = 1 or Y = 2 is possible when X = 4, and so on. Also, for example, outcomes for which X = 2 and Y = 3 occur about four times as frequently as those for which X = 5 and Y = 0.
<a id='sum_max_dice'></a>
Exercise 3.2
Step5: Solution
<a id='visualizing'></a>
Example 3.3
Step6: For discrete random variables it is recommended to use the jitter=True option with scatterplots so that points do not overlap.
Step7: Using the type="tile" option create a tile plot (a.k.a. heat map) where the color of each rectangle represents the relative frequency of the corresponding simulated (X, Y) pair .
Step8: <a id='plot_joint_sum_max'></a>
Exercise 3.4
Step9: Solution
<a id='indep_var'></a>
Example 3.5
Step10: The flips of the coin and the rolls of the die are physically independent so it is safe to assume that the random variables X and Y are independent. The AssumeIndependent command below tells Symbulate to treat the random variables X and Y as independent.
Step11: When pairs of independent random variables are simulated, the values of one variable are generated independently of the values of the other.
Step12: For example, the joint probability P(X = 1, Y = 2), which is about 0.035, is equal to the product of the marginal probabilities P(X = 1) and P(Y = 2), as approximated in the following code.
Step13: <a id='#assume_indep'></a>
Exercise 3.6
Step14: Solution
<a id='indep_var_via_dist'></a>
Example 3.7
Step15: Random variables are "independent and identically distributed (i.i.d.)" when they are independent and have a common marginal distribution. For example, if V represents the number of heads in two flips of a penny and W the number of Heads in two flips of a dime, then V and W are i.i.d., with a common marginal Binomial(n=2, p=0.5) distribution. For i.i.d. random variables, defining the joint distribution using the "exponentiation" notation ** makes the code a little more compact.
Step16: <a id='indep_via_unif'></a>
Exercise 3.8
Step17: Solution
<a id='sum_indep_unif'></a>
Example 3.9
Step18: Notice that Z takes values in the interval [0, 2], but Z does not have a Uniform distribution. Intuitively, there is only one (X, Y) pair, (0, 0), for which the sum is 0, but many (X, Y) pairs - (1, 0), (0, 1), (0.5, 0.5), (0.8, 0.2), etc - for which the sum is 1.
<a id='prod_indep'></a>
Exercise 3.10
Step19: Solution
<a id='corr_numb_alt'></a>
Example 3.11
Step20: The positive correlation coefficient of about 0.6 represents a positive association
Step21: However, a correlation coefficient of 0 does not necessarily imply that random variables are independent. Correlation measures a particular kind of relationship, namely, how strong is the linear association between two random variables? That is, how closely do the $(x, y)$ pairs tend to follow a straight line?
Consider Example 3.1
Step22: <a id='corr_sum_max'></a>
Exercise 3.12
Step23: Solution
Back to Contents
Additional Exercises
<a id='ratio_max_sum'></a>
Exercise 3.13
Step24: Hint
Solution
<a id='est_corr_unpack'></a>
Exercise 3.14
Step25: 2) Approximate the joint distribution of X and Z.
Step26: Hint
Solution
<a id='Hints'></a>
Hints for Additional Exercises
<a id='hint_ratio_max_sum'></a>
Exercise 3.13
Step27: Back
<a id='sol_plot_joint_sum_max'></a>
Exercise 3.4
Step28: Back
<a id ='sol_assume_indep'></a>
Exercise 3.6
Step29: Back
<a id ='sol_indep_via_unif'></a>
Exercise 3.8
Step30: Back
<a id ='sol_prod_indep'></a>
Exercise 3.10
Step31: Note that Z takes values in the interval [2 * 1, 5 * 3] but some values have greater density than others.
Back
<a id ='sol_corr_sum_max'></a>
Exercise 3.12
Step32: Back
<a id ='sol_ratio_max_sum'></a>
Exercise 3.13
Step33: Back
<a id ='sol_est_corr_unpack'></a>
Exercise 3.14
Step34: 2) Approximate the joint distribution of X and Z. | Python Code:
from symbulate import *
%matplotlib inline
Explanation: Getting Started with Symbulate
Section 3. Multiple Random Variables and Joint Distributions
<Random variables | Contents | Conditioning>
Every time you start Symbulate, you must first run (SHIFT-ENTER) the following commands.
End of explanation
def number_switches(x):
count = 0
for i in list(range(1, len(x))):
if x[i] != x[i-1]:
count += 1
return count
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = RV(P, number_switches)
outcome = (0, 1, 0, 0, 1)
X(outcome), Y(outcome)
Explanation: This section provides an introduction to the Symbulate commands for simulating and summarizing values of multiple random variables.
We are often interested in several RVs defined on the same probability space. Of interest are properties of the joint distribution which describe the relationship between the random variables.
<a id='coin_heads_alt'></a>
Example 3.1: The joint distribution of the number of Heads and the number of switches between Heads and Tails in coin flips
Let X be the number of Heads in a sequence of five coin flips, and let Y be the number of times the sequence switches between Heads and Tails (not counting the first toss). For example, for the outcome (0, 1, 0, 0, 1), X = 2 and since a switch occurs on the second, third, and fifth flip, Y = 3. The code below defines the probability space and the two RVs. The RV Y is defined through a user defined function; see Example 2.18.
End of explanation
Y.sim(10000).plot()
Explanation: Each RV X and Y can be simulated individually as in Section 2. Within the context of several random variables, the distribution of a single random variable is called its marginal distribution. The plot below approximates the marginal distribution of Y, the number of switches between Heads and Tails in five coin flips. Note that Y can take values 0, 1, 2, 3, 4.
End of explanation
(X & Y).sim(10000).tabulate()
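For reference, the same tabulation can be reproduced with nothing but the standard library — a sketch (function name is mine, not Symbulate's) that makes the joint-simulation mechanics explicit:

```python
import random
from collections import Counter

random.seed(0)

def heads_and_switches():
    # one simulated outcome: five fair flips -> (number of heads, switches)
    seq = [random.randint(0, 1) for _ in range(5)]
    switches = sum(seq[i] != seq[i - 1] for i in range(1, 5))
    return sum(seq), switches

table = Counter(heads_and_switches() for _ in range(10000))
# X = 5 forces Y = 0: (5, 0) occurs, (5, 1) cannot.
assert table[(5, 0)] > 0 and (5, 1) not in table
```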
Explanation: However, simulating values of X and Y individually does not provide any information about the relationship between the variables. Joining X and Y with an ampersand & and calling .sim() simultaneously simulates the pair of (X, Y) values for each simulated outcome. The simulated results can be used to approximate the joint distribution of X and Y which describes the possible pairs of values and their relative likelihoods.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: Simulating (X, Y) pairs in this way, we can examine the relationship between the random variables X and Y. For example, for an outcome with X = 5 all five flips are Heads so no switches occur and it must be true that Y = 0. For outcomes with X = 4, either Y = 1 or Y = 2, with a value of 2 occurring more frequently. Thus while Y can marginally take values 0, 1, 2, 3, 4, only Y = 0 is possible when X = 5, only Y = 1 or Y = 2 is possible when X = 4, and so on. Also, for example, outcomes for which X = 2 and Y = 3 occur about four times as frequently as those for which X = 5 and Y = 0.
<a id='sum_max_dice'></a>
Exercise 3.2: Joint distribution of the sum and the max of two dice rolls
Roll two fair six-sided dice and let X be their sum and Y be the larger (max) of the two numbers rolled (or the common value if a tie).
Define appropriate RVs and simulate 10000 (X, Y) pairs using & and sim(). Tabulate the results.
End of explanation
(X & Y).sim(10000).plot()
Explanation: Solution
<a id='visualizing'></a>
Example 3.3: Visualizing a joint distribution
The simulated joint distribution in Example 3.1 can be visualized by calling .plot(). Calling .plot() for two discrete random variables returns a scatterplot of simulated (X,Y) pairs.
End of explanation
(X & Y).sim(10000).plot(type="scatter", jitter=True)
Explanation: For discrete random variables it is recommended to use the jitter=True option with scatterplots so that points do not overlap.
End of explanation
(X & Y).sim(10000).plot(type='tile')
Explanation: Using the type="tile" option creates a tile plot (a.k.a. heat map) where the color of each rectangle represents the relative frequency of the corresponding simulated (X, Y) pair.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: <a id='plot_joint_sum_max'></a>
Exercise 3.4: Visualizing the joint distribution of the sum and the max of two dice rolls
Continuing Exercise 3.2, let X be the sum and Y be the larger (max) of two rolls of a fair six-sided die.
Simulate 10000 (X, Y) pairs using & and sim(). Display the approximate joint distribution with both a tile plot and scatterplot (with jitter=True).
End of explanation
Flips = BoxModel([1, 0], size=2)
X = RV(Flips, sum)
Rolls = BoxModel([1, 0], probs=[1/6, 5/6], size=3) # 1 = roll 1; 0 = do not roll 1
Y = RV(Rolls, sum)
Explanation: Solution
<a id='indep_var'></a>
Example 3.5: Defining independent random variables
Random variables X and Y are independent if knowing information about the value of one does not influence the distribution of the other. For independent random variables, their joint distribution is the product of the respective marginal distributions.
For example, let X represent the number of Heads in two flips of a fair coin, and let Y represent the number of 1s rolled in three rolls of a fair six-sided die.
End of explanation
X, Y = AssumeIndependent(X, Y)
Explanation: The flips of the coin and the rolls of the die are physically independent so it is safe to assume that the random variables X and Y are independent. The AssumeIndependent command below tells Symbulate to treat the random variables X and Y as independent.
End of explanation
(X & Y).sim(10000).tabulate()
Explanation: When pairs of independent random variables are simulated, the values of one variable are generated independently of the values of the other.
End of explanation
(X.sim(10000).count_eq(1) / 10000) * (Y.sim(10000).count_eq(2) / 10000)
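As a cross-check outside Symbulate, the same factorization holds for plain NumPy draws from these marginals — a sketch of my own, with an assumed seed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100000
x = rng.binomial(2, 0.5, size=n)      # marginal of X
y = rng.binomial(3, 1 / 6, size=n)    # marginal of Y, drawn independently
joint = np.mean((x == 1) & (y == 2))  # estimate of P(X = 1, Y = 2)
product = np.mean(x == 1) * np.mean(y == 2)
assert abs(joint - product) < 0.01    # they agree up to sampling error
```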
Explanation: For example, the joint probability P(X = 1, Y = 2), which is about 0.035, is equal to the product of the marginal probabilities P(X = 1) and P(Y = 2), as approximated in the following code.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: <a id='#assume_indep'></a>
Exercise 3.6: Using AssumeIndependent for defining independent random variables
Let X be the number of heads in five flips of a fair coin and Y be the sum of two rolls of a fair six-sided die. Define appropriate RVs, and assume independence with the AssumeIndependent() function. Simulate 10000 (X, Y) pairs and plot the approximate joint distribution.
End of explanation
X, Y = RV(Binomial(n=2, p=0.5) * Binomial(n=3, p=1/6))
(X & Y).sim(10000).tabulate()
Explanation: Solution
<a id='indep_var_via_dist'></a>
Example 3.7: Defining independent random variables via distributions
We have seen that it is common to define a random variable via its distribution, as in Example 2.7. When dealing with multiple random variables it is common to specify the marginal distribution of each and assume independence. In Example 3.5, the marginal distribution of X is Binomial with n=2 and p=0.5 while the marginal distribution of Y is Binomial with n=3 and p=1/6. Independence of distributions is represented by the asterisk * (reflecting that under independence the joint distribution is the product of the respective marginal distributions).
End of explanation
V, W = RV(Binomial(n=2, p=0.5) ** 2)
Explanation: Random variables are "independent and identically distributed (i.i.d.)" when they are independent and have a common marginal distribution. For example, if V represents the number of heads in two flips of a penny and W the number of Heads in two flips of a dime, then V and W are i.i.d., with a common marginal Binomial(n=2, p=0.5) distribution. For i.i.d. random variables, defining the joint distribution using the "exponentiation" notation ** makes the code a little more compact.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: <a id='indep_via_unif'></a>
Exercise 3.8: Defining independent random variables
Let X be a random variable with a Binomial(n=4, p=0.25) distribution, and let Y be a random variable with a Binomial(n=3, p=0.7) distribution. Use * to specify the joint distribution of the RVs X and Y as the product of their marginal distributions. Simulate 10000 (X,Y) pairs and approximate the joint distribution.
End of explanation
X, Y = RV(Uniform(0, 1) * Uniform(0,1))
Z = X + Y
Z.sim(10000).plot()
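The non-uniformity of the sum can be cross-checked with plain NumPy (not Symbulate); a small sketch with an assumed seed, comparing the mass near the middle of [0, 2] with the mass near an endpoint:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.random(100000) + rng.random(100000)   # sum of two Uniform(0, 1)
near_middle = np.mean(np.abs(z - 1.0) < 0.1)  # P(0.9 < Z < 1.1) ~ 0.19
near_edge = np.mean(z < 0.2)                  # P(Z < 0.2) = 0.02 exactly
assert near_middle > 5 * near_edge            # much more mass near 1
```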
Explanation: Solution
<a id='sum_indep_unif'></a>
Example 3.9: Sum of independent RVs
In Example 2.16 we introduced transformations of random variables. It is often of interest to look at a transformation of multiple random variables. Transformations of random variables such as addition, subtraction, etc. are only possible when the random variables are defined on the same probability space.
For example, assume that X and Y are independent and that each of X and Y has a marginal Uniform distribution on [0, 1]. Define the random variable Z = X + Y, the sum of two independent RVs X and Y.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: Notice that Z takes values in the interval [0, 2], but Z does not have a Uniform distribution. Intuitively, there is only one (X, Y) pair, (0, 0), for which the sum is 0, but many (X, Y) pairs - (1, 0), (0, 1), (0.5, 0.5), (0.8, 0.2), etc - for which the sum is 1.
<a id='prod_indep'></a>
Exercise 3.10: Product of independent RVs
Let X represent the base (inches) and Y the height (inches) of a "random rectangle". Assume that X and Y are independent, and that X has a Uniform(2, 5) distribution and Y has a Uniform(1, 3) distribution. Let Z = X * Y represent the area of the rectangle. Simulate 10000 values of Z and display its approximate distribution in a plot.
End of explanation
X, Y = RV(Binomial(n=2, p=0.5) * Binomial(n=3, p=0.5))
Z = X + Y
xz = (X & Z).sim(10000)
xz.plot(jitter=True)
xz.corr()
Explanation: Solution
<a id='corr_numb_alt'></a>
Example 3.11: Correlation between random variables
The strength of the relationship between two random variables can be estimated with the correlation coefficient of the simulated pairs using .corr(). For example, let X represent the number of Heads in two flips of a penny and let Y represent the number of Heads in three flips of a dime. Then Z = X + Y is the total number of Heads in the five flips. The following code simulates many (X, Z) pairs and estimates their correlation
End of explanation
xy = (X & Y).sim(10000)
xy.plot(jitter=True)
xy.corr()
Explanation: The positive correlation coefficient of about 0.6 represents a positive association: above average values of X tend to be associated with above average values of Z; likewise for below average values.
If two random variables are independent, then there is no association between them and so their correlation coefficient is 0.
End of explanation
def number_switches(x):
count = 0
for i in list(range(1, len(x))):
if x[i] != x[i-1]:
count += 1
return count
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = RV(P, number_switches)
xy = (X & Y).sim(10000)
xy.plot(jitter = True)
xy.corr()
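Because five flips give only 2**5 equally likely sequences, the "zero correlation yet dependent" fact for (heads, switches) can also be verified exactly in plain Python, without simulation:

```python
from itertools import product

outcomes = list(product([0, 1], repeat=5))  # all 32 flip sequences
heads = [sum(o) for o in outcomes]
switches = [sum(o[i] != o[i - 1] for i in range(1, 5)) for o in outcomes]
n = len(outcomes)
eh, es = sum(heads) / n, sum(switches) / n
cov = sum((h - eh) * (s - es) for h, s in zip(heads, switches)) / n
assert abs(cov) < 1e-12  # uncorrelated...
assert all(s == 0 for h, s in zip(heads, switches) if h == 5)  # ...yet dependent
```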
Explanation: However, a correlation coefficient of 0 does not necessarily imply that random variables are independent. Correlation measures a particular kind of relationship, namely, how strong is the linear association between two random variables? That is, how closely do the $(x, y)$ pairs tend to follow a straight line?
Consider Example 3.1: X is the number of Heads in a sequence of five coin flips, and Y is the number of times the sequence switches between Heads and Tails (not counting the first toss). Then X and Y are not independent. For example, if X=0 then it must be true that Y=0. However, the correlation coefficient between X and Y is 0.
End of explanation
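Since five flips give only $2^5 = 32$ equally likely sequences, the zero correlation here can be verified exactly by enumeration, with no simulation at all (plain Python, mirroring the number_switches function defined above):

```python
from itertools import product

def number_switches(seq):
    return sum(seq[i] != seq[i - 1] for i in range(1, len(seq)))

pairs = [(sum(seq), number_switches(seq)) for seq in product([0, 1], repeat=5)]
n = len(pairs)                                 # 32 equally likely outcomes
ex = sum(x for x, _ in pairs) / n              # E[X] = 2.5
ey = sum(y for _, y in pairs) / n              # E[Y] = 2.0
cov = sum(x * y for x, y in pairs) / n - ex * ey
print(cov)                                     # exactly 0

# ...yet X and Y are dependent: X = 0 forces Y = 0
print(all(y == 0 for x, y in pairs if x == 0))
```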
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: <a id='corr_sum_max'></a>
Exercise 3.12: Correlation between the sum and the max of two dice
Continuing Exercise 3.2, display the approximate joint distribution of X and Y, the sum and the max, respectively, of two fair six-sided dice rolls, and estimate the correlation coefficient.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: Solution
Back to Contents
Additional Exercises
<a id='ratio_max_sum'></a>
Exercise 3.13: Ratio of max and sum of two six-sided dice
Define Z as the ratio between the larger of two six-sided dice rolls and the sum of two six-sided dice rolls. Approximate the distribution of Z and estimate P(Z <= 0.7), the probability that the ratio is less than or equal to 0.7.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: Hint
Solution
<a id='est_corr_unpack'></a>
Exercise 3.14: Estimating correlation and plotting joint distributions
Suppose X is Uniform(0,2) and Y is Uniform(1,3). Define Z to be equal to X + Y (assuming X and Y are independent).
1) Estimate the correlation coefficient between X and Z.
End of explanation
### Type your commands in this cell and then run using SHIFT-ENTER.
Explanation: 2) Approximate the joint distribution of X and Z.
End of explanation
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
Y = RV(P, max)
(X & Y).sim(10000).tabulate()
Explanation: Hint
Solution
<a id='Hints'></a>
Hints for Additional Exercises
<a id='hint_ratio_max_sum'></a>
Exercise 3.13: Hint
In Exercise 3.2 you simulated the sum and max of two dice. In Exercise 3.10 you simulated the product of two random variables. Use a count function to estimate P(Z <= 0.7).
Back
<a id='hint_est_corr_unpack'></a>
Exercise 3.14: Hint
In Example 3.11 we estimated the correlation coefficient between two random variables and plotted the joint distribution. In Example 3.9 we defined a random variable as the sum of two other independent variables.
Back
Back to Contents
Solutions to Exercises
<a id ='sol_sum_max_dice'></a>
Exercise 3.2: Solution
End of explanation
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
Y = RV(P, max)
(X & Y).sim(10000).plot(type="tile")
(X & Y).sim(10000).plot(type="scatter", jitter=True)
Explanation: Back
<a id='sol_plot_joint_sum_max'></a>
Exercise 3.4: Solution
End of explanation
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Q = BoxModel([1, 2, 3, 4, 5, 6], size=2)
Y = RV(Q, sum)
X, Y = AssumeIndependent(X, Y)
(X & Y).sim(10000).plot(jitter=True)
Explanation: Back
<a id ='sol_assume_indep'></a>
Exercise 3.6: Solution
End of explanation
X, Y = RV(Binomial(n=4, p=0.25) * Binomial(n=3, p=0.7))
(X & Y).sim(10000).plot("tile")
Explanation: Back
<a id ='sol_indep_via_unif'></a>
Exercise 3.8: Solution
End of explanation
X, Y = RV(Uniform(a=2, b=5) * Uniform(a=1, b=3))
Z = X * Y
Z.sim(10000).plot()
Explanation: Back
<a id ='sol_prod_indep'></a>
Exercise 3.10: Solution
End of explanation
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
Y = RV(P, max)
xy = (X & Y).sim(10000)
xy.plot(jitter=True)
xy.corr()
Explanation: Note that Z takes values in the interval [2 * 1, 5 * 3] = [2, 15], but some values have greater density than others.
Back
<a id ='sol_corr_sum_max'></a>
Exercise 3.12: Solution
End of explanation
P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
Y = RV(P, max)
Z = Y / X
sims = Z.sim(10000)
sims.plot()
sims.count_leq(0.7)/10000
Explanation: Back
<a id ='sol_ratio_max_sum'></a>
Exercise 3.13: Solution
End of explanation
X, Y = RV(Uniform(0, 2) * Uniform(1, 3))
Z = X + Y
(X & Z).sim(10000).corr()
Explanation: Back
<a id ='sol_est_corr_unpack'></a>
Exercise 3.14: Solution
1) Estimate the correlation coefficient between X and Z.
End of explanation
(X & Z).sim(10000).plot()
Explanation: 2) Approximate the joint distribution of X and Z.
End of explanation |
8,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color:gray">ipyrad-analysis toolkit:</span> abba-baba
Step1: Set up and connect to the ipyparallel cluster
Depending on the number of tests, abba-baba analysis can be computationally intensive, so we will first set up a clustering backend and attach to it.
Step2: A tree-based hypothesis
abba-baba tests are explicitly tree-based, and so ipyrad requires that you enter a tree hypothesis in the form of a newick file. This is used by the baba tool to auto-generate hypotheses.
Load in your .loci data file and a tree hypothesis
We are going to use the shape of our tree topology hypothesis to generate 4-taxon tests to perform, therefore we'll start by looking at our tree and making sure it is properly rooted.
Step3: Short tutorial: calculating abba-baba statistics
Step4: Look at the results
By default we do not attach the names of the samples that were included in each test to the results table since it makes the table much harder to read, and we wanted it to look very clean. However, this information is readily available in the .tests attribute of the baba object as shown below. Also, we have made plotting functions to show this information clearly as well.
Step5: Plotting and interpreting results
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) with several other samples, but the signal is strongest with respect to 30556_thamno (tests 12-19). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 20, 24, 28, 34, and 35). Take note, the tests are indexed starting at 0.
Step6: generating tests
Because tests are generated based on a tree file, it will only generate tests that fit the topology of the tree. For example, the entries below generate zero possible tests because the two samples entered for P3 (the two thamnophila subspecies) are paraphyletic on the tree topology, and therefore cannot form a clade together.
Step7: If you want to get results for a test that does not fit on your tree you can always write the test out by hand instead of auto-generating it from the tree. Doing it this way is fine when you have few tests to run, but becomes burdensome when writing many tests.
Step8: Further investigating results with 5-part tests
You can also perform partitioned D-statistic tests like below. Here we are testing the direction of introgression. If the two thamnophila subspecies are in fact sister species then they would be expected to share derived alleles that arose in their ancestor and which would be introduced together if either one of them introgressed into a P. rex taxon. As you can see, test 0 shows no evidence of introgression, whereas test 1 shows that the two thamno subspecies share introgressed alleles that are present in two samples of rex relative to sample "35236_rex".
More on this further below in this notebook.
Step9: Full Tutorial
Creating a baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you create different copies of that object that will inherit its parameter settings, and which you can use to perform different tests on, like below.
Step10: Linking tests to the baba object
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
Step11: Other parameters
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', since they each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
Step12: Running the tests
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
Step13: The results table
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them from the .tests attribute as a dictionary, or as .taxon_table which returns it as a dataframe. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
Step14: Auto-generating tests
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree, and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.
Step15: Running the tests
The .run() command will run the tests linked to your analysis object. An ipyclient object is required to distribute the jobs in parallel. The .plot() function can then optionally be used to visualize the results on a tree. Or, you can simply look at the results in the .results_table attribute.
Step16: More about input file paths (i/o)
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do at least need a hypothesis for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
Step17: (optional): root the tree
Step18: Interpreting results
You can see in the results_table below that the D-statistic values range around 0.0-0.15 in these tests. These values are not too terribly informative, and so we instead generally focus on the Z-score representing how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with pretty equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the test we setup, above). | Python Code:
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
import toyplot
print(ipa.__version__)
print(toyplot.__version__)
print(toytree.__version__)
Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> abba-baba
The baba tool can be used to measure abba-baba statistics across many different hypotheses on a tree, to easily group individuals into populations for measuring abba-baba using allele frequencies, and to summarize or plot the results of many analyses.
Load packages
End of explanation
# In a terminal on your computer you must launch the ipcluster instance by hand, like this:
# `ipcluster start -n 40 --cluster-id="baba" --daemonize`
# Now you can create a client for the running ipcluster
ipyclient = ipp.Client(cluster_id="baba")
# How many cores are you attached to?
len(ipyclient)
Explanation: Set up and connect to the ipyparallel cluster
Depending on the number of tests, abba-baba analysis can be computationally intensive, so we will first set up a clustering backend and attach to it.
End of explanation
## ipyrad and raxml output files
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bipartitions.pedic"
## parse the newick tree, re-root it, and plot it.
rtre = toytree.tree(newick).root(wildcard="prz")
rtre.draw(
height=350,
width=400,
node_labels=rtre.get_node_values("support")
)
## store rooted tree back into a newick string.
newick = rtre.write()
Explanation: A tree-based hypothesis
abba-baba tests are explicitly tree-based, and so ipyrad requires that you enter a tree hypothesis in the form of a newick file. This is used by the baba tool to auto-generate hypotheses.
Load in your .loci data file and a tree hypothesis
We are going to use the shape of our tree topology hypothesis to generate 4-taxon tests to perform, therefore we'll start by looking at our tree and making sure it is properly rooted.
End of explanation
## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)
## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno"],
})
## show the first 3 tests
bb.tests[:3]
## run all tests linked to bb
bb.run(ipyclient)
## show first 5 results
bb.results_table.head()
Explanation: Short tutorial: calculating abba-baba statistics
To give a gist of what this code can do, here is a quick tutorial version, each step of which we explain in greater detail below. We first create a 'baba' analysis object that is linked to our data file, in this example we name the variable bb. Then we tell it which tests to perform, here by automatically generating a number of tests using the generate_tests_from_tree() function. And finally, we calculate the results and plot them.
End of explanation
## save all results table to a tab-delimited CSV file
bb.results_table.to_csv("bb.abba-baba.csv", sep="\t")
## show the results table sorted by index score (Z)
sorted_results = bb.results_table.sort_values(by="Z", ascending=False)
sorted_results.head()
## get taxon names in the sorted results order
sorted_taxa = bb.taxon_table.iloc[sorted_results.index]
## show taxon names in the first few sorted tests
sorted_taxa.head()
Explanation: Look at the results
By default we do not attach the names of the samples that were included in each test to the results table since it makes the table much harder to read, and we wanted it to look very clean. However, this information is readily available in the .tests attribute of the baba object as shown below. Also, we have made plotting functions to show this information clearly as well.
End of explanation
## plot results on the tree
bb.plot(height=850, width=700, pct_tree_y=0.2, pct_tree_x=0.5, alpha=4.0);
Explanation: Plotting and interpreting results
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) with several other samples, but the signal is strongest with respect to 30556_thamno (tests 12-19). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 20, 24, 28, 34, and 35). Take note, the tests are indexed starting at 0.
End of explanation
## this is expected to generate zero tests
aa = bb.copy()
aa.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno", "30556_thamno"],
})
Explanation: generating tests
Because tests are generated based on a tree file, it will only generate tests that fit the topology of the tree. For example, the entries below generate zero possible tests because the two samples entered for P3 (the two thamnophila subspecies) are paraphyletic on the tree topology, and therefore cannot form a clade together.
End of explanation
## writing tests by hand for a new object
aa = bb.copy()
aa.tests = [
{"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno", "30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["39618_rex", "38362_rex"]},
{"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno", "30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["35236_rex"]},
]
## run the tests
aa.run(ipyclient)
aa.results_table
Explanation: If you want to get results for a test that does not fit on your tree you can always write the test out by hand instead of auto-generating it from the tree. Doing it this way is fine when you have few tests to run, but becomes burdensome when writing many tests.
End of explanation
## further investigate with a 5-part test
cc = bb.copy()
cc.tests = [
{"p5": ["32082_przewalskii", "33588_przewalskii"],
"p4": ["33413_thamno"],
"p3": ["30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["39618_rex", "38362_rex"]},
{"p5": ["32082_przewalskii", "33588_przewalskii"],
"p4": ["33413_thamno"],
"p3": ["30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["35236_rex"]},
]
cc.run(ipyclient)
## the partitioned D results for two tests
cc.results_table
## and view the 5-part test taxon table
cc.taxon_table
Explanation: Further investigating results with 5-part tests
You can also perform partitioned D-statistic tests like below. Here we are testing the direction of introgression. If the two thamnophila subspecies are in fact sister species then they would be expected to share derived alleles that arose in their ancestor and which would be introduced together if either one of them introgressed into a P. rex taxon. As you can see, test 0 shows no evidence of introgression, whereas test 1 shows that the two thamno subspecies share introgressed alleles that are present in two samples of rex relative to sample "35236_rex".
More on this further below in this notebook.
End of explanation
## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)
## create two other copies
bb = aa.copy()
cc = aa.copy()
## print these objects
print(aa)
print(bb)
print(cc)
Explanation: Full Tutorial
Creating a baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you create different copies of that object that will inherit its parameter settings, and which you can use to perform different tests on, like below.
End of explanation
aa.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["29154_superba"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
bb.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["30686_cyathophylla"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
cc.tests = [
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41954_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41478_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
]
Explanation: Linking tests to the baba object
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
End of explanation
## print params for object aa
aa.params
## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4":2, "p3":1, "p2":1, "p1":1}
bb.params
Explanation: Other parameters
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', since they each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
End of explanation
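The per-taxon filtering logic described above can be sketched roughly like this (a hypothetical helper with made-up sample names, for illustration only; this is not ipyrad's actual implementation):

```python
def locus_passes(samples_with_data, test, mincov):
    """Return True if every tip taxon meets its minimum coverage at a locus.

    samples_with_data: set of sample names that have data at this locus
    test: dict mapping 'p1'..'p4' to lists of sample names
    mincov: a single int applied to all tips, or a dict of per-tip minimums
    """
    for tip, names in test.items():
        need = mincov[tip] if isinstance(mincov, dict) else mincov
        if sum(1 for name in names if name in samples_with_data) < need:
            return False
    return True

test = {"p4": ["przA", "przB"], "p3": ["thamno1"], "p2": ["rex1"], "p1": ["rex2"]}
locus = {"przA", "thamno1", "rex1", "rex2"}          # przB has no data here
print(locus_passes(locus, test, 1))                  # kept: each tip has >= 1
print(locus_passes(locus, test, {"p4": 2, "p3": 1, "p2": 1, "p1": 1}))  # dropped
```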
## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)
Explanation: Running the tests
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
End of explanation
## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)
## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")
## show the results in notebook
cc.results_table
Explanation: The results table
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them from the .tests attribute as a dictionary, or as .taxon_table which returns it as a dataframe. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
End of explanation
## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick
## generate all possible tests
dd.generate_tests_from_tree()
## a dict of constraints
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["40578_rex", "35855_rex"],
}
## generate tests with constraints
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=False,
)
## 'exact' constraints are even more constrained
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=True,
)
Explanation: Auto-generating tests
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree, and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.
End of explanation
## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4);
dd.results_table
Explanation: Running the tests
The .run() command will run the tests linked to your analysis object. An ipyclient object is required to distribute the jobs in parallel. The .plot() function can then optionally be used to visualize the results on a tree. Or, you can simply look at the results in the .results_table attribute.
End of explanation
## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"
## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"
Explanation: More about input file paths (i/o)
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do at least need a hypothesis for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
End of explanation
## load in the tree
tre = toytree.tree(newick)
## set the outgroup either as a list or using a wildcard selector
tre.root(names=["32082_przewalskii", "33588_przewalskii"])
tre.root(wildcard="prz")
## draw the tree
tre.draw(width=400)
## save the rooted newick string back to a variable and print
newick = tre.newick
Explanation: (optional): root the tree
For abba-baba tests you will pretty much always want your tree to be rooted, since the test relies on an assumption about which alleles are ancestral. You can use our simple tree plotting library toytree to root your tree. This library uses Toyplot as its plotting backend, and ete3 as its tree manipulation backend.
Below I load in a newick string and root the tree on the two P. przewalskii samples using the root() function. You can either enter the names of the outgroup samples explicitly or enter a wildcard to select them. We show the rooted tree from a tetrad analysis below. The newick string of the rooted tree can be saved or accessed by the .newick attribute, like below.
End of explanation
## show the results table
print(dd.results_table)
Explanation: Interpreting results
You can see in the results_table below that the D-statistic values range around 0.0-0.15 in these tests. These values are not too terribly informative, and so we instead generally focus on the Z-score representing how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with pretty equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the test we setup, above).
End of explanation |
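Schematically, the D-statistic and its bootstrap Z-score are computed along these lines (a NumPy sketch with fabricated per-locus site counts, for illustration only; the real calculation uses SNP frequencies as noted above):

```python
import numpy as np

rng = np.random.default_rng(2)
# Fabricated per-locus ABBA/BABA counts, just to show the mechanics
abba = rng.poisson(0.6, 5000).astype(float)
baba = rng.poisson(0.5, 5000).astype(float)

def dstat(a, b):
    return (a.sum() - b.sum()) / (a.sum() + b.sum())

d_obs = dstat(abba, baba)

# Bootstrap: resample loci with replacement and recompute D each time
nboot = 1000
boots = np.empty(nboot)
for i in range(nboot):
    idx = rng.integers(0, abba.size, abba.size)
    boots[i] = dstat(abba[idx], baba[idx])

z = abs(d_obs) / boots.std()   # deviation of D from 0, in bootstrap SDs
print(d_obs, z)
```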
8,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectral Energy Density Fitting and Dark Matter Limit Extraction
Motivation
Now we are going to discuss how we can build a summary data product that can be used to quickly fit a wide variety of different DM spectra.
Recall that the previous example involved fitting the data across all energy bins using a powerlaw with index -2 for the dark matter target.
What we would like to do is extract the spectrum of any excess (i.e., the flux or limits associated with the various energy bins) and then fit the various DM model spectra to the observed spectra.
Something like this
Step1: Ok, the file contains 24 sets of curves, one for each energy bin.
Let's have a look at one of the sets of results.
Step2: So, as stated, that file contains everything we need to make the Castro plot.
I've put some utilities in SedUtils.py. These are functions to do things like interpolate the log-likelihood in each energy bin and then sum them together. I've added a small python class to manage things.
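The idea behind those utilities can be sketched like this (hypothetical shapes and names, not the actual SedUtils API): build one interpolator per energy bin from its (normalization, delta-log-likelihood) scan, then sum the interpolated curves to get the combined likelihood profile:

```python
import numpy as np
from scipy.interpolate import interp1d

# Fake scans: for each of 24 energy bins, delta-log-likelihood vs normalization
norms = np.linspace(0.0, 10.0, 41)
rng = np.random.default_rng(3)
best_norms = rng.uniform(0.5, 2.0, 24)
scans = [0.5 * ((norms - b) / 1.5) ** 2 for b in best_norms]

# One interpolator per bin; the global NLL is the sum over bins, with a
# spectral model deciding how a global normalization maps into each bin
interps = [interp1d(norms, s, kind="quadratic") for s in scans]
weights = np.full(24, 1.0)   # toy spectral weights, one per bin

def total_nll(global_norm):
    return float(sum(f(global_norm * w) for f, w in zip(interps, weights)))

print(total_nll(0.0), total_nll(1.0), total_nll(5.0))
```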
Step3: Ok, let's go ahead and take a look at the SED that we have.
Step4: Ok, recall that we should never plot upper limits without also giving information about the expected upper limits. There is a file in the "ancil" sub-directory that has the quantiles for the upper limits from 300 Monte Carlo simulations of the analysis.
A pretty standard way to give a sense of the consistency of the results is to show the so-called "Brazil" bands for the upper limits, i.e., expectation bands made from simulating the analysis chain numerous times. Typically people show the 1 and 2 $\sigma$ expectation bands and plot them in yellow and green, thus the name "Brazil".
Step5: Question
Step6: Similarly to the previous example, the SED object will make a function that we can then pass to the optimizer; this is the NLL_func function.
Step7: There is also a Minimize function that finds the normalization value that minimizes the negative log likelihood
Step8: So, it looks like there is no signal and we should set upper limits. As before we construct the upper limits at the point where the delta log-likelihood reaches 1.35.
Step9: In the SedUtils.py file you will find a small piece of code to loop over all the channels and masses and to write the output to ../results/draco_dm_results.yaml. Let's go ahead and open that file.
Step10: Displaying the results
Recall the point about how presenting upper limits alone gets rid of the information about the uncertainties and about whether the result is consistent with the null hypothesis.
Once again, you should never present upper limits without also presenting something that allows people to determine if they think the result is consistent with the null hypothesis.
You can find the quantiles calculated from 300 Monte Carlo simulated instances of the analysis chain in the file draco_spectral_mc_bands.yaml in the ancil folder.
Step11: Ok, as you can see, the file has a lot more information than the simple limits. The various types of limits presented in the file are | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import LikeFitUtils as lfu
import SedUtils as SED
# lets open the file and have a look
import yaml
f_sed = yaml.safe_load(open("results/draco_sed.yaml"))  # safe_load: plain yaml.load requires an explicit Loader in PyYAML >= 5
len(f_sed)
Explanation: Spectral Energy Density Fitting and Dark Matter Limit Extraction
Motivation
Now we are going to discuss how we can build a summary data product that can be used to quickly fit a wide variety of different DM spectra.
Recall that the previous example involved fitting the data across all energy bins using a powerlaw with index -2 for the dark matter target.
What we would like to do is extract the spectrum of any excess (i.e., the flux or limits associated with the various energy bins) and then fit the various DM model spectra to the observed spectra.
Something like this:
<img src="figures/ADW_spectrum.png" width=400px>
So then we would be fitting various DM models against the spectral data points, rather than the counts data, as we did in the previous example. Typically we might use the uncertainties of the data points and do a $\chi^2$ fit for the DM spectrum.
There are two main issues with that approach.
Because many of our energy bins have very low statistics, the symmetric error bars that you would want to use, which are obtained by approximating the log-Likelihood surface as a parabola near the minimum, are not actually a good representation of the true log-Likelihood.
Since we are doing a search for a signal of new physics, it is likely that in many of the energy bins we will actually be reporting upper limits instead of flux points with error bars. Because upper limits combine two pieces of information (the mean value and the uncertainty) into a single number (the upper limit), there isn't a good way to combine upper limits. Consider, for example, two measurements, the first being $1.0 \pm 0.5$ and the second being $1.9 \pm 0.05$. If we took upper limits as the best-fit value plus 2 sigma, both results would give us upper limits of 2.0. What we have lost in reporting only the upper limit is the information about whether the data are consistent with the null hypothesis.
So the best way to combine the information from the various energy bins is to combine the likelihoods.
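The two toy measurements above make this concrete. Here is a small illustrative sketch (assuming Gaussian likelihoods with the numbers from the text): the two naive upper limits are identical, while the summed log-likelihood keeps the fact that the second measurement is nowhere near zero:

```python
import numpy as np

# Two toy Gaussian measurements: both have a naive "mean + 2 sigma"
# upper limit of 2.0, yet carry very different information.
means  = np.array([1.0, 1.9])
sigmas = np.array([0.5, 0.05])

naive_uls = means + 2.0 * sigmas          # both equal 2.0

# Summing the Gaussian log-likelihoods keeps the full information:
# the combined parabola has an inverse-variance-weighted mean and width.
w = 1.0 / sigmas**2
mu_comb    = np.sum(w * means) / np.sum(w)
sigma_comb = 1.0 / np.sqrt(np.sum(w))

print(naive_uls)              # indistinguishable limits
print(mu_comb, sigma_comb)    # ~1.89 +/- ~0.05
print(mu_comb / sigma_comb)   # ~38 sigma from zero: clearly not a null result
```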
Overview of the Methodology
First we need to extract the log-likelihood versus flux in each energy bin separately. In any one energy bin, the analysis is just the same as what we did in the previous example, except that we only use the data and model in a single energy bin.
For a single energy bin the results may look something like this:
<img src="figures/ADW_1bin.png" width=400px>
Where the delta log-likelihood is being plotted on the color scale.
For two energy bins the results might look like this:
<img src="figures/ADW_2bins.png" width=400px>
And finally, for all of the energy bins the results might look like this:
<img src="figures/ADW_allBins.png" width=400px>
This last figure is called a "Castro" plot.
Basically, the dark red bands show the regions favored by the data, and the other colors show the regions increasingly disfavored by the data.
Here is another version of the same plot, where we have added the 95% CL upper limits in each of the energy bands.
<img src="figures/ADW_allBins_limits.png" width=400px>
Recall: the confidence level here is not quantifying the probability of the energy flux taking a value below the given 95% limit - that would be a Bayesian statement.
Question:
What is the correct phrasing we should use when describing the meaning of these upper limits?
If we assume a particular spectral form for the DM signal, we can use the data that went into the Castro plot to construct the log-likelihood as a function of the parameters of the function we assumed. In our case we will be assuming the annihilation channel and mass of the DM, so the only free parameter is the normalization of the signal.
<img src="figures/ADW_castro_spectrum.png" width=400px>
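The machinery behind this step can be sketched in a few lines. The helper below is a hypothetical stand-in, not the actual SedUtils API: it interpolates each energy bin's likelihood curve and sums them, evaluated at the model's per-bin flux scaled by the single free normalization:

```python
import numpy as np
from scipy.interpolate import interp1d

def make_nll(flux_grids, nll_grids, model_fluxes):
    # flux_grids / nll_grids: per-bin scan points of the negative
    # log-likelihood; model_fluxes: the assumed DM spectrum integrated in
    # each bin. All inputs here are illustrative placeholders.
    interps = [interp1d(f, n, bounds_error=False, fill_value=(n[0], n[-1]))
               for f, n in zip(flux_grids, nll_grids)]

    def nll(norm):
        # total NLL(norm) = sum over bins of NLL_i(norm * model_flux_i)
        return float(sum(itp(norm * mf)
                         for itp, mf in zip(interps, model_fluxes)))
    return nll
```

With the real draco_sed.yaml contents, the per-bin grids would come from each bin's flux (or eflux) and logLike arrays, up to the sign convention between log-likelihood and negative log-likelihood.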
Here is what the 95% CL upper limits would look like in this simulation for DM annihilating to b-quarks, for several different DM masses.
<img src="figures/ADW_spectra_limits.png" width=400px>
By way of comparison, here are the upper limits on the spectrum you would get if you simply required that the curve did not exceed any of the single-bin upper limits, which, as you can see, are markedly worse.
<img src="figures/ADW_binLimits_spectrum.png" width=400px>
Question:
Why does this plot not tell us the correct upper limit on the spectrum normalization?
Finally, here is what a positive detection of a signal might look like:
<img src="figures/ADW_detection.png" width=400px>
Our example
There are two files that we will use to work through this example, and we should take a close look at each.
The first is the draco_sed.yaml file in the results directory. It contains the likelihood versus flux results from the same simulation of 6 years of data we used in the first example.
End of explanation
f_sed[0].keys()
print("Energy range of bin 0 is %.1e to %.1e MeV" % (f_sed[0]['emin'], f_sed[0]['emax']))
print("Flux values scanned range from %.1e to %.1e ph cm^-2 s^-1" % (f_sed[0]['flux'][0], f_sed[0]['flux'][-1]))
print("The corresponding energy flux values range from %.1e to %.1e MeV cm^-2 s^-1" % (f_sed[0]['eflux'][0], f_sed[0]['eflux'][-1]))
print("The resulting delta log-Likelihood values at the edges of the scan are %.1f and %.1f" % (f_sed[0]['logLike'][0], f_sed[0]['logLike'][-1]))
print("The conversion factor from energy flux to number of predicted counts is %.1e" % f_sed[0]['eflux2npred'])
Explanation: Ok, the file contains 24 sets of curves, one for each energy bin.
Let's have a look at one of the sets of results.
End of explanation
import SedUtils as SED
sed = SED.SED(f_sed)
help(sed)
Explanation: So, as stated, that file contains everything we need to make the Castro plot.
I've put some utilities in SedUtils.py. These are functions to do things like interpolate the log-likelihood in each energy bin and then sum them together. I've added a small python class to manage things.
End of explanation
sed.binByBinUls = None
binByBinULs = sed.BinByBinULs()
figSED,axSED = SED.PlotSED(sed.energyBins,binByBinULs)
Explanation: Ok, let's go ahead and take a look at the SED that we have.
End of explanation
# let's get the file with the expected upper limits
f_sed_bands = yaml.safe_load(open("ancil/draco_sed_mc_bands.yaml"))
figSED2,axSED2 = SED.PlotSED(sed.energyBins,binByBinULs,f_sed_bands)
Explanation: Ok, recall that we should never plot upper limits without also giving information about the expected upper limits. There is a file in the "ancil" sub-directory that has the quantiles for the upper limits from 300 Monte Carlo simulations of the analysis.
A pretty standard way to give a sense of the consistency of the results is to show the so-called "Brazil" bands for the upper limits, i.e., expectation bands made from simulating the analysis chain numerous times. Typically people show the 1 and 2 $\sigma$ expectation bands and plot them in yellow and green, thus the name "Brazil".
End of explanation
f_dmspec = yaml.safe_load(open("ancil/DM_spectra.yaml"))
print("Channels loaded are", list(f_dmspec.keys()))
masses_bb = sorted(f_dmspec['bb'].keys())
masses_tau = sorted(f_dmspec['tautau'].keys())
fluxVals = f_dmspec['bb'][100]
print("Masses for the bb channel are", masses_bb)
print("Masses for the tautau channel are", masses_tau)
print("Flux values for 100GeV bb dark matter:\n", fluxVals)
Explanation: Question:
Does this SED plot look reasonable to you?
The second file is the DM_spectra.yaml file in the "ancil" directory. This file gives the DM spectra for several different masses for the $b\bar{b}$ and $\tau^+\tau^-$ channels. I made this file specifically to match our analysis and our energy binning.
End of explanation
help(sed.NLL_func)
nll_func = sed.NLL_func(fluxVals)
nll_null = nll_func(0.)
nll_test = nll_func(1.) # Warning, this is in units of 10^-26 cm^3 s-1
print(nll_null, nll_test)
Explanation: Similarly to the previous example, the SED object will make a function that we can then pass to the optimizer; this is the NLL_func function.
End of explanation
result = sed.Minimize(fluxVals,1.0)
mle = result[0][0]
nll_mle = result[1]
ts = 2.*(nll_null-nll_mle)
print("Best-fit value %.1f" % (mle))
print("Test Statistic %.1f" % (ts))
Explanation: There is also a Minimize function that finds the normalization value that minimizes the negative log likelihood:
End of explanation
import LikeFitUtils as lfu
xbounds = (1e-4,1e1)
error_level = 1.35
ul = lfu.SolveForErrorLevel(nll_func,nll_mle,error_level,mle,xbounds)
print("Upper limit on <sigma v> is %.2e cm^3 s^-1" % (1e-26 * ul))
Explanation: So, it looks like there is no signal and we should set upper limits. As before we construct the upper limits at the point where the delta log-likelihood reaches 1.35.
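The value 1.35 is half of 2.71, the 95% quantile of a chi-squared distribution with one degree of freedom, which gives a one-sided 95% confidence level. As a toy illustration (not the LikeFitUtils implementation), the crossing point can be found with a standard root-finder on a parabolic likelihood:

```python
from scipy.optimize import brentq

# Delta(logL) = 2.71 / 2 = 1.35 <-> one-sided 95% CL for one parameter
error_level = 1.35

# Toy negative log-likelihood: a parabola with its minimum at norm = 0.2
nll = lambda x: 5.0 * (x - 0.2) ** 2
mle, nll_mle = 0.2, 0.0

# Upper limit: the point above the MLE where the NLL has risen by 1.35
ul = brentq(lambda x: nll(x) - (nll_mle + error_level), mle, 10.0)
print(ul)   # 0.2 + sqrt(1.35 / 5) ~ 0.72
```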
End of explanation
f_dmlims = yaml.safe_load(open("results/draco_dm_results.yaml"))
print("Channels are:", list(f_dmlims.keys()))
print("Data saved for each channel:", list(f_dmlims['bb'].keys()))
print("Upper limits for bb channel are:\n", 1e-26 * np.array(f_dmlims['bb']['UL95']))
Explanation: In the SedUtils.py file you will find a small piece of code to loop over all the channels and masses and to write the output to ../results/draco_dm_results.yaml. Let's go ahead and open that file.
End of explanation
# Ok, first we will load the bands
bands = yaml.safe_load(open("ancil/draco_spectral_mc_bands.yaml"))
print("MC expectation bands for channels: ", list(bands.keys()))
print("Quantities available are: \n", list(bands['bb'].keys()))
Explanation: Displaying the results
Recall the point about how presenting upper limits alone gets rid of the information about the uncertainties and about whether the result is consistent with the null hypothesis.
Once again, you should never present upper limits without also presenting something that allows people to determine if they think the result is consistent with the null hypothesis.
You can find the quantiles calculated from 300 Monte Carlo simulated instances of the analysis chain in the file draco_spectral_mc_bands.yaml in the ancil folder.
End of explanation
# Ok, let's go ahead and plot the limits against the expectation
f,a = SED.PlotLimits(f_dmlims['bb']['Masses'],f_dmlims['bb']['UL95'],bands['bb']['ulimits'])
Explanation: Ok, as you can see, the file has a lot more information than the simple limits. The various types of limits presented in the file are:
ulimits and ulimits99: The simple upper limits for 95% and 99% confidence levels, i.e., the thing we want.
pulimits and pulimits99: The upper limits profiled over the uncertainty in the J-factor of Draco
p2ulimits and p2ulimits99: The upper limits profiled over the uncertainty in the J-factor of Draco, using a different representation of the uncertainty of the J-factor
bulimits and bulimits99: The Bayesian upper limits, calculated with a flat prior.
b2ulimits and b2ulimits99: The Bayesian upper limits, calculated with an exponential prior (appropriate for Poisson data, as we have here)
For each type of limit the file contains information about several quantiles from the Monte Carlo simulation runs.
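As a hedged sketch of how such quantile information can be produced, the bands are simply percentiles of the simulated limits across the Monte Carlo ensemble (the numbers below are purely synthetic stand-ins for the 300 realizations):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend upper limits from 300 simulated null realizations of the
# analysis chain, for a grid of 5 DM masses (synthetic numbers).
mc_limits = rng.lognormal(mean=0.0, sigma=0.3, size=(300, 5))

# Brazil-band quantiles: the median plus the 1- and 2-sigma envelopes
qs = [2.5, 16.0, 50.0, 84.0, 97.5]
bands = np.percentile(mc_limits, qs, axis=0)   # shape: (5 quantiles, 5 masses)
print(bands.shape)
```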
End of explanation |
8,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thesis 2019 symposium clinical workshop
by Cyrille BONAMY, Antoine MATHIEU and Julien CHAUCHAT (LEGI, University of Grenoble Alpes, GINP/CNRS, Grenoble, France)
Introduction
The aim of this practical session is to teach you how to use the Eulerian-Eulerian two-phase flow model sedFOAM through a series of tutorials distributed with the solver on github at https://github.com/sedfoam/sedfoam
Step1: In the following block, some basic configuration parameters are set up for figures
Step2: In the next block, experimental data from Pham Van Bang et al. (2006) are read, as well as sedFoam results from the 1DSedim case, using fluidfoam
Step3: In the next block, the experimental and numerical data are postprocessed to extract the interface positions
Step4: The settling curves plotted above clearly demonstrate that the model reproduces the sedimentation of particles very well. This problem has been known for a long time and can be solved analytically using the method of characteristics (Kynch, 1952). The major challenge in the Eulerian-Eulerian two-phase flow modeling framework is to account for the elastic part of the solid phase stress (the so-called effective stress in soil mechanics) in a viscous framework. This is done here by using the Johnson and Jackson (1987) model, but this is clearly not optimal; other modeling options are possible and should be developed in the future.
In order to further validate the model we can plot the concentration profiles at different instants during the sedimentation and compare them with experiments. This is done by the python code in the next block
Step5: As you can see, the interface dynamics are well captured by the two-phase model; however, an oscillation of concentration appears at the upper interface, which is not physical. This is probably due to the hyperbolic nature of the mass conservation equation for the solid phase. The steep concentration gradient can probably be assimilated to a shock and would require a shock-capturing scheme. An alternative would be to add diffusion to the mass conservation equation to make it parabolic. Anyway, the goal of this simulation was to obtain a stable deposited sediment bed, and this goal is achieved in the last panel.
Now before turning on the flow we want to make sure the particle pressure distribution is well reproduced by the model. In the following block, the particle pressure profile as well as excess pore pressure profile for the fluid phase are plotted at different instants.
Step6: In these graphs, you can see that while the particle pressure builds up in the deposited particle layer, the excess pore pressure dissipates. This is a well-known problem in geomechanics, and the time lag between excess pore pressure dissipation and the particle pressure build-up is controlled by the permeability of the porous medium. The final particle pressure distribution is hydrostatic, and its vertical gradient compensates the buoyant weight of the particles. The accurate prediction of the particle pressure distribution is of utmost importance for simulating bedload transport, as the granular rheology is based on a Coulomb frictional model in which the yield stress is given as mu_s p^p.
2. Laminar bedload transport
We now turn to the more complex problem of laminar bedload. In the previous example, the particles settled down to form a rigid bed (as illustrated above). Building upon this configuration, we can now add a streamwise pressure gradient to drive a fluid flow in the x direction. This can be done by modifying the file
Step7: In the following, the numerical solution is read using fluidfoam and the concentration, velocity and pressure profiles are compared with this analytical solution.
Step8: As you can see, the velocity and particle pressure profiles are in very good agreement with the analytical solution, even though the concentration exhibits a slight discrepancy at the bed interface. This is because the continuum two-phase flow model smooths out the sharp concentration gradient and the empirical particle pressure model introduces a very small vertical concentration gradient inside the sediment bed.
Overall the numerical results are satisfactory and we can now complexify the model closures by moving from Coulomb rheology to the mu(Iv) rheology. This is done by changing the keyword FrictionModel in constant/granularRheologyProperties from Coulomb to MuIv.
Then you need to modify the file system/controlDict to run from 1900 s to 2000 s.
Go to the terminal and run
sedFoam_rbgh.
Now we will run the same postprocessing script but this time the numerical solution should be different from the analytical one as we changed the rheological model.
Step9: As you can see, by changing the rheological model, the velocity profile in the moving granular layer changes from parabolic (Coulomb) to exponential (mu(Iv)) which is consistent with experimental observations. Also the fact that the velocity magnitude is reduced shows that the overall dissipation induced by the mu(Iv) rheology is much higher than with the Coulomb rheology.
So far the concentration profile remained fixed because no dilatancy effect has been introduced. In the next step, we propose to turn on the dilatancy model of the granular rheology (phi(Iv)) by changing the keyword PPressureModel in constant/granularRheologyProperties from none to MuIv.
Before running the model you need modify system/controlDict to run from 2000 s to 2100 s.
Go to the terminal and run
sedFoam_rbgh.
The next python code block allows you to plot the results
Step10: As you can immediately see, the concentration profile has changed and now exhibits a smoother vertical gradient between the fixed particle bed layer and the pure fluid layer. This modification is due to the shear induced pressure contribution represented as a dashed blue line in the right panel. This pressure contribution actually non-linearly couples the streamwise and wallnormal momentum balance equations for the solid phase as the shear induced pressure is proportional to the velocity shear rate squared. When dilatancy effect are added to the model, the dissipation is again increased leading to lower fluid velocity in the the pure fluid layer.
So far turbulence was not relevant as the bulk Reynolds number remained below the critical value of 3000. If you want to get familiar with turbulence modeling in the two-phase flow model framework please read Chauchat et al. (2017) and run the tutorial 1DSheetFlow available in the github repository.
3. 2D Scour around a horizontal cylinder
SedFoam can be used to simulate more complex multi-dimensional configurations like scour under a horizontal pipeline.
The erosion under a submarine pipeline can be decomposed into three steps | Python Code:
#
# Import section
#
import subprocess
import sys
import numpy as np
import fluidfoam
from pylab import figure, subplot, axis, xlabel, ylabel, show, savefig, plot
from pylab import title, matplotlib
import matplotlib.gridspec as gridspec
import matplotlib as mpl
Explanation: Thesis 2019 symposium clinical workshop
by Cyrille BONAMY, Antoine MATHIEU and Julien CHAUCHAT (LEGI, University of Grenoble Alpes, GINP/CNRS, Grenoble, France)
Introduction
The aim of this practical session is to teach you how to use the Eulerian-Eulerian two-phase flow model sedFOAM through a series of tutorials distributed with the solver on github at https://github.com/sedfoam/sedfoam. The outline of the session is presented in this jupyter-notebook and the postprocessing will be performed using an openfoam package, fluidfoam, developed in python at LEGI https://bitbucket.org/sedfoam/fluidfoam. The documentation of fluidfoam is available here https://fluidfoam.readthedocs.io/en/latest/
During the session, you will have to type some commands in the system terminal to run the model. At the end of the session, two more advanced multi-dimensional configurations will be presented for which the model results are provided (no need to run the model) and the paraview software will be used for rendering.
1. Sedimentation of particles at low particulate Reynolds number
This configuration has been presented in the lecture and more technical details are available in the sedfoam documentation available here:
Document/sedfoam/doc
or online at: http://sedfoam.github.io/sedfoam/tutorials.html
The input files of this 1D tutorial are located here:
Document/sedfoam/tutorials/1DSedim
Open a terminal by either using the terminal from the system or the jupyter-lab terminal (File, New, Terminal). Change directory to the case file location using:
cd Document/sedfoam/tutorials/1DSedim
In order to run the model use the following sequence of commands:
Generate the mesh
blockMesh
Copy the initial condition folder stored in 0_org
cp -r 0_org 0
Run sedFoam
sedFoam_rbgh
The code will simulate 1800 seconds of physical time in a couple of minutes (the actual duration depends on your computer capacity).
A postprocessing python script plot_tutoSedimentation.py using fluidfoam is available in Document/sedfoam/tutorials/Py/ but for the purpose of this practical session the script is directly embedded in the jupyter notebook together with explanations about what each part of the script is doing.
In the first section of the python script, some packages are loaded. numpy is a very popular package for matrix manipulation and operations, the fluidfoam package allows you to read OpenFOAM output, and the pylab and matplotlib packages allow you to make figures. As you will see, the syntax is very similar to matlab; the major differences are that matrix elements are accessed using [] instead of () and that indices start at 0 instead of 1.
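For readers coming from matlab, the two syntax points in a nutshell:

```python
import numpy as np

a = np.array([10.0, 20.0, 30.0])
print(a[0])     # brackets, not parentheses; indices start at 0 -> 10.0
print(a[-1])    # negative indices count from the end -> 30.0
print(a[0:2])   # slices exclude the upper bound -> [10. 20.]
```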
End of explanation
#
# Change fontsize
#
matplotlib.rcParams.update({'font.size': 20})
mpl.rcParams['lines.linewidth'] = 3
mpl.rcParams['lines.markersize'] = 10
Explanation: In the following block, some basic configuration parameters are set up for figures
End of explanation
#########################################
#
# Loading experimental results
#
exec(open("Py/DATA/exp_lmsgc.py").read())
#########################################
# Loading OpenFoam results
#
case = '1DSedim'
sol = case + '/'
#
# Mesh size
#
Nx = 1
Ny = 120
Nz = 1
#
# Reading SedFoam results using openFoam command 'foamListTimes'
#
try:
proc = subprocess.Popen(
['foamListTimes', '-case', sol], stdout=subprocess.PIPE)
# ['foamListTimes', '-withZero', '-case', sol], stdout=subprocess.PIPE)
except OSError:  # e.g. foamListTimes not found on the PATH
print("foamListTimes : command not found")
print("Have you loaded the OpenFOAM environment?")
sys.exit(0)
# sort and split the resulting string
output = proc.stdout.read()
tread = output.decode().rstrip().split('\n')
# remove last item and create matrices to store the results
del tread[-1]
Nt = len(tread)
time = np.zeros(Nt)
X, Y, Z = fluidfoam.readmesh(sol)
alphat = np.zeros((Ny, Nt))
pt = np.zeros((Ny, Nt))
pfft = np.zeros((Ny, Nt))
# loop of time list 'tread' items
k = -1
for t in tread:
print("Reading time: %s s" % t)
k = k + 1
alphat[:, k] = fluidfoam.readscalar(sol, t + '/', 'alpha_a')
pt[:, k] = fluidfoam.readscalar(sol, t + '/', 'p_rbgh')
pfft[:, k] = fluidfoam.readscalar(sol, t + '/', 'pff')
time[k] = float(t)
Explanation: In the next block, experimental data from Pham Van Bang et al. (2006) are read, as well as sedFoam results from the 1DSedim case, using fluidfoam
End of explanation
#
# Figure size
#
figwidth = 12
figheight = 6
#
# parameters
#
zmin = 0.
zmax = np.max(Y)
tmax = 1800.
tadj = 172.
#
# calcul zint et zint2
#
if Nt > 1:
asint2 = 0.55
asint = 0.25
zint = np.zeros(Nt)
zint2 = np.zeros(Nt)
for i in np.arange(Nt):
# zint
toto = np.where(alphat[:, i] < asint)
if np.size(toto) == 0:
zint[i] = Y[Ny - 1]
else:
zint[i] = Y[toto[0][0]]
# zint2
toto2 = np.where(alphat[:, i] <= asint2)
if np.size(toto2) == 0:
zint2[i] = Y[0]
else:
zint2[i] = Y[toto2[0][0]]
#
# FIGURE 1: Interface positions Vs Time
#
figure(num=1, figsize=(figwidth, figheight),dpi=60, facecolor='w', edgecolor='w')
plot(t_pvb + tadj, zint_pvb + 0.1, 'ob',
t_pvb + tadj, zint2_pvb + 0.1, 'or')
plot(time, zint, '-b', time, zint2, '-r')
ylabel('y (m)')
xlabel('t (s)')
axis([0, tmax, zmin, zmax])
Explanation: In the next block, the experimental and numerical data are postprocessed to extract the interface positions:
- zint = upper interface between the clear fluid (alpha=0) and the sedimenting suspension (alpha=0.5)
- zint2 = lower interface between the sedimenting suspension (alpha=0.5) and the deposited particle layer (alpha=0.6)
These two positions are then plotted as a function of time for the experimental and numerical data.
End of explanation
#
# Figure 2: Concentration profiles
#
#
# Change subplot sizes
#
gs = gridspec.GridSpec(1, 4)
gs.update(left=0.1, right=0.95, top=0.95, bottom=0.075, wspace=0.125, hspace=0.125)
#
# Figure size
#
figwidth = 16
figheight = 6
figure(num=2, figsize=(figwidth, figheight), dpi=60, facecolor='w', edgecolor='w')
for i in np.arange(4):
if i == 0:
ax = subplot(gs[0, 0])
elif i == 1:
ax = subplot(gs[0, 1])
elif i == 2:
ax = subplot(gs[0, 2])
elif i == 3:
ax = subplot(gs[0, 3])
iexp = 7 * i + 1
titi = np.where(time[:] >= t_pvb[0][iexp] + tadj)
if np.size(titi) == 0:
inum = Nt - 1
else:
inum = titi[0][0]
print('texp= ' + str(t_pvb[0][iexp] + tadj) + 's - tnum='+ str(time[inum]) + 's')
ax.plot(alphat[:, inum], Y[:], '-r',
as_pvb[:, iexp], z_pvb[:, iexp] + 0.1, '--b')
title('t=' + str(t_pvb[0][iexp] + tadj) + 's')
axis([-0.05, 0.65, zmin, zmax])
if (i == 0):
ylabel('y (m)')
else:
ax.set_yticklabels([''])
xlabel(r'$\alpha$')
Explanation: The settling curves plotted above clearly demonstrate that the model reproduces the sedimentation of particles very well. This problem has been known for a long time and can be solved analytically using the method of characteristics (Kynch, 1952). The major challenge in the Eulerian-Eulerian two-phase flow modeling framework is to account for the elastic part of the solid phase stress (the so-called effective stress in soil mechanics) in a viscous framework. This is done here by using the Johnson and Jackson (1987) model, but this is clearly not optimal; other modeling options are possible and should be developed in the future.
In order to further validate the model we can plot the concentration profiles at different instants during the sedimentation and compare them with experiments. This is done by the python code in the next block
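The Kynch (1952) kinematic-wave picture mentioned above can be illustrated numerically. Assuming a hindered-settling flux of Richardson-Zaki form (the settling velocity and exponent below are illustrative, not fitted to this experiment), the speed of a concentration jump follows the Rankine-Hugoniot relation:

```python
# Illustrative parameters: single-particle settling velocity (m/s) and
# Richardson-Zaki exponent (assumed values, not those of the tutorial case)
w0, n = 1e-3, 4.65

def flux(alpha):
    # Downward solid flux density for hindered settling (Richardson-Zaki)
    return alpha * w0 * (1.0 - alpha) ** n

def shock_speed(a_minus, a_plus):
    # Rankine-Hugoniot speed of a jump between concentrations a_minus and
    # a_plus, as in Kynch's (1952) characteristic-method solution
    return (flux(a_plus) - flux(a_minus)) / (a_plus - a_minus)

# Upper interface: clear fluid (alpha = 0) over the suspension (alpha = 0.5)
print(shock_speed(0.0, 0.5))   # settling speed of the upper front, < w0
```

In this toy flux model, shock_speed(0.0, 0.5) is the settling speed of the upper interface between the clear fluid and the suspension.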
End of explanation
#
# FIGURE 3: Interface positions Vs Time + concentration + pressure
#
# Change subplot sizes
#
gs2 = gridspec.GridSpec(1, 5)
gs2.update(left=0.1, right=0.925, top=0.925, bottom=0.1, wspace=0.125, hspace=0.125)
#
# Figure size
#
figwidth = 14
figheight = 6
time_list=[0,15,30,45,58]
for k in np.arange(5):
i=time_list[k]
figure(num=3+i, figsize=(figwidth, figheight/2),dpi=60, facecolor='w', edgecolor='w')
ax = subplot(gs2[0, 0:3])
ax.plot(t_pvb + tadj, zint_pvb + 0.1, 'ob',
t_pvb + tadj, zint2_pvb + 0.1, 'or')
ax.plot(time[0:i], zint[0:i], '-b', time[0:i], zint2[0:i], '-r')
ylabel('y (m)')
xlabel('t (s)')
axis([0, tmax, zmin, zmax])
iexp = np.min(np.where(t_pvb[0][:] + tadj >time[i]))
print('texp= ' + str(t_pvb[0][iexp] + tadj) + 's - tnum='+ str(time[i]) + 's')
ax2 = subplot(gs2[0, 3])
ax2.plot(alphat[:, i], Y[:], '-r')
ax2.plot(as_pvb[:, iexp], z_pvb[:, iexp] + 0.1, '--k')
ax2.axis([-0.05, 0.65, zmin, zmax])
ax2.set_yticklabels([''])
xlabel(r'$\alpha$')
ax3 = subplot(gs2[0, 4])
ax3.plot(pfft[:, i], Y[:], '-r')
ax3.plot(pt[:, i], Y[:], '--b')
ax3.axis([0.0, 30, zmin, zmax])
ax3.set_yticklabels([''])
xlabel(r'$p (N/m^{2})$')
Explanation: As you can see, the interface dynamics are well captured by the two-phase model; however, an oscillation of concentration appears at the upper interface, which is not physical. This is probably due to the hyperbolic nature of the mass conservation equation for the solid phase. The steep concentration gradient can probably be assimilated to a shock and would require a shock-capturing scheme. An alternative would be to add diffusion to the mass conservation equation to make it parabolic. Anyway, the goal of this simulation was to obtain a stable deposited sediment bed, and this goal is achieved in the last panel.
Now before turning on the flow we want to make sure the particle pressure distribution is well reproduced by the model. In the following block, the particle pressure profile as well as excess pore pressure profile for the fluid phase are plotted at different instants.
End of explanation
import sys
sys.path.append("Py/")
from analytic_coulomb2D import *
#
#
#
zmin = 0.
zmax = 0.06
#
# compute the analytical solution in dimensionless form
#
nx = 60
xex = np.linspace(0, 1., nx)
# dimensionless parameters
mus = 0.24
phi0 = 0.6
eta_e = (1. + 2.5 * phi0)
# dimensional parameters
D = 0.06
rho_f = 950.
rho_p = 1050.
drho = rho_p - rho_f
etaf = 2.105e-5*rho_f
g = 9.81
hp = 0.0455/D
# pressure gradient
dpdx = -80e0 / (drho * g)
# Compute the analytical solution
alphaex = np.ones(nx) * phi0
toto = np.where(xex[:] > hp)
alphaex[toto] = 0.
pex = np.zeros(nx)
for i in range(nx):
if alphaex[nx - i - 1] > 0.:
pex[nx - i - 1] = pex[nx - i] + alphaex[nx - i] * \
(xex[nx - i] - xex[nx - i - 1])
[uex, hc] = analytic_coulomb2D(nx, xex, dpdx, hp, mus, phi0, eta_e)
duxmax = 0.
nuex = np.zeros(nx)
for i in range(1, nx):  # start at 1: the backward difference uses index i - 1
duexdz = (uex[i] - uex[i - 1]) / (xex[i] - xex[i - 1])
duxmax = max([duxmax, duexdz])
nuex[i] = mus * pex[i] / (rho_p * (np.abs(duexdz) + 1e-6))
#
# dimensional form
#
U0 = drho * g * D**2 / etaf
uex = uex * U0
xex = xex * D
pex = pex * drho * g * D
print("max(uex)=" + str(np.max(uex)) + " m/s")
Explanation: In these graphs, you can see that while the particle pressure builds up in the deposited particle layer, the excess pore pressure dissipates. This is a well-known problem in geomechanics, and the time lag between excess pore pressure dissipation and the particle pressure build-up is controlled by the permeability of the porous medium. The final particle pressure distribution is hydrostatic, and its vertical gradient compensates the buoyant weight of the particles. The accurate prediction of the particle pressure distribution is of utmost importance for simulating bedload transport, as the granular rheology is based on a Coulomb frictional model in which the yield stress is given as mu_s p^p.
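That hydrostatic balance is easy to check numerically. The sketch below uses a toy settled-bed profile with illustrative values (not fields read from the simulation): integrating the buoyant weight of the grains down from the free surface, the basal particle pressure should approach alpha0 * (rho_p - rho_f) * g * h_bed:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Hydrostatic particle pressure at rest: dp^p/dz = -alpha (rho_p - rho_f) g
rho_p, rho_f, g = 1050.0, 950.0, 9.81
z = np.linspace(0.0, 0.1, 201)              # vertical coordinate (m)
alpha = np.where(z < 0.0455, 0.6, 0.0)      # toy settled bed below 0.0455 m

integrand = alpha * (rho_p - rho_f) * g
cum = cumulative_trapezoid(integrand, z, initial=0.0)
pp = cum[-1] - cum                          # integral from z up to the top
print(pp[0])   # basal value, close to 0.6 * 100 * 9.81 * 0.0455 Pa
```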
2. Laminar bedload transport
We now turn to the more complex problem of laminar bedload. In the previous example, the particles settled down to form a rigid bed (as illustrated above). Building upon this configuration, we can now add a streamwise pressure gradient to drive a fluid flow in the x direction. This can be done by modifying the file:
constant/forceProperties
and setting the variable gradPMEAN to (80 0 0). Your forceProperties file should look like this:
Once this is done, you further need to modify the file:
system/controlDict
to change the startTime to 1800 s and the endTime to 1900 s with output interval (writeInterval) every 20 s.
Then, you can run sedFoam again by typing the following command in the terminal:
sedFoam_rbgh > log &
The code is now running in batch mode and the standard output is written in the log file. You can access the bottom of the file by using the command:
tail -100 log
Once the first output is written (t=1820 s), you can use the piece of Python code below to compare your simulation with the analytical solution from Aussillous et al. (2013). We are solving a transient problem to reach the steady-state solution, meaning that you have to wait long enough to make sure the solution has converged.
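A simple convergence check for that "wait long enough" criterion can be sketched as follows (the tolerance is illustrative):

```python
import numpy as np

def is_converged(u_old, u_new, tol=1e-3):
    """Steady state reached when the relative L2 change between two
    successive velocity profiles drops below tol."""
    return np.linalg.norm(u_new - u_old) <= tol * max(np.linalg.norm(u_old), 1e-12)

u_prev = np.linspace(0.0, 1.0, 50)
print(is_converged(u_prev, u_prev * 1.0005))  # True: 0.05 % change between outputs
```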
In the next code block, the analytical solution is computed and made dimensional for the same configuration as the numerical simulation.
End of explanation
#
# Change subplot sizes
#
gs = gridspec.GridSpec(1, 3)
gs.update(left=0.1, right=0.95, top=0.95,bottom=0.1, wspace=0.125, hspace=0.25)
#########################################
# Reading SedFoam results
#########################################
proc = subprocess.Popen(
['foamListTimes', '-case', sol, '-latestTime'], stdout=subprocess.PIPE)
output = proc.stdout.read()
tread = output.decode().rstrip()
if float(tread)>1900:
tread='1900'
tread=tread+'/'
X, Y, Z = fluidfoam.readmesh(sol)
alpha = fluidfoam.readscalar(sol, tread, 'alpha_a')
Ua = fluidfoam.readvector(sol, tread, 'Ua')
Ub = fluidfoam.readvector(sol, tread, 'Ub')
pff = fluidfoam.readscalar(sol, tread, 'pff')
p = fluidfoam.readscalar(sol, tread, 'p')
Ny = np.size(Y)
U = np.zeros(Ny)
U = alpha[:] * Ua[0, :] + (1 - alpha[:]) * Ub[0, :]
print("max(Ub)=" + str(np.amax(Ub)) + " m/s")
#########################################
# figure 1
#########################################
figure(num=1, figsize=(figwidth, figheight), dpi=60, facecolor='w', edgecolor='w')
ax1 = subplot(gs[0, 0])
l11, = ax1.plot(alpha[:], Y[:], '-r')
l1, = ax1.plot(alphaex[:], xex[:], '--k')
ax1.set_ylabel('y (m)')
ax1.set_xlabel(r'$\alpha$')
ax1.set_xlim(0, np.max(np.max(alpha)) * 1.1)
ax1.set_ylim(zmin, zmax)
ax2 = subplot(gs[0, 1])
l21, = ax2.plot(U[:], Y[:], '-r')
l2, = ax2.plot(uex[:], xex[:], '--k')
ax2.set_xlabel('u ($m/s$)')
ax2.set_xlim(0, np.max(uex) * 1.1)
ax2.set_ylim(zmin, zmax)
ax2.set_yticklabels([''])
ax3 = subplot(gs[0, 2])
l31, = ax3.plot(pff[:], Y[:], '-r')
l3, = ax3.plot(pex[:], xex[:], '--k')
ax3.set_xlabel('p ($N/m^2$)')
ax3.set_xlim(0, np.max(pex) * 1.1)
ax3.set_ylim(zmin, zmax)
ax3.set_yticklabels([''])
Explanation: In the following, the numerical solution is read using fluidfoam and the concentration, velocity and pressure profiles are compared with this analytical solution.
End of explanation
#########################################
# Reading SedFoam results
#########################################
proc = subprocess.Popen(
['foamListTimes', '-case', sol, '-latestTime'], stdout=subprocess.PIPE)
output = proc.stdout.read()
tread = output.decode().rstrip()
if float(tread)>2000:
tread='2000'
tread=tread+'/'
X1, Y1, Z1 = fluidfoam.readmesh(sol)
alpha1 = fluidfoam.readscalar(sol, tread, 'alpha_a')
Ua1 = fluidfoam.readvector(sol, tread, 'Ua')
Ub1 = fluidfoam.readvector(sol, tread, 'Ub')
pff1 = fluidfoam.readscalar(sol, tread, 'pff')
p1 = fluidfoam.readscalar(sol, tread, 'p')
Ny1 = np.size(Y1)
U1 = np.zeros(Ny1)
U1 = alpha1[:] * Ua1[0, :] + (1 - alpha1[:]) * Ub1[0, :]
print("max(Ub)=" + str(np.amax(Ub1)) + " m/s")
#########################################
# figure 1
#########################################
figure(num=1, figsize=(figwidth, figheight),
dpi=60, facecolor='w', edgecolor='w')
ax1 = subplot(gs[0, 0])
l11, = ax1.plot(alpha[:], Y[:], '-r')
l12, = ax1.plot(alpha1[:], Y[:], '-g')
l1, = ax1.plot(alphaex[:], xex[:], '--k')
ax1.set_ylabel('y (m)')
ax1.set_xlabel(r'$\alpha$')
ax1.set_xlim(0, np.max(np.max(alpha)) * 1.1)
ax1.set_ylim(zmin, zmax)
ax2 = subplot(gs[0, 1])
l21, = ax2.plot(U[:], Y[:], '-r')
l22, = ax2.plot(U1[:], Y1[:], '-g')
l2, = ax2.plot(uex[:], xex[:], '--k')
ax2.set_xlabel('u ($m/s$)')
ax2.set_xlim(0, np.max(uex) * 1.1)
ax2.set_ylim(zmin, zmax)
ax2.set_yticklabels([''])
ax3 = subplot(gs[0, 2])
l31, = ax3.plot(pff[:], Y[:], '-r')
l32, = ax3.plot(pff1[:], Y1[:], '-g')
l3, = ax3.plot(pex[:], xex[:], '--k')
ax3.set_xlabel('p ($N/m^2$)')
ax3.set_xlim(0, np.max(pex) * 1.1)
ax3.set_ylim(zmin, zmax)
ax3.set_yticklabels([''])
Explanation: As you can see, the velocity and particle pressure profiles are in very good agreement with the analytical solution, even though the concentration exhibits a slight discrepancy at the bed interface. This is because the continuum two-phase flow model smooths out the sharp concentration gradient, and the empirical particle pressure model introduces a very small vertical concentration gradient inside the sediment bed.
Overall the numerical results are satisfactory and we can now complexify the model closures by moving from Coulomb rheology to the mu(Iv) rheology. This is done by changing the keyword FrictionModel in constant/granularRheologyProperties from Coulomb to MuIv.
Then you need to modify the file system/controlDict to run from 1900 s to 2000 s.
Go to the terminal and run
sedFoam_rbgh.
Now we will run the same postprocessing script but this time the numerical solution should be different from the analytical one as we changed the rheological model.
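For reference, the mu(Iv) frictional law is the correlation of Boyer et al. (2011); a small sketch with their published fit coefficients is given below (sedFoam reads its own coefficients from constant/granularRheologyProperties, so these values are only illustrative):

```python
import numpy as np

def mu_Iv(Iv, mu1=0.32, mu2=0.7, I0=0.005, phi_m=0.585):
    """Boyer et al. (2011) effective friction coefficient as a function of
    the viscous number Iv = etaf * gamma_dot / p^p."""
    return mu1 + (mu2 - mu1) / (1.0 + I0 / Iv) + Iv + 2.5 * phi_m * np.sqrt(Iv)

print(round(mu_Iv(1e-3), 3))  # ~0.431: friction increases with the viscous number
```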
End of explanation
#########################################
# Reading SedFoam results
#########################################
proc = subprocess.Popen(
['foamListTimes', '-case', sol, '-latestTime'], stdout=subprocess.PIPE)
output = proc.stdout.read()
tread = output.decode().rstrip() + '/'
X2, Y2, Z2 = fluidfoam.readmesh(sol)
alpha2 = fluidfoam.readscalar(sol, tread, 'alpha_a')
Ua2 = fluidfoam.readvector(sol, tread, 'Ua')
Ub2 = fluidfoam.readvector(sol, tread, 'Ub')
pa2 = fluidfoam.readscalar(sol, tread, 'pa')
pff2 = fluidfoam.readscalar(sol, tread, 'pff')
p2 = fluidfoam.readscalar(sol, tread, 'p')
Ny2 = np.size(Y2)
U2 = np.zeros(Ny2)
U2 = alpha2[:] * Ua2[0, :] + (1 - alpha2[:]) * Ub2[0, :]
print("max(Ub2)=" + str(np.amax(Ub2)) + " m/s")
#########################################
# figure 1
#########################################
figure(num=1, figsize=(figwidth, figheight), dpi=60, facecolor='w', edgecolor='w')
ax1 = subplot(gs[0, 0])
l11, = ax1.plot(alpha[:], Y[:], '-r')
l12, = ax1.plot(alpha1[:], Y1[:], '-g')
l13, = ax1.plot(alpha2[:], Y2[:], '-b')
l1, = ax1.plot(alphaex[:], xex[:], '--k')
ax1.set_ylabel('y (m)')
ax1.set_xlabel(r'$\alpha$')
ax1.set_xlim(0, np.max(np.max(alpha)) * 1.1)
ax1.set_ylim(zmin, zmax)
ax2 = subplot(gs[0, 1])
l21, = ax2.plot(U[:], Y[:], '-r')
l22, = ax2.plot(U1[:], Y1[:], '-g')
l23, = ax2.plot(U2[:], Y2[:], '-b')
l2, = ax2.plot(uex[:], xex[:], '--k')
ax2.set_xlabel('u ($m/s$)')
ax2.set_xlim(0, np.max(uex) * 1.1)
ax2.set_ylim(zmin, zmax)
ax2.set_yticklabels([''])
ax3 = subplot(gs[0, 2])
l31, = ax3.plot(pff[:], Y[:], '-r')
l32, = ax3.plot(pff1[:], Y1[:], '-g')
l33, = ax3.plot(pff2[:]+pa2[:], Y2[:], '-b')
l34, = ax3.plot(pa2[:], Y2[:], '--b')
l3, = ax3.plot(pex[:], xex[:], '--k')
ax3.set_xlabel('p ($N/m^2$)')
ax3.set_xlim(0, np.max(pex) * 1.1)
ax3.set_ylim(zmin, zmax)
ax3.set_yticklabels([''])
Explanation: As you can see, by changing the rheological model, the velocity profile in the moving granular layer changes from parabolic (Coulomb) to exponential (mu(Iv)) which is consistent with experimental observations. Also the fact that the velocity magnitude is reduced shows that the overall dissipation induced by the mu(Iv) rheology is much higher than with the Coulomb rheology.
So far the concentration profile remained fixed because no dilatancy effect has been introduced. In the next step, we propose to turn on the dilatancy model of the granular rheology (phi(Iv)) by changing the keyword PPressureModel in constant/granularRheologyProperties from none to MuIv.
Before running the model you need to modify the file system/controlDict to run from 2000 s to 2100 s.
Go to the terminal and run
sedFoam_rbgh.
The next python code block allows you to plot the results
End of explanation
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
# Function returning the bed profile
def depth(sol, t, x, y, xi, yi):
    ybed = np.zeros(len(xi))
    if np.mod(t, 1) == 0:
        timename = str(int(t)) + '/'
    else:
        timename = str(t) + '/'
    alpha = fluidfoam.readscalar(sol, timename, 'alpha_a')
    # matplotlib.mlab.griddata has been removed from matplotlib;
    # scipy's griddata produces the same (len(yi), len(xi)) interpolated array
    alphai = griddata((x, y), alpha, (xi[None, :], yi[:, None]), method='linear')
    for j in range(len(xi) - 1):
        tab = np.where(alphai[:, j + 1] > 0.5)
        ybed[j] = yi[np.max(tab)]
    return ybed
# Case information
base_path = '../../2DScourPipeline/'
cases_toplot = [base_path + '2D_Lee_shield0p33_MuI_kepsilon',
base_path + '2D_Lee_shield0p33_MuI_komega2006']
time1 = 11
time2 = 18
time3 = 25
# -------------PLOT SECTION------------- #
# Figure dimensions and parameters
fig_sizex = 27.5
fig_sizey = 37
font_size = 40
line_style = '-'
line_width = 4
marker_size = 16
label_expe = 'Mao [1986]'
label_num = [r'$k-\varepsilon$',
r'$k-\omega2006$']
line_color = ['C1',
'C2']
# Domain dimensions
D = 0.05
xmin = -0.1
xmax = 0.3
ymin = -0.08
ymax = 0.075
# Figure creation
fig = plt.figure(figsize=(fig_sizex, fig_sizey), dpi=100)
plt.rcParams.update({'font.size': font_size})
# Subplots creation
# Subplot 1
plt1 = plt.subplot(3, 1, 1)
circle = plt.Circle((0, 0), radius=0.5, fc='silver', edgecolor='k',
zorder=3)
plt.gca().add_patch(circle)
plt1.set_xticklabels([])
plt1.grid()
# Subplot 2
plt2 = plt.subplot(3, 1, 2, sharey=plt1)
circle = plt.Circle((0, 0), radius=0.5, fc='silver', edgecolor='k',
zorder=3)
plt.gca().add_patch(circle)
plt2.set_xticklabels([])
plt2.grid()
# Subplot 3
plt3 = plt.subplot(3, 1, 3, sharey=plt1)
circle = plt.Circle((0, 0), radius=0.5, fc='silver', edgecolor='k',
zorder=3)
plt.gca().add_patch(circle)
plt3.grid()
plt.subplots_adjust(hspace=0.2)
# Subplot axis
plt1.axis([xmin/D, xmax/D, ymin/D, ymax/D])
plt2.axis([xmin/D, xmax/D, ymin/D, ymax/D])
plt3.axis([xmin/D, xmax/D, ymin/D, ymax/D])
plt1.set_xlabel('')
plt1.set_ylabel('y/D')
plt2.set_ylabel('y/D')
plt3.set_xlabel('x/D')
plt3.set_ylabel('y/D')
# Horizontal line at y = 0
n = np.zeros(2)
nx = np.linspace(xmin/D, xmax/D, 2)
plt1.plot(nx, n-0.5, "k--")
plt2.plot(nx, n-0.5, "k--")
plt3.plot(nx, n-0.5, "k--")
# Number of division for linear interpolation
ngridx = 1500
ngridy = 500
# Interpolation grid dimensions
xinterpmin = -0.1
xinterpmax = 0.35
yinterpmin = -0.09
yinterpmax = 0.015
# Interpolation grid
xi = np.linspace(xinterpmin, xinterpmax, ngridx)
yi = np.linspace(yinterpmin, yinterpmax, ngridy)
for i, case in enumerate(cases_toplot):
# Bed elevation calculation
x1, y1, z1 = fluidfoam.readmesh(case)
ybed1 = depth(case, time1, x1, y1, xi, yi)
ybed2 = depth(case, time2, x1, y1, xi, yi)
ybed3 = depth(case, time3, x1, y1, xi, yi)
# Numerical results plotting
plt1.plot(xi/D, ybed1/D, label=label_num[i], linewidth=line_width, ls=line_style,
color=line_color[i])
plt2.plot(xi/D, ybed2/D, linewidth=line_width, ls=line_style, color=line_color[i])
plt3.plot(xi/D, ybed3/D, linewidth=line_width, ls=line_style, color=line_color[i])
# Experimental data collection
expe_11s = 'Py/DATA/Mao_11s_expe.txt'
expe_18s = 'Py/DATA/Mao_18s_expe.txt'
expe_25s = 'Py/DATA/Mao_25s_expe.txt'
x_expe_11s, y_expe_11s = np.loadtxt(expe_11s, usecols=(0, 1), unpack=True,
delimiter=',')
x_expe_18s, y_expe_18s = np.loadtxt(expe_18s, usecols=(0, 1), unpack=True,
delimiter=',')
x_expe_25s, y_expe_25s = np.loadtxt(expe_25s, usecols=(0, 1), unpack=True,
delimiter=',')
# Experimental data plotting
plt1.plot(x_expe_11s/D, y_expe_11s/D, "ro", label=label_expe,
markersize=marker_size)
plt2.plot(x_expe_18s/D, y_expe_18s/D, "ro", markersize=marker_size)
plt3.plot(x_expe_25s/D, y_expe_25s/D, "ro", markersize=marker_size)
# Legend
plt1.legend(loc='upper left')
Explanation: As you can immediately see, the concentration profile has changed and now exhibits a smoother vertical gradient between the fixed particle bed layer and the pure fluid layer. This modification is due to the shear-induced pressure contribution, represented as a dashed blue line in the right panel. This pressure contribution non-linearly couples the streamwise and wall-normal momentum balance equations for the solid phase, as the shear-induced pressure is proportional to the velocity shear rate squared. When dilatancy effects are added to the model, the dissipation is again increased, leading to a lower fluid velocity in the pure fluid layer.
So far turbulence was not relevant as the bulk Reynolds number remained below the critical value of 3000. If you want to get familiar with turbulence modeling in the two-phase flow model framework, please read Chauchat et al. (2017) and run the tutorial 1DSheetFlow available in the GitHub repository.
3. 2D Scour around a horizontal cylinder
SedFoam can be used to simulate more complex multi-dimensional configurations like scour under a horizontal pipeline.
The erosion under a submarine pipeline can be decomposed into three steps:
1. The onset: when the current around the cylinder is strong enough, it generates a pressure drop, which liquefies the sediments underneath the cylinder;
2. the tunneling stage: when a breach is formed between the cylinder and the sediment bed, it expands due to the strong current in the breach;
3. the lee-wake erosion stage: when the gap is large enough, vortices are shed in the wake of the cylinder, leading to erosion downstream of the scour hole.
Two configurations are presented in this tutorial using different turbulence models ($k-\varepsilon$ and $k-\omega$ models).
Since the simulations take a lot of time and computer resources, the simulation results are directly available.
The simulation results can be visualized using paraview software. To do so, go to the directory containing the visualization file by typing the following command in the terminal:
cd ~/Documents/2DScourPipeline
Now, to visualize the results from the configuration using the $k-\varepsilon$ model, type the following command:
paraview 2DPipeline.pvsm &
To switch to the visualization of the configuration using the $k-\omega$ model, in paraview click on:
File -> Load State...
select the file:
2DPipeline.pvsm
then, select:
Choose File Names
and choose the file empty.foam in the $k-\omega$ case directory:
2D_Lee_shield0p33_MuI_komega2006/empty.foam
Finally, to compare the results obtained from the two simulations and experimental data from Mao (1986), the following python script generates plots containing the bed interface at 11, 18 and 25 seconds.
End of explanation |
Description:
Sympy is a Python package used for solving equations using symbolic math.
Let's solve the following problem with SymPy.
Given
Step1: We need to define six different symbols
Step2: Next we'll create two expressions for our two equations. We can subtract the %crystallinity from the left side of the equation to set the equation to zero.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
$$ \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% - \%crystallinity = 0 $$
Sub in $\rho_s = \rho_1$ and $\rho_s = \rho_2$ to each of the expressions.
Step3: Now we'll substitute the values of $\rho_1 = 1.408$ and $c_1 = 0.743$ into our first expression.
Step4: Now we'll substitute the values of $\rho_2 = 1.343$ and $c_2 = 0.312$ into our second expression.
Step5: To solve the two equations for the two unknowns $\rho_a$ and $\rho_c$, use SymPy's nonlinsolve() function. Pass in a list of the two expressions followed by a list of the two variables to solve for.
Step6: We see that the values are $\rho_a = 1.29957$ and $\rho_c = 1.44984$.
The solution is a SymPy FiniteSet object. To pull the values of $\rho_a$ and $\rho_c$ out of the FiniteSet, use the syntax sol.args[0][<var num>]. | Python Code:
from sympy import symbols, nonlinsolve
Explanation: SymPy is a Python package used for solving equations using symbolic math.
Let's solve the following problem with SymPy.
Given:
The density of two different polymer samples $\rho_1$ and $\rho_2$ are measured.
$$ \rho_1 = 1.408 \ g/cm^3 $$
$$ \rho_2 = 1.343 \ g/cm^3 $$
The percent crystallinity of the two samples ($\%c_1$ and $\%c_2$) is known.
$$ \%c_1 = 74.3 \% $$
$$ \%c_2 = 31.2 \% $$
The percent crystallinity of a polymer sample is related to the density of 100% amorphous regions ($\rho_a$) and 100% crystalline regions ($\rho_c$) according to:
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
Find:
Find the density of 100% amorphous regions ($\rho_a$) and the density of 100% crystalline regions ($\rho_c$) for this polymer.
Solution:
There are a couple of functions we need from SymPy. We'll need the symbols function to create our symbolic math variables and the nonlinsolve function to solve a system of non-linear equations.
End of explanation
pc, pa, p1, p2, c1, c2 = symbols('pc pa p1 p2 c1 c2')
Explanation: We need to define six different symbols: $$\rho_c, \rho_a, \rho_1, \rho_2, c_1, c_2$$
End of explanation
expr1 = ( (pc*(p1-pa) ) / (p1*(pc-pa)) - c1)
expr2 = ( (pc*(p2-pa) ) / (p2*(pc-pa)) - c2)
Explanation: Next we'll create two expressions for our two equations. We can subtract the %crystallinity from the left side of the equation to set the equation to zero.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
$$ \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% - \%crystallinity = 0 $$
Sub in $\rho_s = \rho_1$ and $\rho_s = \rho_2$ to each of the expressions.
End of explanation
expr1 = expr1.subs(p1, 1.408)
expr1 = expr1.subs(c1, 0.743)
expr1
Explanation: Now we'll substitute the values of $\rho_1 = 1.408$ and $c_1 = 0.743$ into our first expression.
End of explanation
expr2 = expr2.subs(p2, 1.343)
expr2 = expr2.subs(c2, 0.312)
expr2
Explanation: Now we'll substitute the values of $\rho_2 = 1.343$ and $c_2 = 0.312$ into our second expression.
End of explanation
nonlinsolve([expr1,expr2],[pa,pc])
Explanation: To solve the two equations for the two unknowns $\rho_a$ and $\rho_c$, use SymPy's nonlinsolve() function. Pass in a list of the two expressions followed by a list of the two variables to solve for.
End of explanation
sol = nonlinsolve([expr1,expr2],[pa,pc])
type(sol)
sol.args
sol.args[0]
sol.args[0][0]
pa = sol.args[0][0]
pc = sol.args[0][1]
print(f' Density of 100% amorphous polymer, pa = {round(pa,2)} g/cm3')
print(f' Density of 100% crystaline polymer, pc = {round(pc,2)} g/cm3')
Explanation: We see that the value of $\rho_a = 1.29957$ and $\rho_c = 1.44984$.
The solution is a SymPy FiniteSet object. To pull the values of $\rho_a$ and $\rho_c$ out of the FiniteSet, use the syntax sol.args[0][<var num>].
End of explanation |
Description:
Simple keyword spotting with CMSIS-DSP Python wrapper and Arduino
The goal of this notebook is to demonstrate how to use the CMSIS-DSP Python wrapper on an example which is complex enough.
It is not a state of the art keyword recognition system. The feature used for the machine learning is very simple and just able to recognize the "Yes" keyword.
But it is a good start and enough to demonstrate lots of features of the Python wrapper like
Step1: The speech commands
We are using the simplified speech commands from the TensorFlow Lite tutorial.
Those commands can be downloaded from this link
Step2: The below code is generating a label ID for a command. The ID will be -1 for any command not in the to_keep list. Other ID will be the index of the keyword in this list.
Step3: The feature
The feature is based on a simple zero crossing rate (zcr). We choose to only keep the increasing crossing. I don't think it is making a lot of differences for the final performance of the keyword recognition.
The zcr function is computing the zcr for a window of samples.
Step4: The final feature is the zcr computed on a segment of 1 second and filtered. We are using a sliding window and using a Hann window.
Step5: The patterns
The below class is representing a Pattern. A pattern can either be a sound file from the TensorFlow Lite examples and in this case we compute the feature and the label id.
The pattern can also be a random white noise. In that case, we also compute the feature and the class id is -1.
Note that when you use the signal property, the speech patterns will return the content of the file but noise patterns will generate a random noise which will thus be different each time.
Step6: Following code is giving the number of speech samples for each keyword.
It is assuming that all keywords contain the same number of samples.
Step7: The following code is generating the patterns used for the training of the ML model,.
It is reading patterns for all the words we want to keep (from to_keep list) and it is aggregating all other keywords in the unknown class.
It is also generating some random noise patterns. For the unknown class, the number of patterns will always be files_per_command, but some patterns may be noise rather than sound files.
There is some randomization of file names. So each time this code is executed, you'll get patterns in a different order and, for the unknown class, which contains more than files_per_command patterns, you'll get a different subset of those patterns (the subset will have the right length files_per_command).
Finally the patterns are also randomized so that the split between training and test patterns will select different patterns each time this code is executed.
Step8: Below code is extracting the training and test patterns.
This will be used later to generate the array used by scikit learn to train the model.
Step9: Testing on a signal
The following code is displaying a pattern as example.
Step10: Simple function to display a spectrogram. It is adapted from a SciPy example.
Step11: Display of the feature to compare with the spectrogram.
Step12: Patterns for training
Now we generate the arrays needed to train and test the model.
We have an array of feature
Step13: Logistic Regression
We have chosen to use a simple logistic regression. We are doing a randomized search on the hyperparameter space.
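The search can be sketched as follows (toy stand-in data and an illustrative range for C, not the notebook's exact search space):

```python
import numpy as np
from scipy.stats import uniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))       # stand-in for the zcr feature vectors
y = (X[:, 0] > 0).astype(int)            # toy, linearly separable labels

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    {"C": uniform(loc=0.01, scale=10.0)},  # C drawn uniformly from [0.01, 10.01]
    n_iter=10, cv=3, random_state=0,
)
search.fit(X, y)
clf = search.best_estimator_
```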
Step14: We are using the best estimator found during the randomized search
Step15: The confusion matrix is generated from the test patterns to check the behavior of the classifier
Step16: We compute the final score. 0.8 is really the minimum acceptable value for this kind of demo.
With the zcr feature, if you try to detect Yes, No, Unknown, you'll get a score of around 0.6 which is very bad.
Step17: We can now save the model so that next time we want to play with the notebook and test the CMSIS-DSP implementation we do not have to retrain the model
Step18: And we can reload the saved model
Step19: Reference implementation with Matrix
This is the reference implementation which will be used to build the CMSIS-DSP implementation. We are no longer using the scikit-learn predict function; instead we are using an implementation of predict using linear algebra. It should give the same results.
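For a binary logistic regression, that predict boils down to one dot product (sketch below with made-up weights; in the notebook the real values come from clf.coef_ and clf.intercept_):

```python
import numpy as np

def predict_linear(w, b, features):
    score = features @ w + b          # decision function
    return (score > 0).astype(int)    # class 1 when sigmoid(score) > 0.5

w = np.array([2.0, -1.0])             # hypothetical coef_
b = -0.5                              # hypothetical intercept_
print(predict_linear(w, b, np.array([[1.0, 0.0], [0.0, 1.0]])))  # [1 0]
```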
Step20: And like in the code above with scikit-learn, we are checking the result with the confusion matrix and the score. It should give the same results
Step21: CMSIS-DSP implementation
Now we are ready to implement the code using CMSIS-DSP API. Once we have a running implementation in Python, writing the C code will be easy since the API is the same.
We are testing 3 implementations here
Step22: For the FIR, CMSIS-DSP is using a FIR instance structure and thus we need to define it
Step23: Let's check that the feature is giving the same result as the reference implementation using linear algebra.
Step24: The feature code is working, so now we can implement the predict
Step25: And finally we can check the CMSIS-DSP behavior of the test patterns
Step26: We are getting very similar results to the reference implementation. Now let's explore fixed point.
Q31 implementation
First thing to do is to convert the F32 values of the ML model into Q31.
But we need values in [-1,1]. So we rescale those values and keep track of the shift required to restore the original value.
Then, we convert those rescaled values to Q31.
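That rescaling step can be sketched in pure NumPy as below (the notebook itself relies on the cmsisdsp.fixedpoint helpers imported as fix):

```python
import numpy as np

def rescale_and_to_q31(values):
    """Scale values into [-1, 1) by a power of two, then quantize to Q31.
    Returns (q31, shift) with original ~= q31 / 2**31 * 2**shift."""
    values = np.asarray(values, dtype=np.float64)
    m = np.max(np.abs(values))
    shift = 0
    while m >= 1.0:          # find the power-of-two shift into [-1, 1)
        m /= 2.0
        shift += 1
    scaled = values / 2.0**shift
    q31 = np.clip(np.round(scaled * 2**31), -2**31, 2**31 - 1).astype(np.int32)
    return q31, shift

q31, shift = rescale_and_to_q31([2.5, -0.75, 1.2])
restored = q31.astype(np.float64) / 2**31 * 2**shift
```

Here shift is 2 and restored matches the inputs to within the Q31 quantization error.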
Step27: Now we can implement the zcr and feature in Q31.
Step28: Let's check the feature on the data to compare with the F32 version and check it is working
Step29: The Q31 feature is very similar to the F32 one so now we can implement the predict
Step30: Now we can check the Q31 implementation on the test patterns
Step31: The score is as good as the F32 implementation.
Q15 Implementation
It is the same as Q31 but using Q15 functions.
Step32: Q15 version is as good as other versions so we are selecting this implementation to run on the Arduino (once it has been converted to C).
Synchronous Data Flow
We are receiving a stream of samples, but our functions are working on buffers.
We also have sliding windows, which makes it more difficult to connect those functions together
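The difficulty can be pictured as a small FIFO that buffers incoming chunks and emits overlapping windows (a toy sketch, not the CMSIS-DSP synchronous data flow API):

```python
import numpy as np

class SlidingWindow:
    """Buffer fixed-size audio chunks; emit overlapping analysis windows."""
    def __init__(self, win_length, hop):
        self.win_length = win_length
        self.hop = hop
        self.buf = np.zeros(0, dtype=np.int16)

    def push(self, chunk):
        self.buf = np.concatenate([self.buf, np.asarray(chunk, dtype=np.int16)])
        while len(self.buf) >= self.win_length:
            yield self.buf[:self.win_length].copy()
            self.buf = self.buf[self.hop:]

sw = SlidingWindow(win_length=400, hop=160)   # 25 ms window, 10 ms hop at 16 kHz
chunks = np.zeros((10, 160), dtype=np.int16)  # ten 10 ms audio interrupts
windows = [w for c in chunks for w in sw.push(c)]
print(len(windows))  # 8 overlapping windows out of 1600 samples
```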
Step33: To describe our compute graph, we need to describe the nodes which are used in this graph.
Each node is described by its inputs and outputs. For each IO, we define the data type and the number of samples read or written on the IO.
We need the following nodes in our system
Step34: We need some parameters. Those parameters need to be coherent with the values defined in the features in the above code.
AUDIO_INTERRUPT_LENGTH is the audio length generated by the source. But it is not the audio length generated by th PDM driver on Arduino. The Arduino implementation of the Source is doing the adaptation as we will see below.
Step36: Below function is
Step37: Next line is generating sched.py which is the Python implementation of the compute graph and its static scheduling. This file is describing the FIFOs connecting the nodes and describing how the nodes are scheduled.
You still need to provide an implementation of the nodes. It is available in appnodes.py and it is nearly a copy/paste of the Q15 implementation above.
But it is simpler because the sliding window and the static schedule ensure that each node is run only when enough data is available. So a big part of the control logic has been removed from the nodes.
sched.py is long because the static schedule is long and there are lots of function calls. When we generate the C++ implementation, we use an option that describes the static schedule with an array. It makes the C++ code much shorter. But having the explicit sequence of function calls can be useful for debugging.
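The array-based schedule is just a table of node indices executed in order; a toy illustration of the idea (not the generated code itself):

```python
def run_schedule(nodes, schedule, iterations=1):
    """Execute a static schedule: an array of node indices run in order."""
    for _ in range(iterations):
        for idx in schedule:
            nodes[idx]()

log = []
nodes = [lambda: log.append("src"),
         lambda: log.append("feature"),
         lambda: log.append("classifier")]
run_schedule(nodes, [0, 0, 1, 2])   # e.g. the source fires twice per feature
print(log)  # ['src', 'src', 'feature', 'classifier']
```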
Step38: Next line is generating the C++ schedule that we will need for the Arduino implementation
Step39: Now we'd like to test the Q15 classifier and the static schedule on a real pattern.
We are using the Yes/No pattern from our VHT-SystemModeling example.
Below code is loading the pattern into a NumPy array.
Step40: Let's plot the signal to check we have the right one
Step41: Now we can run our static schedule on this file.
The reload calls are needed when debugging (or implementing) appnodes.py. Without them, the package would not be reloaded in the notebook.
This code needs some variables evaluated in the Q15 estimator code above.
Step42: The code is working. We get more printed Yes detections than there are Yes utterances in the pattern because we slide by 0.5 second between recognitions, so the same word can be recognized several times.
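A natural post-processing step (hypothetical, not part of the demo) is to debounce detections so one utterance is reported once:

```python
def debounce(events, refractory=1.0):
    """Keep a 'yes' detection only if at least `refractory` seconds have
    elapsed since the last one we kept."""
    kept, last = [], float("-inf")
    for t, label in events:
        if label == "yes" and t - last >= refractory:
            kept.append((t, label))
            last = t
    return kept

print(debounce([(0.0, "yes"), (0.5, "yes"), (1.5, "yes")]))
# [(0.0, 'yes'), (1.5, 'yes')]
```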
Now we are ready to implement the same on an Arduino.
First we need to generate the parameters of the model. If you have saved the model, you can reload it with the code below
Step43: Once the model is loaded, we extract the values and convert them to Q15
Step45: Now we need to generate C arrays for the ML model parameters. Those parameters are generated into kws/coef.cpp
Step46: Generation of the coef code
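A minimal sketch of such a generator (hypothetical helper; the notebook's actual template-based generation of kws/coef.cpp may format things differently):

```python
import numpy as np

def to_c_array(name, q15_values, per_line=8):
    """Render a Q15 vector as a C array definition."""
    vals = [str(int(v)) for v in np.asarray(q15_values, dtype=np.int16)]
    lines = [", ".join(vals[i:i + per_line]) for i in range(0, len(vals), per_line)]
    body = ",\n    ".join(lines)
    return f"const q15_t {name}[{len(vals)}] = {{\n    {body}\n}};\n"

src = to_c_array("kws_coefs", np.array([1024, -2048, 32767], dtype=np.int16))
print(src)
```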
Step47: The implementation of the nodes is in kws/AppNodes.h. It is very similar to the appnodes.py but using the CMSIS-DSP C API.
The C++ templates are used only to minimize the runtime overhead.
Arduino
You need to have the arduino command line tools installed. And they need to be in your PATH so that the notebook can find the tool.
We are using the Arduino Nano 33 BLE
Building and upload
Step48: The first time the below command is executed, it will take a very long time. The full CMSIS-DSP library has to be rebuilt for the Arduino.
Step49: Testing
Below code connects to the Arduino board and displays the output in a cell.
If you say Yes loudly enough (so not too far from the board) and if you don't have too much background noise in your room, then it should work.
You need to install the pyserial python package | Python Code:
import cmsisdsp as dsp
import cmsisdsp.fixedpoint as fix
import numpy as np
import os.path
import glob
import pathlib
import random
import soundfile as sf
import matplotlib.pyplot as plt
from IPython.display import display,Audio,HTML
import scipy.signal
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal.windows import hann
from sklearn import svm
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
from sklearn.linear_model import LogisticRegression
import pickle
Explanation: Simple keyword spotting with CMSIS-DSP Python wrapper and Arduino
The goal of this notebook is to demonstrate how to use the CMSIS-DSP Python wrapper on an example which is complex enough.
It is not a state of the art keyword recognition system. The feature used for the machine learning is very simple and just able to recognize the "Yes" keyword.
But it is a good start and enough to demonstrate lots of features of the Python wrapper like:
Testing the CMSIS-DSP algorithm directly in Python
Test of fixed point implementation
Implementation of the compute graph and streaming computation from the CMSIS-DSP Synchronous Data Flow framework
C++ code generation for the compute graph
Final implementation for Arduino Nano 33 BLE
Several Python packages are required. If they are not already installed on you system, you can install them from the notebook by using:
!pip install packagename
The machine learning is using scikit-learn. For scientific computations, we are using SciPy, NumPy and Matplotlib.
For reading wav files, the soundfile package is used.
Other packages will be used below in the notebook and will have to be installed.
End of explanation
MINISPEECH="mini_speech_commands"
commands=np.array([os.path.basename(f) for f in glob.glob(os.path.join(MINISPEECH,"mini_speech_commands","*"))])
commands=commands[commands != "README.md"]
# Any other word will be recognized as unknown
to_keep=['yes']
Explanation: The speech commands
We are using the simplified speech commands from the TensorFlow Lite tutorial.
Those commands can be downloaded from this link: "http://storage.googleapis.com/download.tensorflow.org/data/mini_speech_commands.zip"
Once the zip has been uncompressed, you'll need to change the path of the folder below.
The below code is loading the list of commands available in the mini_speech_commands folder and it is describing the words we want to detect. Here we only want to detect the Yes keyword.
You can add other keywords but the CMSIS-DSP implementation in this notebook is only supporting one keyword.
Nevertheless, if you'd like to experiment with the training of the ML model and different features, then you can work with more commands.
End of explanation
UNKNOWN_CLASS = -1
def get_label(name):
return(pathlib.PurePath(name).parts[-2])
def get_label_id(name):
label=get_label(name)
if label in to_keep:
return(to_keep.index(label))
else:
return(UNKNOWN_CLASS)
Explanation: The below code is generating a label ID for a command. The ID will be -1 for any command not in the to_keep list. Other ID will be the index of the keyword in this list.
End of explanation
def zcr(w):
w = w-np.mean(w)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(f*g<0, g>f))
return(1.0*k/len(f))
Explanation: The feature
The feature is based on a simple zero crossing rate (zcr). We choose to only keep the increasing crossing. I don't think it is making a lot of differences for the final performance of the keyword recognition.
The zcr function is computing the zcr for a window of samples.
End of explanation
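A tiny worked example: in the 4-sample signal below there are three sign changes, but only one of them is increasing (g > f), so the function returns 1/3:

```python
import numpy as np

def zcr(w):
    w = w - np.mean(w)
    f = w[:-1]
    g = w[1:]
    # Count only the sign changes where the signal is increasing
    k = np.count_nonzero(np.logical_and(f * g < 0, g > f))
    return 1.0 * k / len(f)

w = np.array([1.0, -1.0, 1.0, -1.0])  # zero mean, three sign changes
print(zcr(w))  # -> 0.3333...
```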
def feature(data):
samplerate=16000
input_len = 16000
# The speech pattern is padded to ensure it has a duration of 1 second
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.float32)
signal = np.hstack([waveform, zero_padding])
# We decompose the input signal into overlapping windows. The signal in each window
# is premultiplied by a Hann window of the right size.
# Warning : if you change the window duration and audio offset, you'll need to change the value
# in the scripts used for the scheduling of the compute graph later.
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength-audioOffset
window=hann(winLength,sym=False)
reta=[zcr(x*window) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# The final signal is filtered. We have tested several variations on the feature. This filtering is
# improving the recognition
reta=scipy.signal.lfilter(np.ones(10)/10.0,[1],reta)
return(np.array(reta))
Explanation: The final feature is the zcr computed on a segment of 1 second and then filtered. We use a sliding window and premultiply each window by a Hann window.
End of explanation
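With the parameters above (25 ms windows every 10 ms at 16 kHz), a 1-second signal yields 98 windows; this is where the feature length used later for the classifier comes from. A quick check of that arithmetic:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

samplerate = 16000
winLength = int(np.floor(samplerate * 25e-3))    # 400 samples per window
audioOffset = int(np.floor(samplerate * 10e-3))  # 160 new samples per window

signal = np.zeros(16000, dtype=np.float32)       # 1 second of audio
windows = sliding_window_view(signal, winLength)[::audioOffset, :]
print(windows.shape)  # -> (98, 400)
```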
class Pattern:
def __init__(self,p):
global UNKNOWN_CLASS
if isinstance(p, str):
self._isFile=True
self._filename=p
self._label=get_label_id(p)
data, samplerate = sf.read(self._filename)
self._feature = feature(data)
else:
self._isFile=False
self._noiseLevel=p
self._label=UNKNOWN_CLASS
noise=np.random.randn(16000)*p
self._feature=feature(noise)
@property
def label(self):
return(self._label)
@property
def feature(self):
return(self._feature)
# Only useful for plotting
# The random pattern will be different each time
@property
def signal(self):
if not self._isFile:
return(np.random.randn(16000)*self._noiseLevel)
else:
data, samplerate = sf.read(self._filename)
return(data)
Explanation: The patterns
The below class is representing a Pattern. A pattern can either be a sound file from the TensorFlow Lite examples, in which case we compute the feature and the label ID.
The pattern can also be random white noise. In that case, we also compute the feature and the class ID is -1.
Note that when you use the signal property, speech patterns will return the content of the file but noise patterns will generate new random noise, which will thus be different each time.
End of explanation
files_per_command=len(glob.glob(os.path.join(MINISPEECH,"mini_speech_commands",commands[0],"*")))
files_per_command
Explanation: The following code gives the number of speech samples for each keyword.
It is assuming that all keywords contain the same number of samples.
End of explanation
# Add patterns we want to detect
filenames=[]
for f in to_keep:
filenames+=glob.glob(os.path.join(MINISPEECH,"mini_speech_commands",f,"*"))
random.shuffle(filenames)
# Add remaining patterns
remaining_words=list(set(commands)-set(to_keep))
nb_noise=0
remaining=[]
for f in remaining_words:
remaining+=glob.glob(os.path.join(MINISPEECH,"mini_speech_commands",f,"*"))
random.shuffle(remaining)
filenames += remaining[0:files_per_command-nb_noise]
patterns=[Pattern(x) for x in filenames]
for i in range(nb_noise):
patterns.append(Pattern(np.abs(np.random.rand(1)*0.05)[0]))
random.shuffle(patterns)
Explanation: The following code is generating the patterns used for the training of the ML model.
It is reading patterns for all the words we want to keep (from to_keep list) and it is aggregating all other keywords in the unknown class.
It is also generating some random noise patterns. For the unknown class, the number of patterns will always be files_per_command, but some patterns may be noise rather than sound files.
There is some randomization of file names. So each time this code is executed, you'll get patterns in a different order and, for the unknown class, which contains more than files_per_command patterns, you'll get a different subset of those patterns (the subset will have the right length files_per_command).
Finally the patterns are also randomized so that the split between training and test patterns will select different patterns each time this code is executed.
End of explanation
print(len(patterns))
patterns=np.array(patterns)
nb_patterns = len(patterns)
nb_train= int(np.floor(0.8 * nb_patterns))
nb_tests=nb_patterns-nb_train
train_patterns = patterns[:nb_train]
test_patterns = patterns[-nb_tests:]
Explanation: Below code is extracting the training and test patterns.
This will be used later to generate the array used by scikit learn to train the model.
End of explanation
nbpat=50
data = patterns[nbpat].signal
samplerate=16000
plt.plot(data)
plt.show()
audio=Audio(data=data,rate=samplerate,autoplay=False)
audio
Explanation: Testing on a signal
The following code is displaying a pattern as example.
End of explanation
def get_spectrogram(waveform,fs):
# Zero-padding for an audio waveform with less than 16,000 samples.
input_len = 16000
waveform = waveform[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.float32)
mmax=np.max(np.abs(waveform))
equal_length = np.hstack([waveform, zero_padding])
f, t, Zxx = scipy.signal.stft(equal_length, fs, nperseg=1000)
plt.pcolormesh(t, f, np.abs(Zxx), vmin=0, vmax=mmax/100, shading='gouraud')
plt.title('STFT Magnitude')
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
get_spectrogram(data,16000)
Explanation: Simple function to display a spectrogram. It is adapted from a SciPy example.
End of explanation
feat=feature(data)
plt.plot(feat)
plt.show()
Explanation: Display of the feature to compare with the spectrogram.
End of explanation
X=np.array([x.feature for x in train_patterns])
X.shape
y=np.array([x.label for x in train_patterns])
y.shape
y_test = [x.label for x in test_patterns]
X_test = [x.feature for x in test_patterns]
Explanation: Patterns for training
Now we generate the arrays needed to train and test the model.
We have an array of features: X.
An array of label IDs: y.
And similar arrays for the tests.
End of explanation
distributionsb = dict(C=uniform(loc=1, scale=1000))
reg = LogisticRegression(penalty="l1", solver="saga", tol=0.1)
clfb=RandomizedSearchCV(reg, distributionsb,random_state=0,n_iter=50).fit(X, y)
Explanation: Logistic Regression
We have chosen to use a simple logistic regression. We are doing a randomized search on the hyperparameter space.
End of explanation
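The `uniform(loc=1, scale=1000)` prior means the regularization strength C is drawn uniformly from [1, 1001]. A quick check of the sampled range (50 draws, matching `n_iter` above):

```python
from scipy.stats import uniform

dist = uniform(loc=1, scale=1000)
samples = dist.rvs(size=50, random_state=0)
# Every draw falls inside [loc, loc + scale]
print(samples.min() >= 1, samples.max() <= 1001)  # -> True True
```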
clfb.best_estimator_
Explanation: We are using the best estimator found during the randomized search:
End of explanation
y_pred = clfb.predict(X_test)
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred,display_labels=labels)
Explanation: The confusion matrix is generated from the test patterns to check the behavior of the classifier:
End of explanation
clfb.score(X_test, y_test)
Explanation: We compute the final score. 0.8 is really the minimum acceptable value for this kind of demo.
With the zcr feature, if you try to detect Yes, No, Unknown, you'll get a score of around 0.6 which is very bad.
End of explanation
with open("logistic.pickle","wb") as f:
s = pickle.dump(clfb,f)
Explanation: We can now save the model so that next time we want to play with the notebook and test the CMSIS-DSP implementation we do not have to retrain the model:
End of explanation
with open("logistic.pickle","rb") as f:
clfb=pickle.load(f)
Explanation: And we can reload the saved model:
End of explanation
def predict(feat):
coef=clfb.best_estimator_.coef_
intercept=clfb.best_estimator_.intercept_
res=np.dot(coef,feat) + intercept
if res<0:
return(-1)
else:
return(0)
Explanation: Reference implementation with Matrix
This is the reference implementation which will be used to build the CMSIS-DSP implementation. We are no longer using the scikit-learn predict function; instead we use an implementation of predict based on linear algebra. It should give the same results.
End of explanation
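Thresholding the linear score at zero works because, for binary logistic regression, `sigmoid(w·x + b) > 0.5` is equivalent to `w·x + b > 0`. A small numeric check (the weights here are arbitrary, not the trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

coef = np.array([0.5, -1.2, 2.0])  # illustrative weights, not the real model
intercept = -0.3

x = np.array([1.0, 0.2, 0.1])
score = np.dot(coef, x) + intercept
# Both formulations give the same decision
print(score > 0, sigmoid(score) > 0.5)  # -> True True
```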
y_pred_ref = [predict(x) for x in X_test]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
Explanation: And like in the code above with scikit-learn, we are checking the result with the confusion matrix and the score. It should give the same results:
End of explanation
coef_f32=clfb.best_estimator_.coef_
intercept_f32=clfb.best_estimator_.intercept_
def dsp_zcr(w):
m = dsp.arm_mean_f32(w)
m = -m
w = dsp.arm_offset_f32(w,m)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(f*g<0, g>f))
return(1.0*k/len(f))
Explanation: CMSIS-DSP implementation
Now we are ready to implement the code using CMSIS-DSP API. Once we have a running implementation in Python, writing the C code will be easy since the API is the same.
We are testing 3 implementations here : F32, Q31 and Q15.
At the end, we will check that Q15 is giving good enough results and thus can be implemented in C for the Arduino.
F32 Implementation
It will be very similar to the matrix implementation above but will instead use the CMSIS-DSP API.
End of explanation
firf32 = dsp.arm_fir_instance_f32()
def dsp_feature(data):
samplerate=16000
input_len = 16000
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.float32)
signal = np.hstack([waveform, zero_padding])
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength -audioOffset
window=hann(winLength,sym=False)
reta=[dsp_zcr(dsp.arm_mult_f32(x,window)) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# Reset state and filter
# We want to start with a clean filter each time we filter a new feature.
# So the filter state is reset each time.
blockSize=98
numTaps=10
stateLength = numTaps + blockSize - 1
dsp.arm_fir_init_f32(firf32,10,np.ones(10)/10.0,np.zeros(stateLength))
reta=dsp.arm_fir_f32(firf32,reta)
return(np.array(reta))
Explanation: For the FIR, CMSIS-DSP is using a FIR instance structure and thus we need to define it
End of explanation
feat=dsp_feature(data)
plt.plot(feat)
plt.show()
Explanation: Let's check that the feature is giving the same result as the reference implementation using linear algebra.
End of explanation
def dsp_predict(feat):
res=dsp.arm_dot_prod_f32(coef_f32,feat)
res = res + intercept_f32
if res[0]<0:
return(-1)
else:
return(0)
Explanation: The feature code is working, so now we can implement the predict:
End of explanation
y_pred_ref = [dsp_predict(dsp_feature(x.signal)) for x in test_patterns]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
Explanation: And finally we can check the CMSIS-DSP behavior of the test patterns:
End of explanation
scaled_coef=clfb.best_estimator_.coef_
coef_shift=0
while np.max(np.abs(scaled_coef)) > 1:
scaled_coef = scaled_coef / 2.0
coef_shift = coef_shift + 1
coef_q31=fix.toQ31(scaled_coef)
scaled_intercept = clfb.best_estimator_.intercept_
intercept_shift = 0
while np.abs(scaled_intercept) > 1:
scaled_intercept = scaled_intercept / 2.0
intercept_shift = intercept_shift + 1
intercept_q31=fix.toQ31(scaled_intercept)
Explanation: We are getting very similar results to the reference implementation. Now let's explore fixed point.
Q31 implementation
First thing to do is to convert the F32 values of the ML mode into Q31.
But we need values in [-1,1]. So we rescale those values and keep track of the shift required to restore the original value.
Then, we convert those rescaled values to Q31.
End of explanation
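The rescaling loop can be checked on a single out-of-range value: 3.7 (an arbitrary example) needs two halvings to fall inside [-1, 1], and the original value is recovered by undoing the shift after dequantization:

```python
value = 3.7          # arbitrary out-of-range example
scaled = value
shift = 0
while abs(scaled) > 1:
    scaled = scaled / 2.0
    shift += 1

q31 = int(round(scaled * (1 << 31)))          # quantize to Q31
restored = (q31 / float(1 << 31)) * (1 << shift)
print(shift, abs(restored - value) < 1e-6)    # -> 2 True
```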
def dsp_zcr_q31(w):
m = dsp.arm_mean_q31(w)
# Negate can saturate so we use CMSIS-DSP function which is working on array (and we have a scalar)
m = dsp.arm_negate_q31(np.array([m]))[0]
w = dsp.arm_offset_q31(w,m)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(np.logical_or(np.logical_and(f>0,g<0), np.logical_and(f<0,g>0)),g>f))
# k < len(f) so shift should be 0 except when k == len(f)
# When k==len(f) normally quotient is 0x40000000 and shift 1 and we convert
# this to 0x7FFFFFFF and shift 0
status,quotient,shift_val=dsp.arm_divide_q31(k,len(f))
if shift_val==1:
return(dsp.arm_shift_q31(np.array([quotient]),shift_val)[0])
else:
return(quotient)
firq31 = dsp.arm_fir_instance_q31()
def dsp_feature_q31(data):
samplerate=16000
input_len = 16000
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.int32)
signal = np.hstack([waveform, zero_padding])
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength-audioOffset
window=fix.toQ31(hann(winLength,sym=False))
reta=[dsp_zcr_q31(dsp.arm_mult_q31(x,window)) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# Reset state and filter
blockSize=98
numTaps=10
stateLength = numTaps + blockSize - 1
dsp.arm_fir_init_q31(firq31,10,fix.toQ31(np.ones(10)/10.0),np.zeros(stateLength,dtype=np.int32))
reta=dsp.arm_fir_q31(firq31,reta)
return(np.array(reta))
Explanation: Now we can implement the zcr and feature in Q31.
End of explanation
feat=fix.Q31toF32(dsp_feature_q31(fix.toQ31(data)))
plt.plot(feat)
plt.show()
Explanation: Let's check the feature on the data to compare with the F32 version and check it is working:
End of explanation
def dsp_predict_q31(feat):
res=dsp.arm_dot_prod_q31(coef_q31,feat)
# Before adding the res and the intercept we need to ensure they are in the same Qx.x format
# The scaling applied to the coefs and to the intercept is different so we need to scale
# the intercept to take this into account
scaled=dsp.arm_shift_q31(np.array([intercept_q31]),intercept_shift-coef_shift)[0]
# Because dot prod output is in Q16.48
# and res is on 64 bits
scaled = np.int64(scaled) << 17
res = res + scaled
if res<0:
return(-1)
else:
return(0)
Explanation: The Q31 feature is very similar to the F32 one so now we can implement the predict:
End of explanation
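The `<< 17` in `dsp_predict_q31` aligns fixed-point formats: the intercept is Q0.31 (31 fractional bits) while `arm_dot_prod_q31` accumulates in Q16.48 (48 fractional bits), so the intercept must gain 48 - 31 = 17 fractional bits before the addition. A plain-integer check with the value 0.5:

```python
FRAC_Q31 = 31   # fractional bits of a Q0.31 value
FRAC_DOT = 48   # fractional bits of the Q16.48 accumulator

half_q31 = 1 << (FRAC_Q31 - 1)                 # 0.5 in Q0.31
half_q48 = half_q31 << (FRAC_DOT - FRAC_Q31)   # 0.5 in Q16.48

# Both encodings represent the same value in their formats
print(half_q48 == (1 << (FRAC_DOT - 1)))  # -> True
```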
y_pred_ref = [dsp_predict_q31(dsp_feature_q31(fix.toQ31(x.signal))) for x in test_patterns]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
Explanation: Now we can check the Q31 implementation on the test patterns:
End of explanation
scaled_coef=clfb.best_estimator_.coef_
coef_shift=0
while np.max(np.abs(scaled_coef)) > 1:
scaled_coef = scaled_coef / 2.0
coef_shift = coef_shift + 1
coef_q15=fix.toQ15(scaled_coef)
scaled_intercept = clfb.best_estimator_.intercept_
intercept_shift = 0
while np.abs(scaled_intercept) > 1:
scaled_intercept = scaled_intercept / 2.0
intercept_shift = intercept_shift + 1
intercept_q15=fix.toQ15(scaled_intercept)
def dsp_zcr_q15(w):
m = dsp.arm_mean_q15(w)
# Negate can saturate so we use CMSIS-DSP function which is working on array (and we have a scalar)
m = dsp.arm_negate_q15(np.array([m]))[0]
w = dsp.arm_offset_q15(w,m)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(np.logical_or(np.logical_and(f>0,g<0), np.logical_and(f<0,g>0)),g>f))
# k < len(f) so shift should be 0 except when k == len(f)
# When k==len(f) normally quotient is 0x4000 and shift 1 and we convert
# this to 0x7FFF and shift 0
status,quotient,shift_val=dsp.arm_divide_q15(k,len(f))
if shift_val==1:
return(dsp.arm_shift_q15(np.array([quotient]),shift_val)[0])
else:
return(quotient)
firq15 = dsp.arm_fir_instance_q15()
def dsp_feature_q15(data):
samplerate=16000
input_len = 16000
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.int16)
signal = np.hstack([waveform, zero_padding])
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength - audioOffset
window=fix.toQ15(hann(winLength,sym=False))
reta=[dsp_zcr_q15(dsp.arm_mult_q15(x,window)) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# Reset state and filter
blockSize=98
numTaps=10
stateLength = numTaps + blockSize - 1
dsp.arm_fir_init_q15(firq15,10,fix.toQ15(np.ones(10)/10.0),np.zeros(stateLength,dtype=np.int16))
reta=dsp.arm_fir_q15(firq15,reta)
return(np.array(reta))
feat=fix.Q15toF32(dsp_feature_q15(fix.toQ15(data)))
plt.plot(feat)
plt.show()
def dsp_predict_q15(feat):
res=dsp.arm_dot_prod_q15(coef_q15,feat)
scaled=dsp.arm_shift_q15(np.array([intercept_q15]),intercept_shift-coef_shift)[0]
# Because dot prod output is in Q34.30
# and res is on 64 bits
scaled = np.int64(scaled) << 15
res = res + scaled
if res<0:
return(-1)
else:
return(0)
y_pred_ref = [dsp_predict_q15(dsp_feature_q15(fix.toQ15(x.signal))) for x in test_patterns]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
Explanation: The score is as good as the F32 implementation.
Q15 Implementation
It is the same as Q31 but using Q15 functions.
End of explanation
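Q15 only has 15 fractional bits, so with rounding every quantized value stays within 2^-16 of the original. A self-contained check on a few values, using a plain `round` in place of the cmsisdsp conversion helper:

```python
def to_q15(x):
    # Round-to-nearest Q15 quantization (a stand-in for the fixedpoint
    # helper used above, good enough for the error-bound check)
    return int(round(x * (1 << 15)))

for x in [0.1, -0.33, 0.925, 0.5]:
    q = to_q15(x)
    assert abs(x - q / float(1 << 15)) <= 1.0 / (1 << 16)
print("max Q15 rounding error <= 2**-16")
```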
from cmsisdsp.sdf.scheduler import *
Explanation: Q15 version is as good as other versions so we are selecting this implementation to run on the Arduino (once it has been converted to C).
Synchronous Data Flow
We are receiving a stream of samples but our functions are using buffers.
We have sliding windows which make it more difficult to connect those functions together: we need FIFOs.
So we are going to use the CMSIS-DSP Synchronous Data Flow framework to describe the compute graph, compute the FIFO lengths and generate a static schedule implementing the streaming computation.
End of explanation
class Source(GenericSource):
def __init__(self,name,inLength):
GenericSource.__init__(self,name)
q15Type=CType(Q15)
self.addOutput("o",q15Type,inLength)
@property
def typeName(self):
return "Source"
class Sink(GenericSink):
def __init__(self,name,outLength):
GenericSink.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,outLength)
@property
def typeName(self):
return "Sink"
class Feature(GenericNode):
def __init__(self,name,inLength):
GenericNode.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,inLength)
self.addOutput("o",q15Type,1)
@property
def typeName(self):
return "Feature"
class FIR(GenericNode):
def __init__(self,name,inLength,outLength):
GenericNode.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,inLength)
self.addOutput("o",q15Type,outLength)
@property
def typeName(self):
return "FIR"
class KWS(GenericNode):
def __init__(self,name,inLength):
GenericNode.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,inLength)
self.addOutput("o",q15Type,1)
@property
def typeName(self):
return "KWS"
Explanation: To describe our compute graph, we need to describe the nodes which are used in this graph.
Each node is described by its inputs and outputs. For each IO, we define the data type and the number of samples read or written on the IO.
We need the following nodes in our system:
* A source to get audio samples from the Arduino PDM driver
* A Sink to generate the messages on the Arduino serial port
* A feature node to compute the feature for a given window (and pre-multiplying with the Hann window)
* A FIR node which is filtering all the features for one second of signal
* A KWS node which is doing the logistic regression
In addition to that we need sliding windows:
* Like in the Python code above we need a sliding window for audio
* After each recognition attempt on a segment of 1 second, we want to slide the recognition window by 0.5 seconds. It can be implemented with a sliding window on the features before the FIR
End of explanation
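Sliding the 1-second recognition window by roughly half a second corresponds to keeping about half of the features between runs, since one feature is produced every 10 ms of audio. A quick check of the numbers used for the feature sliding buffer:

```python
FEATURE_LENGTH = 98     # features per 1-second segment
FEATURE_OVERLAP = 49    # features kept between recognitions
feature_period = 10e-3  # one feature per 10 ms audio offset

slide_seconds = (FEATURE_LENGTH - FEATURE_OVERLAP) * feature_period
print(slide_seconds)    # roughly half a second
```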
q15Type=CType(Q15)
FS=16000
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(FS*winDuration))
audio_input_length=int(np.floor(FS*audioOffsetDuration))
AUDIO_INTERRUPT_LENGTH = audio_input_length
Explanation: We need some parameters. Those parameters need to be consistent with the values used when defining the features in the code above.
AUDIO_INTERRUPT_LENGTH is the audio length generated by the source. But it is not the audio length generated by the PDM driver on the Arduino. The Arduino implementation of the Source is doing the adaptation as we will see below.
End of explanation
def gen_sched(python_code=True):
src=Source("src",AUDIO_INTERRUPT_LENGTH)
# For Python code, the input is a numpy array which is passed
# as argument of the node
if python_code:
src.addVariableArg("input_array")
sink=Sink("sink",1)
feature=Feature("feature",winLength)
feature.addVariableArg("window")
sliding_audio=SlidingBuffer("audioWin",q15Type,winLength,winLength-audio_input_length)
FEATURE_LENGTH=98 # for one second
FEATURE_OVERLAP = 49 # We slide feature by 0.5 second
sliding_feature=SlidingBuffer("featureWin",q15Type,FEATURE_LENGTH,FEATURE_OVERLAP)
kws=KWS("kws",FEATURE_LENGTH)
# Parameters of the ML model used by the node.
kws.addVariableArg("coef_q15")
kws.addVariableArg("coef_shift")
kws.addVariableArg("intercept_q15")
kws.addVariableArg("intercept_shift")
fir=FIR("fir",FEATURE_LENGTH,FEATURE_LENGTH)
# Description of the compute graph
g = Graph()
g.connect(src.o, sliding_audio.i)
g.connect(sliding_audio.o, feature.i)
g.connect(feature.o, sliding_feature.i)
g.connect(sliding_feature.o, fir.i)
g.connect(fir.o, kws.i)
g.connect(kws.o, sink.i)
# For Python we run for only around 13 seconds of input signal.
# Without this, it would run forever.
conf=Configuration()
if python_code:
conf.debugLimit=13
# We compute the scheduling
sched = g.computeSchedule(conf)
print("Schedule length = %d" % sched.scheduleLength)
print("Memory usage %d bytes" % sched.memory)
# We generate the scheduling code for a Python and C++ implementations
if python_code:
conf.pyOptionalArgs="input_array,window,coef_q15,coef_shift,intercept_q15,intercept_shift"
sched.pythoncode(".",config=conf)
with open("test.dot","w") as f:
sched.graphviz(f)
else:
conf.cOptionalArgs="""const q15_t *window,
const q15_t *coef_q15,
const int coef_shift,
const q15_t intercept_q15,
const int intercept_shift"""
conf.memoryOptimization=True
# When schedule is long
conf.codeArray=True
sched.ccode("kws",config=conf)
with open("kws/test.dot","w") as f:
sched.graphviz(f)
Explanation: The function below is:
* Defining the compute graph by connecting all the nodes
* Generating a Python implementation of the compute graph and its scheduling
* Generating a C++ implementation of the compute graph and its scheduling
* Generating a graphviz description of the graph
The feature length is hardcoded. So if the sliding window parameters are changed, you'll need to change the values for the FEATURE_LENGTH and FEATURE_OVERLAP.
End of explanation
gen_sched(True)
Explanation: Next line is generating sched.py which is the Python implementation of the compute graph and its static scheduling. This file is describing the FIFOs connecting the nodes and describing how the nodes are scheduled.
You still need to provide an implementation of the nodes. It is available in appnodes.py and it is nearly a copy/paste of the Q15 implementation above.
But it is simpler because the sliding window and the static schedule ensure that each node is run only when enough data is available. So a big part of the control logic has been removed from the nodes.
sched.py is long because the static schedule is long and there are lots of function calls. When we generate the C++ implementation, we use an option which describes the static schedule with an array. It makes the C++ code much shorter. But having the sequence of function calls can be useful for debugging.
End of explanation
gen_sched(False)
Explanation: Next line is generating the C++ schedule that we will need for the Arduino implementation : kws/scheduler.cpp
End of explanation
from urllib.request import urlopen
import io
import soundfile as sf
test_pattern_url="https://github.com/ARM-software/VHT-SystemModeling/blob/main/EchoCanceller/sounds/yesno.wav?raw=true"
f = urlopen(test_pattern_url)
filedata = f.read()
data, samplerate = sf.read(io.BytesIO(filedata))
if len(data.shape)>1:
data=data[:,0]
Explanation: Now we'd like to test the Q15 classifier and the static schedule on real patterns.
We are using the Yes/No pattern from our VHT-SystemModeling example.
Below code is loading the pattern into a NumPy array.
End of explanation
plt.plot(data)
plt.show()
Explanation: Let's plot the signal to check we have the right one:
End of explanation
import sched as s
from importlib import reload
import appnodes
appnodes= reload(appnodes)
s = reload(s)
dataQ15=fix.toQ15(data)
windowQ15=fix.toQ15(hann(winLength,sym=False))
nb,error = s.scheduler(dataQ15,windowQ15,coef_q15,coef_shift,intercept_q15,intercept_shift)
Explanation: Now we can run our static schedule on this file.
The reload calls are needed when debugging (or implementing) appnodes.py. Without them, the package would not be reloaded in the notebook.
This code needs some variables evaluated in the Q15 estimator code above.
End of explanation
with open("logistic.pickle","rb") as f:
clfb=pickle.load(f)
Explanation: The code is working. We get more printed Yes than there are Yes utterances in the pattern because we slide by 0.5 seconds between recognitions, so the same word can be recognized several times.
Now we are ready to implement the same on an Arduino.
First we need to generate the parameters of the model. If you have saved the model, you can reload it with the code below:
End of explanation
scaled_coef=clfb.best_estimator_.coef_
coef_shift=0
while np.max(np.abs(scaled_coef)) > 1:
scaled_coef = scaled_coef / 2.0
coef_shift = coef_shift + 1
coef_q15=fix.toQ15(scaled_coef)
scaled_intercept = clfb.best_estimator_.intercept_
intercept_shift = 0
while np.abs(scaled_intercept) > 1:
scaled_intercept = scaled_intercept / 2.0
intercept_shift = intercept_shift + 1
intercept_q15=fix.toQ15(scaled_intercept)
Explanation: Once the model is loaded, we extract the values and convert them to Q15:
End of explanation
def carray(a):
s="{"
k=0
for x in a:
s = s + ("%d," % (x,))
k = k + 1
if k == 10:
k=0;
s = s + "\n"
s = s + "}"
return(s)
ccode="""#include "arm_math.h"
#include "coef.h"
const q15_t fir_coefs[NUMTAPS]=%s;
const q15_t coef_q15[%d]=%s;
const q15_t intercept_q15 = %d;
const int coef_shift=%d;
const int intercept_shift=%d;
const q15_t window[%d]=%s;
"""
def gen_coef_code():
fir_coef = carray(fix.toQ15(np.ones(10)/10.0))
winq15=carray(fix.toQ15(hann(winLength,sym=False)))
res = ccode % (fir_coef,
len(coef_q15[0]),
carray(coef_q15[0]),
intercept_q15,
coef_shift,
intercept_shift,
winLength,
winq15
)
with open(os.path.join("kws","coef.cpp"),"w") as f:
print(res,file=f)
Explanation: Now we need to generate C arrays for the ML model parameters. Those parameters are generated into kws/coef.cpp
End of explanation
gen_coef_code()
Explanation: Generation of the coef code:
End of explanation
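The `carray` helper emits a brace-wrapped initializer with a trailing comma (which is valid C) and a newline after every 10 values. Its behavior on a short array:

```python
def carray(a):
    # Same formatting logic as the helper above: one "%d," per value,
    # plus a newline after every 10 values
    s = "{"
    k = 0
    for x in a:
        s += "%d," % (x,)
        k += 1
        if k == 10:
            k = 0
            s += "\n"
    return s + "}"

print(carray([1, 2, 3]))  # -> {1,2,3,}
```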
!arduino-cli board list
!arduino-cli config init
!arduino-cli lib install Arduino_CMSIS-DSP
Explanation: The implementation of the nodes is in kws/AppNodes.h. It is very similar to appnodes.py but uses the CMSIS-DSP C API.
The C++ templates are used only to minimize the overhead at runtime.
Arduino
You need to have the arduino command line tools installed. And they need to be in your PATH so that the notebook can find the tool.
We are using the Arduino Nano 33 BLE
Building and upload
End of explanation
!arduino-cli compile -b arduino:mbed_nano:nano33ble kws
!arduino-cli upload -b arduino:mbed_nano:nano33ble -p COM5 kws
Explanation: The first time the below command is executed, it will take a very long time. The full CMSIS-DSP library has to be rebuilt for the Arduino.
End of explanation
import serial
import ipywidgets as widgets
import time
import threading
STOPSERIAL=False
def stop_action(btn):
global STOPSERIAL
STOPSERIAL=True
out = widgets.Output(layout={'border': '1px solid black','height':'40px'})
button = widgets.Button(
description='Stop',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me'
)
button.on_click(stop_action)
out.clear_output()
display(widgets.VBox([out,button]))
STOPSERIAL = False
def get_serial():
try:
with serial.Serial('COM6', 115200, timeout=1) as ser:
ser.reset_input_buffer()
global STOPSERIAL
while not STOPSERIAL:
data=ser.readline()
if (len(data)>0):
with out:
out.clear_output()
res=data.decode('ascii').rstrip()
if res=="Yes":
display(HTML("<p style='color:#00AA00';>YES</p>"))
else:
print(res)
with out:
out.clear_output()
print("Communication closed")
except Exception as inst:
with out:
out.clear_output()
print(inst)
t = threading.Thread(target=get_serial)
t.start()
Explanation: Testing
Below code is connecting to the Arduino board and displaying the output in a cell.
If you say Yes loudly enough (so not too far from the board) and if you don't have too much background noise in your room, then it should work.
You need to install the pyserial Python package.
End of explanation |
Description:
Project Euler
Step1: Then I created a new variable, sum_squares, which would hold the sum of the squares, and set it to zero. I looped through all the values in my list, squared them, then added them to sum_squares.
Step2: I then created another new variable, sum_100, which would hold the sum of all numbers below 100, and set it to zero. Again, I looped through all the values in my list and added them to sum_100.
Step3: I then defined square_sum, which was the the square of the sum_100 variable. This is the square of the sum of all values up to 100.
Step4: Finally, I created a variable, difference, and set it to the difference between square_sum and sum_squares. Printing difference displays the answer. | Python Code:
lst = range(101)
Explanation: Project Euler: Problem 6
https://projecteuler.net/problem=6
The sum of the squares of the first ten natural numbers is,
$$1^2 + 2^2 + ... + 10^2 = 385$$
The square of the sum of the first ten natural numbers is,
$$(1 + 2 + ... + 10)^2 = 55^2 = 3025$$
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
First, I created a list which holds all numbers up to 100.
End of explanation
sum_squares = 0
for i in lst:
sum_squares += i ** 2
Explanation: Then I created a new variable, sum_squares, which would hold the sum of the squares, and set it to zero. I looped through all the values in my list, squared them, then added them to sum_squares.
End of explanation
sum_100 = 0
for i in lst:
sum_100 += i
Explanation: I then created another new variable, sum_100, which would hold the sum of all numbers below 100, and set it to zero. Again, I looped through all the values in my list and added them to sum_100.
End of explanation
square_sum = sum_100 ** 2
Explanation: I then defined square_sum, which was the square of the sum_100 variable. This is the square of the sum of all values up to 100.
End of explanation
difference = square_sum - sum_squares
print(difference)
# This cell will be used for grading, leave it at the end of the notebook.
Explanation: Finally, I created a variable, difference, and set it to the difference between square_sum and sum_squares. Printing difference displays the answer.
End of explanation |
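The same answer can be cross-checked with the closed-form formulas for these sums, sum = n(n+1)/2 and sum of squares = n(n+1)(2n+1)/6:

```python
n = 100
sum_n = n * (n + 1) // 2                          # 5050
sum_of_squares = n * (n + 1) * (2 * n + 1) // 6   # 338350
print(sum_n ** 2 - sum_of_squares)                # -> 25164150
```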
Description:
Source of the materials
Step1: Note that the default settings on the NCBI BLAST website are not quite
the same as the defaults on QBLAST. If you get different results, you’ll
need to check the parameters (e.g., the expectation value threshold and
the gap values).
For example, if you have a nucleotide sequence you want to search
against the nucleotide database (nt) using BLASTN, and you know the GI
number of your query sequence, you can use
Step2: Alternatively, if we have our query sequence already in a FASTA
formatted file, we just need to open the file and read in this record as
a string, and use that as the query argument
Step3: We could also have read in the FASTA file as a SeqRecord and then
supplied just the sequence itself
Step4: Supplying just the sequence means that BLAST will assign an identifier
for your sequence automatically. You might prefer to use the SeqRecord
object’s format method to make a FASTA string (which will include the
existing identifier)
Step5: This approach makes more sense if you have your sequence(s) in a
non-FASTA file format which you can extract using Bio.SeqIO (see
Chapter 5 - Sequence Input and Output.)
Whatever arguments you give the qblast() function, you should get back
your results in a handle object (by default in XML format). The next
step would be to parse the XML output into Python objects representing
the search results (Section [sec
Step6: After doing this, the results are in the file my_blast.xml and the
original handle has had all its data extracted (so we closed it).
However, the parse function of the BLAST parser (described
in [sec
Step7: Now that we’ve got the BLAST results back into a handle again, we are
ready to do something with them, so this leads us right into the parsing
section (see Section [sec
Step8: In this example there shouldn’t be any output from BLASTX to the
terminal, so stdout and stderr should be empty. You may want to check
the output file opuntia.xml has been created.
As you may recall from earlier examples in the tutorial, the
opuntia.fasta contains seven sequences, so the BLAST XML output should
contain multiple results. Therefore use Bio.Blast.NCBIXML.parse() to
parse it as described below in Section [sec
Step9: If instead you ran BLAST some other way, and have the BLAST output (in
XML format) in the file my_blast.xml, all you need to do is to open
the file for reading
Step10: Now that we’ve got a handle, we are ready to parse the output. The code
to parse it is really quite small. If you expect a single BLAST result
(i.e., you used a single query)
Step11: or, if you have lots of results (i.e., multiple query sequences)
Step12: Just like Bio.SeqIO and Bio.AlignIO (see
Chapters [chapter
Step13: Or, you can use a for-loop. Note though that you can step through the BLAST records only once. Usually, from each BLAST record you would save the information that you are interested in. If you want to save all returned BLAST records, you can convert the iterator into a list
Step14: Now you can access each BLAST record in the list with an index as usual. If your BLAST file is huge though, you may run into memory problems trying to save them all in a list.
Usually, you’ll be running one BLAST search at a time. Then, all you need to do is to pick up the first (and only) BLAST record in blast_records
Step15: I guess by now you’re wondering what is in a BLAST record.
The BLAST record class
A BLAST Record contains everything you might ever want to extract from
the BLAST output. Right now we’ll just show an example of how to get
some info out of the BLAST report, but if you want something in
particular that is not described here, look at the info on the record
class in detail, and take a gander into the code or automatically
generated documentation – the docstrings have lots of good info about
what is stored in each piece of information.
To continue with our example, let’s just print out some summary info
about all hits in our blast report greater than a particular threshold.
The following code does this
Python Code:
from Bio.Blast import NCBIWWW
help(NCBIWWW.qblast)
Explanation: Source of the materials: Biopython cookbook (Adapted)
<font color='red'>
New status: Draft</font>
BLAST
Running BLAST over the Internet
Saving blast output
Running BLAST locally
Parsing BLAST output
The BLAST record class
Parsing plain-text BLAST output
Hey, everybody loves BLAST right? I mean, geez, how can it get any
easier to do comparisons between one of your sequences and every other
sequence in the known world? But, of course, this section isn’t about
how cool BLAST is, since we already know that. It is about the problem
with BLAST – it can be really difficult to deal with the volume of data
generated by large runs, and to automate BLAST runs in general.
Fortunately, the Biopython folks know this only too well, so they’ve
developed lots of tools for dealing with BLAST and making things much
easier. This section details how to use these tools and do useful things
with them.
Dealing with BLAST can be split up into two steps, both of which can be
done from within Biopython. Firstly, running BLAST for your query
sequence(s), and getting some output. Secondly, parsing the BLAST output
in Python for further analysis.
Your first introduction to running BLAST was probably via the NCBI
web-service. In fact, there are lots of ways you can run BLAST, which
can be categorised in several ways. The most important distinction is
running BLAST locally (on your own machine), and running BLAST remotely
(on another machine, typically the NCBI servers). We’re going to start
this chapter by invoking the NCBI online BLAST service from within a
Python script.
NOTE: The following Chapter [chapter:searchio] describes
Bio.SearchIO, an experimental module in Biopython. We intend this to
ultimately replace the older Bio.Blast module, as it provides a more
general framework handling other related sequence searching tools as
well. However, until that is declared stable, for production code please
continue to use the Bio.Blast module for dealing with NCBI BLAST.
Running BLAST over the Internet
We use the function qblast() in the Bio.Blast.NCBIWWW module to call
the online version of BLAST. This has three non-optional arguments:
The first argument is the blast program to use for the search, as a
lower case string. The options and descriptions of the programs are
available at
https://blast.ncbi.nlm.nih.gov/Blast.cgi. Currently
qblast only works with blastn, blastp, blastx, tblastn and tblastx.
The second argument specifies the databases to search against.
Again, the options for this are available on the NCBI web pages at
http://www.ncbi.nlm.nih.gov/BLAST/blast_databases.shtml.
The third argument is a string containing your query sequence. This
can either be the sequence itself, the sequence in fasta format, or
an identifier like a GI number.
The qblast function also take a number of other option arguments which
are basically analogous to the different parameters you can set on the
BLAST web page. We’ll just highlight a few of them here:
The argument url_base sets the base URL for running BLAST over the internet. By default it connects to the NCBI, but one can use this to connect to an instance of NCBI BLAST running in the cloud. Please refer to the documentation for the qblast function for further details.
The qblast function can return the BLAST results in various
formats, which you can choose with the optional format_type
keyword: "HTML", "Text", "ASN.1", or "XML". The default is
"XML", as that is the format expected by the parser, described in
section [sec:parsing-blast] below.
The argument expect sets the expectation or e-value threshold.
For more about the optional BLAST arguments, we refer you to the NCBI’s
own documentation, or that built into Biopython:
End of explanation
from Bio.Blast import NCBIWWW
result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")
Explanation: Note that the default settings on the NCBI BLAST website are not quite
the same as the defaults on QBLAST. If you get different results, you’ll
need to check the parameters (e.g., the expectation value threshold and
the gap values).
For example, if you have a nucleotide sequence you want to search
against the nucleotide database (nt) using BLASTN, and you know the GI
number of your query sequence, you can use:
End of explanation
from Bio.Blast import NCBIWWW
fasta_string = open("data/m_cold.fasta").read()
result_handle = NCBIWWW.qblast("blastn", "nt", fasta_string)
Explanation: Alternatively, if we have our query sequence already in a FASTA
formatted file, we just need to open the file and read in this record as
a string, and use that as the query argument:
End of explanation
from Bio.Blast import NCBIWWW
from Bio import SeqIO
record = SeqIO.read("data/m_cold.fasta", format="fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)
Explanation: We could also have read in the FASTA file as a SeqRecord and then
supplied just the sequence itself:
End of explanation
from Bio.Blast import NCBIWWW
from Bio import SeqIO
record = SeqIO.read("data/m_cold.fasta", format="fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", record.format("fasta"))
Explanation: Supplying just the sequence means that BLAST will assign an identifier
for your sequence automatically. You might prefer to use the SeqRecord
object’s format method to make a FASTA string (which will include the
existing identifier):
End of explanation
with open("data/my_blast.xml", "w") as out_handle:
    out_handle.write(result_handle.read())
result_handle.close()
Explanation: This approach makes more sense if you have your sequence(s) in a
non-FASTA file format which you can extract using Bio.SeqIO (see
Chapter 5 - Sequence Input and Output.)
Whatever arguments you give the qblast() function, you should get back
your results in a handle object (by default in XML format). The next
step would be to parse the XML output into Python objects representing
the search results (Section [sec:parsing-blast]), but you might want
to save a local copy of the output file first. I find this especially
useful when debugging my code that extracts info from the BLAST results
(because re-running the online search is slow and wastes the NCBI
computer time).
Saving blast output
We need to be a bit careful since we can use result_handle.read() to
read the BLAST output only once – calling result_handle.read() again
returns an empty string.
End of explanation
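The read-once behaviour is a property of any Python handle, not just of the handle returned by qblast(); it can be demonstrated with an in-memory handle from the standard library (a toy illustration):

```python
import io

# io.StringIO behaves like any other read handle:
handle = io.StringIO("<BlastOutput>...</BlastOutput>")
first = handle.read()    # consumes the whole stream
second = handle.read()   # a second read() returns an empty string
print(repr(second))      # ''
```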
result_handle = open("data/my_blast.xml")
Explanation: After doing this, the results are in the file my_blast.xml and the
original handle has had all its data extracted (so we closed it).
However, the parse function of the BLAST parser (described
in [sec:parsing-blast]) takes a file-handle-like object, so we can
just open the saved file for input:
End of explanation
from Bio.Blast.Applications import NcbiblastxCommandline
help(NcbiblastxCommandline)
blastx_cline = NcbiblastxCommandline(query="opuntia.fasta", db="nr", evalue=0.001,
                                     outfmt=5, out="opuntia.xml")
blastx_cline
print(blastx_cline)
# stdout, stderr = blastx_cline()
Explanation: Now that we’ve got the BLAST results back into a handle again, we are
ready to do something with them, so this leads us right into the parsing
section (see Section [sec:parsing-blast] below). You may want to jump
ahead to that now ….
Running BLAST locally
Introduction
Running BLAST locally (as opposed to over the internet, see
Section [sec:running-www-blast]) has at least major two advantages:
Local BLAST may be faster than BLAST over the internet;
Local BLAST allows you to make your own database to search for
sequences against.
Dealing with proprietary or unpublished sequence data can be another
reason to run BLAST locally. You may not be allowed to redistribute the
sequences, so submitting them to the NCBI as a BLAST query would not be
an option.
Unfortunately, there are some major drawbacks too – installing all the
bits and getting it setup right takes some effort:
Local BLAST requires command line tools to be installed.
Local BLAST requires (large) BLAST databases to be setup (and
potentially kept up to date).
To further confuse matters there are several different BLAST packages
available, and there are also other tools which can produce imitation
BLAST output files, such as BLAT.
Standalone NCBI BLAST+
The “new” NCBI
BLAST+
suite was released in 2009. This replaces the old NCBI “legacy” BLAST
package (see below).
This section will show briefly how to use these tools from within
Python. If you have already read or tried the alignment tool examples in
Section [sec:alignment-tools] this should all seem quite
straightforward. First, we construct a command line string (as you would
type in at the command line prompt if running standalone BLAST by hand).
Then we can execute this command from within Python.
For example, taking a FASTA file of gene nucleotide sequences, you might
want to run a BLASTX (translation) search against the non-redundant (NR)
protein database. Assuming you (or your systems administrator) has
downloaded and installed the NR database, you might run:
```
blastx -query opuntia.fasta -db nr -out opuntia.xml -evalue 0.001 -outfmt 5
```
This should run BLASTX against the NR database, using an expectation
cut-off value of $0.001$ and produce XML output to the specified file
(which we can then parse). On my computer this takes about six minutes -
a good reason to save the output to a file so you can repeat any
analysis as needed.
From within Biopython we can use the NCBI BLASTX wrapper from the
Bio.Blast.Applications module to build the command line string, and
run it:
End of explanation
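If you prefer not to use the Biopython wrapper, the same command line can be launched with the standard library's subprocess module (a sketch; it assumes the blastx executable is on your PATH and that a local nr database has been set up):

```python
import shutil
import subprocess

# The documented BLASTX command, as an argument list:
cmd = ["blastx", "-query", "opuntia.fasta", "-db", "nr",
       "-out", "opuntia.xml", "-evalue", "0.001", "-outfmt", "5"]

# Only run it if blastx is actually installed:
if shutil.which("blastx"):
    subprocess.run(cmd, check=True)
```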
from Bio.Blast import NCBIWWW
result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")
Explanation: In this example there shouldn’t be any output from BLASTX to the
terminal, so stdout and stderr should be empty. You may want to check
the output file opuntia.xml has been created.
As you may recall from earlier examples in the tutorial, the
opuntia.fasta contains seven sequences, so the BLAST XML output should
contain multiple results. Therefore use Bio.Blast.NCBIXML.parse() to
parse it as described below in Section [sec:parsing-blast].
Other versions of BLAST
NCBI BLAST+ (written in C++) was first released in 2009 as a replacement
for the original NCBI “legacy” BLAST (written in C) which is no longer
being updated. There were a lot of changes – the old version had a
single core command line tool blastall which covered multiple
different BLAST search types (which are now separate commands in
BLAST+), and all the command line options were renamed. Biopython’s
wrappers for the NCBI “legacy” BLAST tools have been deprecated and will
be removed in a future release. To try to avoid confusion, we do not
cover calling these old tools from Biopython in this tutorial.
You may also come across Washington University
BLAST (WU-BLAST), and its successor, Advanced
Biocomputing BLAST (AB-BLAST, released in
2009, not free/open source). These packages include the command line
tools wu-blastall and ab-blastall, which mimicked blastall from
the NCBI “legacy” BLAST suite. Biopython does not currently provide
wrappers for calling these tools, but should be able to parse any NCBI
compatible output from them.
Parsing BLAST output
As mentioned above, BLAST can generate output in various formats, such
as XML, HTML, and plain text. Originally, Biopython had parsers for
BLAST plain text and HTML output, as these were the only output formats
offered at the time. Unfortunately, the BLAST output in these formats
kept changing, each time breaking the Biopython parsers. Our HTML BLAST
parser has been removed, but the plain text BLAST parser is still
available (see Section [sec:parsing-blast-deprecated]). Use it at your
own risk, it may or may not work, depending on which BLAST version
you’re using.
As keeping up with changes in BLAST became a hopeless endeavor,
especially with users running different BLAST versions, we now recommend
to parse the output in XML format, which can be generated by recent
versions of BLAST. Not only is the XML output more stable than the plain
text and HTML output, it is also much easier to parse automatically,
making Biopython a whole lot more stable.
You can get BLAST output in XML format in various ways. For the parser,
it doesn’t matter how the output was generated, as long as it is in the
XML format.
You can use Biopython to run BLAST over the internet, as described
in section [sec:running-www-blast].
You can use Biopython to run BLAST locally, as described
in section [sec:running-local-blast].
You can do the BLAST search yourself on the NCBI site through your
web browser, and then save the results. You need to choose XML as
the format in which to receive the results, and save the final BLAST
page you get (you know, the one with all of the
interesting results!) to a file.
You can also run BLAST locally without using Biopython, and save the
output in a file. Again, you need to choose XML as the format in
which to receive the results.
The important point is that you do not have to use Biopython scripts to
fetch the data in order to be able to parse it. Doing things in one of
these ways, you then need to get a handle to the results. In Python, a
handle is just a nice general way of describing input to any info source
so that the info can be retrieved using read() and readline()
functions (see Section <span>sec:appendix-handles</span>).
If you followed the code above for interacting with BLAST through a
script, then you already have result_handle, the handle to the BLAST
results. For example, using a GI number to do an online search:
End of explanation
result_handle = open("data/my_blast.xml")
Explanation: If instead you ran BLAST some other way, and have the BLAST output (in
XML format) in the file my_blast.xml, all you need to do is to open
the file for reading:
End of explanation
from Bio.Blast import NCBIXML
blast_record = NCBIXML.read(result_handle)
Explanation: Now that we’ve got a handle, we are ready to parse the output. The code
to parse it is really quite small. If you expect a single BLAST result
(i.e., you used a single query):
End of explanation
from Bio.Blast import NCBIXML
blast_records = NCBIXML.parse(result_handle)
Explanation: or, if you have lots of results (i.e., multiple query sequences):
End of explanation
from Bio.Blast import NCBIXML
blast_records = NCBIXML.parse(result_handle)
blast_record = next(blast_records)
print(blast_record.database_sequences)
# ... do something with blast_record
Explanation: Just like Bio.SeqIO and Bio.AlignIO (see
Chapters [chapter:Bio.SeqIO] and [chapter:Bio.AlignIO]), we have a
pair of input functions, read and parse, where read is for when
you have exactly one object, and parse is an iterator for when you can
have lots of objects – but instead of getting SeqRecord or
MultipleSeqAlignment objects, we get BLAST record objects.
To be able to handle the situation where the BLAST file may be huge,
containing thousands of results, NCBIXML.parse() returns an iterator.
In plain English, an iterator allows you to step through the BLAST
output, retrieving BLAST records one by one for each BLAST search
result:
End of explanation
for blast_record in blast_records:
    pass  # do something with each blast_record here

# Alternatively, save all returned records by converting the iterator into a list:
blast_records = list(blast_records)
Explanation: Or, you can use a for-loop. Note though that you can step through the BLAST records only once. Usually, from each BLAST record you would save the information that you are interested in. If you want to save all returned BLAST records, you can convert the iterator into a list:
End of explanation
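The single-pass behaviour of the parser is simply how Python iterators work, which a plain-list iterator demonstrates (a toy illustration, no Biopython needed):

```python
# An iterator can be stepped through only once:
records = iter(["blast_record_1", "blast_record_2"])
saved = [r for r in records]   # first pass consumes everything
leftover = list(records)       # second pass yields nothing
print(saved, leftover)         # ['blast_record_1', 'blast_record_2'] []
```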
from Bio.Blast import NCBIXML
blast_records = NCBIXML.parse(result_handle)
Explanation: Now you can access each BLAST record in the list with an index as usual. If your BLAST file is huge though, you may run into memory problems trying to save them all in a list.
Usually, you’ll be running one BLAST search at a time. Then, all you need to do is to pick up the first (and only) BLAST record in blast_records:
End of explanation
E_VALUE_THRESH = 0.04
from Bio.Blast import NCBIXML
result_handle = open("data/my_blast.xml", "r")
blast_records = NCBIXML.parse(result_handle)
blast_record = next(blast_records)
for alignment in blast_record.alignments:
    for hsp in alignment.hsps:
        if hsp.expect < E_VALUE_THRESH:
            print("****Alignment****")
            print("sequence:", alignment.title)
            print("length:", alignment.length)
            print("e value:", hsp.expect)
            print(hsp.query[0:75] + "...")
            print(hsp.match[0:75] + "...")
            print(hsp.sbjct[0:75] + "...")
Explanation: I guess by now you’re wondering what is in a BLAST record.
The BLAST record class
A BLAST Record contains everything you might ever want to extract from
the BLAST output. Right now we’ll just show an example of how to get
some info out of the BLAST report, but if you want something in
particular that is not described here, look at the info on the record
class in detail, and take a gander into the code or automatically
generated documentation – the docstrings have lots of good info about
what is stored in each piece of information.
To continue with our example, let’s just print out some summary info
about all hits in our blast report greater than a particular threshold.
The following code does this:
End of explanation
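Because the results are plain XML, the same E-value filtering can be sketched with the standard library alone. The element names below follow the NCBI BLAST XML DTD, but the snippet itself is a hand-made toy fragment, not real BLAST output:

```python
import xml.etree.ElementTree as ET

xml_fragment = """<Iteration>
  <Iteration_hits>
    <Hit><Hit_def>hit A</Hit_def><Hit_hsps><Hsp><Hsp_evalue>1e-10</Hsp_evalue></Hsp></Hit_hsps></Hit>
    <Hit><Hit_def>hit B</Hit_def><Hit_hsps><Hsp><Hsp_evalue>0.5</Hsp_evalue></Hsp></Hit_hsps></Hit>
  </Iteration_hits>
</Iteration>"""

E_VALUE_THRESH = 0.04
root = ET.fromstring(xml_fragment)
significant = [hit.findtext("Hit_def")
               for hit in root.iter("Hit")
               if float(hit.find(".//Hsp_evalue").text) < E_VALUE_THRESH]
print(significant)  # ['hit A']
```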
Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Initialization of setup
Step2: 2. Elemental Mass and Stiffness matrices
The mass and the stiffness matrices are calculated prior to time extrapolation, so they are pre-computed and stored at the beginning of the code.
The integrals defined in the mass and stiffness matrices are computed using a numerical quadrature, in this case the GLL quadrature, which uses the GLL points and their corresponding weights to approximate the integrals. Hence,
\begin{equation}
M_{ij}^k=\int_{-1}^1 \ell_i^k(\xi) \ell_j^k(\xi) \ J \ d\xi = \sum_{m=1}^{N_p} w_m \ \ell_i^k (x_m) \ell_j^k(x_m)\ J =\sum_{m=1}^{N_p} w_m \delta_{im}\ \delta_{jm} \ J= \begin{cases} w_i \ J \ \ \text{ if } i=j \ 0 \ \ \ \ \ \ \ \text{ if } i \neq j\end{cases}
\end{equation}
that is, a diagonal mass matrix! Subsequently, the stiffness matrix is given as
\begin{equation}
K_{ij}^k = \int_{-1}^1 \ell_i^k(\xi) \cdot \partial_x \ell_j^k(\xi) \ d\xi = \sum_{m=1}^{N_p} w_m \ \ell_i^k(x_m)\cdot \partial_x \ell_j^k(x_m) = \sum_{m=1}^{N_p} w_m \delta_{im}\cdot \partial_x\ell_j^k(x_m) = w_i \cdot \partial_x \ell_j^k(x_i)
\end{equation}
The Lagrange polynomials and their properties have already been used; they determine the integration weights $w_i$ that are returned by the python method "gll". Additionally, the first derivatives of this basis, $\partial_x \ell_j^k(x_i)$, are needed; the python method "Lagrange1st" returns them.
Exercise 1
Now we have all the ingredients to calculate the mass and stiffness matrices; initialize them in the following cell. Compute the inverse mass matrix, keeping in mind that it is diagonal.
Step3: 3. Flux Matrices
As in the case of finite volumes, when we solve the 1D elastic wave equation for a homogeneous medium we assume the coefficients of matrix A to be constant inside the element.
\begin{equation}
\mathbf{A}=
\begin{pmatrix}
0 & -\mu \
-1/\rho & 0
\end{pmatrix}
\end{equation}
Now we need to diagonalize $\mathbf{A}$. Introducing the seismic impedance $Z = \rho c$ with $c = \sqrt{\mu/\rho}$, we have
\begin{equation}
\mathbf{A} = \mathbf{R}\mathbf{\Lambda}\mathbf{R}^{-1}
\qquad\text{,}\qquad
\mathbf{\Lambda}=
\begin{pmatrix}
-c & 0 \
0 & c
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{R} =
\begin{pmatrix}
Z & -Z \
1 & 1
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{R}^{-1} = \frac{1}{2Z}
\begin{pmatrix}
1 & Z \
-1 & Z
\end{pmatrix}
\end{equation}
We decompose the solution into a left propagating part with eigenvalues $\mathbf{\Lambda}^{-}$ and a right propagating part with eigenvalues $\mathbf{\Lambda}^{+}$, where
\begin{equation}
\mathbf{\Lambda}^{-}=
\begin{pmatrix}
-c & 0 \
0 & 0
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{\Lambda}^{+}=
\begin{pmatrix}
0 & 0 \
0 & c
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{A}^{\pm} = \mathbf{R}\mathbf{\Lambda}^{\pm}\mathbf{R}^{-1}
\end{equation}
This strategy allows us to formulate the Flux term in the discontinuous Galerkin method.
Exercise 2
Initialize all relevant matrices, i.e., $R$, $R^{-1}$, $\mathbf{\Lambda}^{+}$, $\mathbf{\Lambda}^{-}$, $\mathbf{A}^{+}$, $\mathbf{A}^{-}$ and $\mathbf{A}$.
Step4: 4. Discontinuous Galerkin Solution
The principal characteristic of the discontinuous Galerkin method is the communication between element neighbors using a flux term; in general it is given by
\begin{equation}
\mathbf{Flux} = \int_{\partial D_k} \mathbf{A}\mathbf{Q}\ell_j(\xi)\mathbf{n}d\xi
\end{equation}
this term leads to four flux contributions for left and right sides of the elements
\begin{equation}
\mathbf{Flux} = -\mathbf{A}_{k}^{-}\mathbf{Q}_{l}^{k}\mathbf{F}^{l} + \mathbf{A}_{k}^{+}\mathbf{Q}_{r}^{k}\mathbf{F}^{r} - \mathbf{A}_{k}^{+}\mathbf{Q}_{r}^{k-1}\mathbf{F}^{l} + \mathbf{A}_{k}^{-}\mathbf{Q}_{l}^{k+1}\mathbf{F}^{r}
\end{equation}
Last but not least, we have to solve the semi-discrete scheme derived above using an appropriate time extrapolation; in the code below we implemented two different time extrapolation schemes, Euler and Runge-Kutta (selected with the imethod parameter).
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
from gll import gll
from lagrange1st import lagrange1st
from flux_homo import flux
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Discontinuous Galerkin Method - 1D Elastic Wave Equation, Homogeneous case</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
The source-free elastic wave equation in 1D reads
\begin{align}
\partial_t \sigma - \mu \partial_x v & = 0 \
\partial_t v - \frac{1}{\rho} \partial_x \sigma & = 0
\end{align}
with $\rho$ the density and $\mu$ the shear modulus. This equation in matrix-vector notation follows
\begin{equation}
\partial_t \mathbf{Q} + \mathbf{A} \partial_x \mathbf{Q} = 0
\end{equation}
where $\mathbf{Q} = (\sigma, v)$ is the vector of unknowns and the matrix $\mathbf{A}$ contains the parameters $\rho$ and $\mu$. We seek to solve the linear advection equation as a hyperbolic equation $ \partial_t u + \mu \ \partial_x u=0$. A series of steps need to be done:
1) The weak form of the equation is derived by multiplying both sides by an arbitrary test function.
2) Apply the stress Free Boundary Condition after integration by parts
3) We approximate the unknown field $\mathbf{Q}(x,t)$ by a sum over space-dependent basis functions $\ell_i$ weighted by time-dependent coefficients $\mathbf{Q}(x_i,t)$, as we did in the spectral elements method. As interpolating functions we choose the Lagrange polynomials and use $\xi$ as the space variable representing the elemental domain:
\begin{equation}
\mathbf{Q}(\xi,t) \ = \ \sum_{i=1}^{N_p} \mathbf{Q}(\xi_i,t) \ell_i(\xi) \qquad with \qquad \ell_i^{(N)} (\xi) \ := \ \prod_{j = 1, \ j \neq i}^{N+1} \frac{\xi - \xi_j}{\xi_i-\xi_j}, \quad i,j = 1, 2, \dotsc , N + 1
\end{equation}
4) The continuous weak form is written as a system of linear equations by considering the approximated displacement field. Finally, the semi-discrete scheme can be written in matrix-vector form as
\begin{equation}
\mathbf{M}\partial_t \mathbf{Q} = \mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux}
\end{equation}
5) Time extrapolation is done after applying a standard 1st order finite-difference approximation to the time derivative, we call it the Euler scheme.
\begin{equation}
\mathbf{Q}^{t+1} \approx \mathbf{Q}^{t} + dt\mathbf{M}^{-1}(\mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux})
\end{equation}
This notebook implements both Euler and Runge-Kutta schemes for solving the free source version of the elastic wave equation in a homogeneous media. To keep the problem simple, we use as spatial initial condition a Gauss function with half-width $\sigma$
\begin{equation}
Q(x,t=0) = e^{-1/\sigma^2 (x - x_{o})^2}
\end{equation}
End of explanation
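Before applying it to the DG system, the Euler update $Q \leftarrow Q + dt\,\mathrm{rhs}(Q)$ can be sanity-checked on the scalar test problem $\partial_t Q = -Q$ (a toy sketch, not the elastic solver itself):

```python
import numpy as np

# Euler time stepping for dQ/dt = -Q, Q(0) = 1; the exact solution is exp(-t)
dt, nt = 0.001, 1000
Q = 1.0
for _ in range(nt):
    Q = Q + dt * (-Q)

error = abs(Q - np.exp(-1.0))  # first-order scheme: error ~ O(dt)
```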
# Initialization of setup
# --------------------------------------------------------------------------
c = 2500 # acoustic velocity [m/s]
tmax = 2.0 # Length of seismogram [s]
xmax = 10000 # Length of domain [m]
vs = 2500 # Advection velocity
rho = 2500 # Density [kg/m^3]
mu = rho*vs**2 # shear modulus
N = 4 # Order of Lagrange polynomials
ne = 200 # Number of elements
sig = 200 # Gaussian width
x0 = 5000 # x location of Gaussian
eps = 0.4 # Courant criterion
iplot = 20 # Plotting frequency
imethod = 'RK' # 'Euler', 'RK'
#--------------------------------------------------------------------
# GLL points and integration weights
[xi,w] = gll(N) # xi, N+1 coordinates [-1 1] of GLL points
# w Integration weights at GLL locations
# Space domain
le = xmax/ne # Length of elements
ng = ne*N + 1
# Vector with GLL points
k = 0
xg = np.zeros((N+1)*ne)
for i in range(0, ne):
    for j in range(0, N+1):
        k += 1
        xg[k-1] = i*le + .5*(xi[j]+1)*le
x = np.reshape(xg, (N+1, ne), order='F').T
# Calculation of time step acoording to Courant criterion
dxmin = np.min(np.diff(xg[1:N+1]))
dt = eps*dxmin/vs # Global time step
nt = int(np.floor(tmax/dt))
# Mapping - Jacobian
J = le/2 # Jacobian
Ji = 1/J # Inverse Jacobian
# 1st derivative of Lagrange polynomials
l1d = lagrange1st(N)
Explanation: 1. Initialization of setup
End of explanation
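As a quick check of the quadrature machinery: for N = 2 the GLL points and weights are known in closed form (points ±1, 0 with weights 1/3, 4/3, 1/3), and the rule must integrate polynomials up to degree 2N−1 = 3 exactly on [−1, 1] (a sketch independent of the local gll module):

```python
import numpy as np

xi_ref = np.array([-1.0, 0.0, 1.0])      # GLL points for N = 2
w_ref = np.array([1.0, 4.0, 1.0]) / 3.0  # corresponding weights

const_integral = w_ref.sum()              # should equal int_{-1}^{1} 1 dx = 2
x2_integral = np.sum(w_ref * xi_ref**2)   # should equal int_{-1}^{1} x^2 dx = 2/3
x3_integral = np.sum(w_ref * xi_ref**3)   # odd integrand: should vanish
```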
# Initialization of system matrices
# -----------------------------------------------------------------
# Elemental Mass matrix
M = np.zeros((N+1, N+1))
for i in range(0, N+1):
    M[i, i] = w[i] * J
# Inverse matrix of M (M is diagonal!)
Minv = np.identity(N+1)
for i in range(0, N+1):
    Minv[i, i] = 1. / M[i, i]
# Elemental Stiffness Matrix
K = np.zeros((N+1, N+1))
for i in range(0, N+1):
    for j in range(0, N+1):
        K[i, j] = w[j] * l1d[i, j]  # NxN matrix for every element
Explanation: 2. Elemental Mass and Stiffness matrices
The mass and the stiffness matrices are calculated prior to time extrapolation, so they are pre-computed and stored at the beginning of the code.
The integrals defined in the mass and stiffness matrices are computed using a numerical quadrature, in this case the GLL quadrature, which uses the GLL points and their corresponding weights to approximate the integrals. Hence,
\begin{equation}
M_{ij}^k=\int_{-1}^1 \ell_i^k(\xi) \ell_j^k(\xi) \ J \ d\xi = \sum_{m=1}^{N_p} w_m \ \ell_i^k (x_m) \ell_j^k(x_m)\ J =\sum_{m=1}^{N_p} w_m \delta_{im}\ \delta_{jm} \ J= \begin{cases} w_i \ J \ \ \text{ if } i=j \ 0 \ \ \ \ \ \ \ \text{ if } i \neq j\end{cases}
\end{equation}
that is, a diagonal mass matrix! Subsequently, the stiffness matrix is given as
\begin{equation}
K_{ij}^k = \int_{-1}^1 \ell_i^k(\xi) \cdot \partial_x \ell_j^k(\xi) \ d\xi = \sum_{m=1}^{N_p} w_m \ \ell_i^k(x_m)\cdot \partial_x \ell_j^k(x_m) = \sum_{m=1}^{N_p} w_m \delta_{im}\cdot \partial_x\ell_j^k(x_m) = w_i \cdot \partial_x \ell_j^k(x_i)
\end{equation}
The Lagrange polynomials and their properties have already been used; they determine the integration weights $w_i$ that are returned by the python method "gll". Additionally, the first derivatives of this basis, $\partial_x \ell_j^k(x_i)$, are needed; the python method "Lagrange1st" returns them.
Exercise 1
Now we have all the ingredients to calculate the mass and stiffness matrices; initialize them in the following cell. Compute the inverse mass matrix, keeping in mind that it is diagonal.
End of explanation
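A minimal numerical check of the statement that M is diagonal with entries $w_i J$, using the closed-form GLL weights for N = 2 (1/3, 4/3, 1/3) and J = 1 (a sketch; the notebook's own matrices use gll(N) and the element Jacobian instead):

```python
import numpy as np

w = np.array([1.0, 4.0, 1.0]) / 3.0  # GLL weights for N = 2
J = 1.0

M = np.diag(w * J)                   # M_ij = w_i * J * delta_ij
Minv = np.diag(1.0 / (w * J))        # inverse of a diagonal matrix is elementwise

identity_check = M @ Minv
```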
# Inialize Flux relates matrices
# ---------------------------------------------------------------
Z = rho*vs
R = np.array([[Z, -Z], [1, 1]])
Rinv = np.linalg.inv(R)
Lm = np.array([[-c, 0], [0, 0]])
Lp = np.array([[0, 0] , [0, c]])
Ap = R @ Lp @ Rinv
Am = R @ Lm @ Rinv
A = np.array([[0, -mu], [-1/rho, 0]])
Explanation: 3. Flux Matrices
As in the finite-volume case, when we solve the 1D elastic wave equation for a homogeneous medium we assume the coefficients of matrix A to be constant inside each element.
\begin{equation}
\mathbf{A}=
\begin{pmatrix}
0 & -\mu \\
-1/\rho & 0
\end{pmatrix}
\end{equation}
Now we need to diagonalize $\mathbf{A}$. Introducing the seismic impedance $Z = \rho c$ with $c = \sqrt{\mu/\rho}$, we have
\begin{equation}
\mathbf{A} = \mathbf{R}\mathbf{\Lambda}\mathbf{R}^{-1}
\qquad\text{,}\qquad
\mathbf{\Lambda}=
\begin{pmatrix}
-c & 0 \\
0 & c
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{R} =
\begin{pmatrix}
Z & -Z \\
1 & 1
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{R}^{-1} = \frac{1}{2Z}
\begin{pmatrix}
1 & Z \\
-1 & Z
\end{pmatrix}
\end{equation}
We decompose the solution into a left-propagating part with eigenvalues $\mathbf{\Lambda}^{-}$ and a right-propagating part with eigenvalues $\mathbf{\Lambda}^{+}$, where
\begin{equation}
\mathbf{\Lambda}^{-}=
\begin{pmatrix}
-c & 0 \\
0 & 0
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{\Lambda}^{+}=
\begin{pmatrix}
0 & 0 \\
0 & c
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{A}^{\pm} = \mathbf{R}\mathbf{\Lambda}^{\pm}\mathbf{R}^{-1}
\end{equation}
This strategy allows us to formulate the Flux term in the discontinuous Galerkin method.
Exercise 2
Initialize all relevant matrices, i.e. $R$, $R^{-1}$, $\mathbf{\Lambda}^{+}$, $\mathbf{\Lambda}^{-}$, $\mathbf{A}^{+}$, $\mathbf{A}^{-}$ and $\mathbf{A}$.
End of explanation
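With sample medium parameters (arbitrary values chosen only for this check, not the notebook's setup), the decomposition can be verified numerically: the columns of R should be eigenvectors of A with eigenvalues -c and +c, and the split matrices must recombine to A:

```python
import numpy as np

rho, vs = 2500.0, 3000.0        # arbitrary test medium
mu = rho * vs**2
c, Z = vs, rho * vs             # shear velocity and seismic impedance

A = np.array([[0.0, -mu], [-1.0 / rho, 0.0]])
R = np.array([[Z, -Z], [1.0, 1.0]])
Rinv = np.linalg.inv(R)
Lm = np.diag([-c, 0.0])         # left-going (negative) eigenvalue part
Lp = np.diag([0.0, c])          # right-going (positive) eigenvalue part
Ap, Am = R @ Lp @ Rinv, R @ Lm @ Rinv
```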
# DG Solution, Time extrapolation
# ---------------------------------------------------------------
# Initialize solution vectors
Q = np.zeros([ne, N+1, 2])
#Qa = np.zeros([ne, N+1, 2])
Qnew = np.zeros([ne, N+1, 2])
k1 = np.zeros([ne, N+1, 2])
k2 = np.zeros([ne, N+1, 2])
Q[:,:,0] = np.exp(-1/sig**2*((x-x0))**2)
Qs = np.zeros(xg.size) # for plotting
Qv = np.zeros(xg.size) # for plotting
Qa = np.zeros((2, xg.size)) # for analytical solution
# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(10,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
line1 = ax1.plot(xg, Qs, 'k', xg, Qa[0,:], 'r--', lw=1.5)
line2 = ax2.plot(xg, Qv, 'k', xg, Qa[1,:], 'r--', lw=1.5)
ax1.set_ylabel('Stress')
ax2.set_ylabel('Velocity')
ax2.set_xlabel(' x ')
plt.suptitle('Homogeneous Disc. Galerkin - %s method'%imethod, size=16)
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for it in range(nt):
if imethod == 'Euler': # Euler
# Calculate Fluxes
Flux = flux(Q, N, ne, Ap, Am)
# Extrapolate each element using flux F
for i in range(1,ne-1):
Qnew[i,:,0] = dt * Minv @ (-mu * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T
Qnew[i,:,1] = dt * Minv @ (-1/rho * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T
elif imethod == 'RK':
# Calculate Fluxes
Flux = flux(Q, N, ne, Ap, Am)
# Extrapolate each element using flux F
for i in range(1,ne-1):
k1[i,:,0] = Minv @ (-mu * K @ Q[i,:,1].T - Flux[i,:,0].T)
k1[i,:,1] = Minv @ (-1/rho * K @ Q[i,:,0].T - Flux[i,:,1].T)
for i in range(1,ne-1):
Qnew[i,:,0] = dt * Minv @ (-mu * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T
Qnew[i,:,1] = dt * Minv @ (-1/rho * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T
Flux = flux(Qnew,N,ne,Ap,Am)
for i in range(1,ne-1):
k2[i,:,0] = Minv @ (-mu * K @ Qnew[i,:,1].T - Flux[i,:,0].T)
k2[i,:,1] = Minv @ (-1/rho * K @ Qnew[i,:,0].T - Flux[i,:,1].T)
# Extrapolate
Qnew = Q + (dt/2) * (k1 + k2)
else:
raise NotImplementedError
Q, Qnew = Qnew, Q
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
# stretch for plotting
k = 0
for i in range(ne):
for j in range(N+1):
Qs[k] = Q[i,j,0]
Qv[k] = Q[i,j,1]
k = k + 1
# --------------------------------------
# Analytical solution (stress i.c.)
Qa[0,:] = 1./2.*(np.exp(-1./sig**2 * (xg-x0 + c*it*dt)**2)\
+ np.exp(-1./sig**2 * (xg-x0-c*it*dt)**2))
Qa[1,:] = 1/(2*Z)*(np.exp(-1./sig**2 * (xg-x0+c*it*dt)**2)\
- np.exp(-1./sig**2 * (xg-x0-c*it*dt)**2))
# --------------------------------------
# Display lines
line1 = ax1.plot(xg, Qs, 'k', xg, Qa[0,:], 'r--', lw=1.5)
line2 = ax2.plot(xg, Qv, 'k', xg, Qa[1,:], 'r--', lw=1.5)
plt.legend(iter(line2), ('D. Galerkin', 'Analytic'))
plt.gcf().canvas.draw()
Explanation: 4. Discontinuous Galerkin Solution
The principal characteristic of the discontinuous Galerkin method is the communication between element neighbours through a flux term; in general it is given by
\begin{equation}
\mathbf{Flux} = \int_{\partial D_k} \mathbf{A}\mathbf{Q}\ell_j(\xi)\mathbf{n}d\xi
\end{equation}
This term leads to four flux contributions, for the left and right sides of the element:
\begin{equation}
\mathbf{Flux} = -\mathbf{A}_{k}^{-}\mathbf{Q}_{l}^{k}\mathbf{F}^{l} + \mathbf{A}_{k}^{+}\mathbf{Q}_{r}^{k}\mathbf{F}^{r} - \mathbf{A}_{k}^{+}\mathbf{Q}_{r}^{k-1}\mathbf{F}^{l} + \mathbf{A}_{k}^{-}\mathbf{Q}_{l}^{k+1}\mathbf{F}^{r}
\end{equation}
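The `flux` helper called by the extrapolation code below is not shown in this excerpt; a minimal sketch consistent with this upwind splitting could look as follows. The sign convention and the zero treatment of the two outermost elements are assumptions for illustration; the original helper may differ in detail:

```python
import numpy as np

def flux_sketch(Q, N, ne, Ap, Am):
    """Upwind flux for the interior elements; Q has shape (ne, N+1, 2)."""
    FL = np.zeros((ne, N + 1, 2))
    for k in range(1, ne - 1):
        # left boundary: left-going part of element k plus
        # right-going part of the left neighbour k-1
        FL[k, 0, :] = -(Am @ Q[k, 0, :] + Ap @ Q[k - 1, N, :])
        # right boundary: right-going part of element k plus
        # left-going part of the right neighbour k+1
        FL[k, N, :] = Ap @ Q[k, N, :] + Am @ Q[k + 1, 0, :]
    return FL
```

For a spatially constant state the two boundary contributions of each interior element cancel: the flux entering on the left equals minus the flux leaving on the right.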
Last but not least, we have to solve the semi-discrete scheme derived above with an appropriate time extrapolation. In the code below we implement two different time extrapolation schemes:
1) Euler scheme
\begin{equation}
\mathbf{Q}^{t+1} \approx \mathbf{Q}^{t} + dt\mathbf{M}^{-1}(\mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux})
\end{equation}
2) Second-order Runge-Kutta method (also called predictor-corrector scheme)
\begin{eqnarray}
k_1 &=& f(t_i, y_i) \\
k_2 &=& f(t_i + dt, y_i + dt k_1) \\
& & \\
y_{i+1} &=& y_i + \frac{dt}{2} (k_1 + k_2)
\end{eqnarray}
where $f$ corresponds to $\mathbf{M}^{-1}(\mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux})$.
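As a standalone illustration of this predictor-corrector idea (on a scalar test ODE, not the DG system), RK2 reproduces exp(-t) to second-order accuracy:

```python
import numpy as np

f = lambda t, y: -y                # test problem dy/dt = -y, exact solution exp(-t)
dt, t, y = 0.01, 0.0, 1.0
for _ in range(100):               # integrate up to t = 1
    k1 = f(t, y)                   # predictor slope
    k2 = f(t + dt, y + dt * k1)    # corrector slope at the predicted point
    y += 0.5 * dt * (k1 + k2)
    t += dt
err = abs(y - np.exp(-1.0))
```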
Exercise 3
In order to validate our numerical solution, we compare it with the corresponding analytical solution, as we did in the finite-volume implementation. Implement the analytical solution for the 1D elastic wave equation in a homogeneous medium and plot it together with the discontinuous Galerkin numerical solution.
End of explanation |
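A quick consistency check on the d'Alembert-type analytical solution coded above: at t = 0 the stress must reduce to the initial Gaussian and the velocity must vanish, and for t > 0 the peak amplitude drops because each travelling pulse carries half the initial amplitude. The parameter values below are arbitrary stand-ins for the notebook's setup:

```python
import numpy as np

xg = np.linspace(0.0, 10.0, 201)
x0, sig, c, Z = 5.0, 0.5, 3.0, 7.5   # hypothetical values for this check

def analytic(t):
    left = np.exp(-1.0 / sig**2 * (xg - x0 + c * t)**2)    # left-going pulse
    right = np.exp(-1.0 / sig**2 * (xg - x0 - c * t)**2)   # right-going pulse
    stress = 0.5 * (left + right)
    velocity = (left - right) / (2.0 * Z)
    return stress, velocity

s0, v0 = analytic(0.0)
```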
8,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Constants
Step1: Toy Dataset ("Gaussian blobs")
Step2: Typical Model Specification for Logistic Regression
Step3: Alternative Specification
An alternative specification isolates the positive and negative samples and explicitly requires an input for each class. The optimized loss is then the sum of the per-class binary cross-entropy losses. Since the corresponding target labels are fixed to all ones or all zeros, each per-class binary cross-entropy reduces to one of the two complementary terms, and averaging them recovers the usual binary cross-entropy loss over all samples with their corresponding labels.
This trick is crucial for many model specifications in keras-adversarial.
Step4: The loss that actually gets optimized is the first value above, the sum of the subsequent values. The mean of them is the required binary cross-entropy loss. | Python Code:
n_samples = 100
n_features = 2
n_classes = 2
seed = 42
rng = np.random.RandomState(seed)
Explanation: Constants
End of explanation
x_test, y_test = make_blobs(n_samples=n_samples, centers=n_classes, random_state=rng)
# class labels are balanced
np.sum(y_test)
fig, ax = plt.subplots(figsize=(7, 5))
cb = ax.scatter(*x_test.T, c=y_test, cmap='coolwarm')
fig.colorbar(cb, ax=ax)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
plt.show()
Explanation: Toy Dataset ("Gaussian blobs")
End of explanation
classifier = Sequential([
Dense(16, input_dim=n_features, activation='relu'),
Dense(32, activation='relu'),
Dense(32, activation='relu'),
Dense(1, activation='sigmoid')
])
classifier.compile(optimizer='rmsprop', loss='binary_crossentropy')
loss = classifier.evaluate(x_test, y_test)
loss
Explanation: Typical Model Specification for Logistic Regression
End of explanation
pos = Input(shape=(n_features,))
neg = Input(shape=(n_features,))
# make use of the classifier defined earlier
y_pred_pos = classifier(pos)
y_pred_neg = classifier(neg)
# define a multi-input, multi-output model
classifier_alt = Model([pos, neg], [y_pred_pos, y_pred_neg])
classifier_alt.compile(optimizer='rmsprop', loss='binary_crossentropy')
losses = classifier_alt.evaluate(
[
x_test[y_test == 1],
x_test[y_test == 0]
],
[
np.ones(n_samples // 2),
np.zeros(n_samples // 2)
]
)
losses
Explanation: Alternative Specification
An alternative specification isolates the positive and negative samples and explicitly requires an input for each class. The optimized loss is then the sum of the per-class binary cross-entropy losses. Since the corresponding target labels are fixed to all ones or all zeros, each per-class binary cross-entropy reduces to one of the two complementary terms, and averaging them recovers the usual binary cross-entropy loss over all samples with their corresponding labels.
This trick is crucial for many model specifications in keras-adversarial.
End of explanation
.5 * losses[0]
# alternatively
np.mean(losses[1:])
np.allclose(loss, np.mean(losses[1:]))
Explanation: The loss that actually gets optimized is the first value above, i.e. the sum of the subsequent values; their mean is the required binary cross-entropy loss.
End of explanation |
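The equivalence claimed above can be checked with plain NumPy, independently of Keras: for balanced classes, the mean of the two per-class binary cross-entropies equals the binary cross-entropy on the pooled samples.

```python
import numpy as np

def bce(y_true, y_pred):
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

rng = np.random.RandomState(0)
p_pos = rng.uniform(0.05, 0.95, 50)   # model outputs on the positive samples
p_neg = rng.uniform(0.05, 0.95, 50)   # model outputs on the negative samples

pooled = bce(np.concatenate([np.ones(50), np.zeros(50)]),
             np.concatenate([p_pos, p_neg]))
split_mean = 0.5 * (bce(np.ones(50), p_pos) + bce(np.zeros(50), p_neg))
```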
8,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First Trading Algorithm
Pairs Trading
Pairs trading is a strategy that uses two stocks that are highly correlated. We can then use the difference in price between the two stocks as a signal if one moves out of correlation with the other. It is an older strategy that is used classically as a guide to beginning algorithmic trading. There is a fantastic full guide and write-up on Investopedia you can find here! I highly recommend reading the article in full before continuing; it is entertaining and informative!
Let's create our first basic trading algorithm! This is an exercise in using quantopian, NOT a realistic representation of what a good algorithm is! Never use something as simple as this in the real world! This is an extremely simplified version of Pairs Trading, we won't be considering factors such as cointegration!
Step1: United Airlines and American Airlines
Step2: Spread and Correlation
Step3: Normalizing with a z-score
Step4: Rolling Z-Score
Our spread is currently American-United. Let's decide how to calculate this on a rolling basis for our use in Quantopian
Step6: Implementation of Strategy
WARNING | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import quandl
Explanation: First Trading Algorithm
Pairs Trading
Pairs trading is a strategy that uses two stocks that are highly correlated. We can then use the difference in price between the two stocks as a signal if one moves out of correlation with the other. It is an older strategy that is used classically as a guide to beginning algorithmic trading. There is a fantastic full guide and write-up on Investopedia you can find here! I highly recommend reading the article in full before continuing; it is entertaining and informative!
Let's create our first basic trading algorithm! This is an exercise in using quantopian, NOT a realistic representation of what a good algorithm is! Never use something as simple as this in the real world! This is an extremely simplified version of Pairs Trading, we won't be considering factors such as cointegration!
End of explanation
start = '07-01-2015'
end = '07-01-2017'
united = quandl.get('WIKI/UAL',
start_date = start,
end_date = end)
american = quandl.get('WIKI/AAL',
start_date = start,
end_date = end)
united.head()
american.head()
american['Adj. Close'].plot(label = 'American Airlines',
figsize = (12, 8))
united['Adj. Close'].plot(label = 'United Airlines')
plt.legend()
Explanation: United Airlines and American Airlines
End of explanation
np.corrcoef(american['Adj. Close'],
united['Adj. Close'])
spread = american['Adj. Close'] - united['Adj. Close']
spread.plot(label='Spread',
figsize = (12,8))
plt.axhline(spread.mean(),
c = 'r')
plt.legend()
Explanation: Spread and Correlation
End of explanation
def zscore(stocks):
return (stocks - stocks.mean()) / np.std(stocks)
zscore(spread).plot(figsize = (14,8))
plt.axhline(zscore(spread).mean(),
color = 'black')
plt.axhline(1.0, c = 'r', ls = '--')
plt.axhline(-1.0, c = 'g', ls = '--')
plt.legend(['Spread z-score', 'Mean', '+1', '-1']);
Explanation: Normalizing with a z-score
End of explanation
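By construction, the z-scored series has zero mean and unit standard deviation, which is worth asserting on arbitrary data (note that `np.std` uses the population convention, ddof=0, matching the function above):

```python
import numpy as np

def zscore(stocks):
    return (stocks - stocks.mean()) / np.std(stocks)

x = np.random.RandomState(42).randn(500) * 5.0 + 3.0   # arbitrary synthetic data
z = zscore(x)
```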
#1 day moving average of the price spread
spread_mavg1 = spread.rolling(1).mean()
# 30 day moving average of the price spread
spread_mavg30 = spread.rolling(30).mean()
# Take a rolling 30 day standard deviation
std_30 = spread.rolling(30).std()
# Compute the z score for each day
zscore_30_1 = (spread_mavg1 - spread_mavg30) / std_30
zscore_30_1.plot(figsize = (12, 8),
label = 'Rolling 30 day Z score')
plt.axhline(0, color = 'black')
plt.axhline(1.0, color = 'red', linestyle = '--');
Explanation: Rolling Z-Score
Our spread is currently American-United. Let's decide how to calculate this on a rolling basis for our use in Quantopian
End of explanation
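The same rolling z-score can be exercised on a synthetic spread (a random walk standing in for AAL minus UAL), which makes the warm-up behaviour explicit: the first 29 values are NaN until the 30-day window fills, and the 1-day moving average is just the spread itself.

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(7)
spread = pd.Series(np.cumsum(rng.randn(250)))   # synthetic price spread

mavg_1 = spread.rolling(1).mean()               # identical to the spread itself
mavg_30 = spread.rolling(30).mean()
std_30 = spread.rolling(30).std()
zscore_30_1 = (mavg_1 - mavg_30) / std_30
```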
import numpy as np
def initialize(context):
    """Called once at the start of the algorithm."""
# Every day we check the pair status
schedule_function(check_pairs, date_rules.every_day(), time_rules.market_close(minutes = 60))
# Our Two Airlines
context.aa = sid(45971) #aal
context.ual = sid(28051) #ual
# Flags to tell us if we're currently in a trade
context.long_on_spread = False
context.shorting_spread = False
def check_pairs(context, data):
# For convenience
aa = context.aa
ual = context.ual
# Get pricing history
prices = data.history([aa, ual], "price", 30, '1d')
# Need to use .iloc[-1:] to get dataframe instead of series
short_prices = prices.iloc[-1:]
# Get the long 30 day mavg
mavg_30 = np.mean(prices[aa] - prices[ual])
# Get the std of the 30 day long window
std_30 = np.std(prices[aa] - prices[ual])
# Get the shorter span 1 day mavg
mavg_1 = np.mean(short_prices[aa] - short_prices[ual])
# Compute z-score
if std_30 > 0:
zscore = (mavg_1 - mavg_30)/std_30
# Our two entry cases
if zscore > 0.5 and not context.shorting_spread:
# spread = aa - ual
order_target_percent(aa, -0.5) # short top
order_target_percent(ual, 0.5) # long bottom
context.shorting_spread = True
context.long_on_spread = False
elif zscore < -0.5 and not context.long_on_spread:
# spread = aa - ual
order_target_percent(aa, 0.5) # long top
order_target_percent(ual, -0.5) # short bottom
context.shorting_spread = False
context.long_on_spread = True
# Our exit case
elif abs(zscore) < 0.1:
order_target_percent(aa, 0)
order_target_percent(ual, 0)
context.shorting_spread = False
context.long_on_spread = False
record('zscore', zscore)
Explanation: Implementation of Strategy
WARNING: YOU SHOULD NOT ACTUALLY TRADE WITH THIS!
End of explanation |
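The entry/exit branching in check_pairs can be factored into a pure function (a paraphrase for illustration, not Quantopian API), which makes the three regimes easy to test in isolation:

```python
def next_position(zscore, position):
    """Sketch of the check_pairs logic.

    position: -1 = short the spread, +1 = long the spread, 0 = flat
    (standing in for the shorting_spread / long_on_spread flags).
    """
    if zscore > 0.5 and position != -1:
        return -1                    # spread rich: short it
    elif zscore < -0.5 and position != 1:
        return 1                     # spread cheap: long it
    elif abs(zscore) < 0.1:
        return 0                     # mean reverted: exit
    return position                  # otherwise hold
```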