9,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sound waves as equations
Author
Step1: Fast Fourier Transform
We use the Fast Fourier Transform algorithm from the numpy library.
Step2: Writing the Fast Fourier Transform in LaTeX
We output the formula to a document, using LaTeX syntax. The final pdf would be too big for pdflatex to handle as a single LaTeX file, so we split it into multiple files, included from a main LaTeX file.
Step3: Inverse Fast Fourier Transform
We will check that the formula is correct by performing the inverse transformation to recover the original signal and writing it to a new .wav file, which should sound exactly like the original one. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import wave
import sys
# Only opens 16bit PCM WAV files.
# The format can be changed in Audacity.
spf = wave.open('test2.wav','r')
# Only opens Mono files
if spf.getnchannels() == 2:
print('Just mono files')
sys.exit(0)
# Extracts the signal
signal = spf.readframes(-1)
signal = np.frombuffer(signal, dtype=np.int16)  # np.fromstring is deprecated
# Prints the signal
plt.figure(1)
plt.title('Signal Wave...')
plt.plot(signal)
Explanation: Sound waves as equations
Author: Mario Román
GitHub: https://github.com/M42/sum-of-waves
Introduction
This notebook contains code to convert a sound file (specifically, a 16bit PCM WAV file) into a pdf file containing the mathematical formula of the sound wave. For an example of input and output, you can check the files "test2.wav" and "output.pdf" in the GitHub repository.
We use the Fast Fourier Transform algorithm to express the wave as a sum of sinusoidal functions, and the complete sum is translated to a LaTeX file, which can be used to produce the final pdf.
Plotting wave files
In this first step, we are going to open the sound file and make a simple plot of the signal, using matplotlib. The sound file:
Must be a 16bit PCM WAV file.
Must be mono; not stereo.
To convert multiple sound files to this format, you can use sound editing programs, such as Audacity.
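Both requirements can also be verified programmatically with the standard-library wave module. A minimal sketch (the demo file written here is made up for illustration):

```python
import os
import struct
import tempfile
import wave

def check_wav(path):
    """Return (is_mono, is_16bit) for a WAV file."""
    with wave.open(path, 'rb') as w:
        return (w.getnchannels() == 1, w.getsampwidth() == 2)

# Write a tiny mono, 16-bit PCM file so the check can be demonstrated.
demo_path = os.path.join(tempfile.gettempdir(), 'demo_mono.wav')
with wave.open(demo_path, 'wb') as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes per sample = 16 bit
    w.setframerate(44100)
    w.writeframes(struct.pack('<4h', 0, 1000, -1000, 0))

print(check_wav(demo_path))  # (True, True)
```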
End of explanation
Y = np.fft.fft(signal)
plt.figure(2)
plt.title('Fourier transform')
plt.plot(Y)
Y
Explanation: Fast Fourier Transform
We use the Fast Fourier Transform algorithm from the numpy library.
End of explanation
lfile = open("latex.tex","w")
N = str(len(Y))
lfile.write("\\documentclass{scrartcl}\n\\usepackage{amsmath}\n\\begin{document}\n")
lfile.write("\\allowdisplaybreaks[1]\n")
lfile.write("\\title{Hungarian March}\n\\subtitle{First 8 seconds!}\n")
# Huge files cause a memory error in pdflatex
# We split in small files
j = 0
ffile = open("latex"+str(0)+".tex","w")
ffile.write("\\begin{align*}\n")
for k in range(len(Y)):
j = j+1
if j==28:
j=0
ffile.write("\\end{align*}\n")
lfile.write("\\input{latex"+str((k-1)//28)+".tex}\n")  # integer division for the file index
ffile.close()
ffile = open("latex"+str(k//28)+".tex","w")
ffile.write("\\begin{align*}\n")
ffile.write(str(Y[k]) + "e^{ \\frac{j \\tau}{" + N + "} " + str(k) + "t } && + \\\\ \n")
ffile.write("\\end{align*}\n")
lfile.write("\\end{document}")
lfile.close()
Explanation: Writing the Fast Fourier Transform in Latex
We output the formula to a document, using the Latex syntax. The final pdf file will be too big to be handled by one latex file. Therefore, we split it in multiple files, included in a main latex file.
End of explanation
# Inverses the fast fourier transform
yinv = np.fft.ifft(Y)
newsignal = yinv.real.astype(np.int16)
newsignal = newsignal.copy(order='C')
# Recreates the wave file
spfw = wave.open('test3.wav','w')
spfw.setnchannels(spf.getnchannels())
spfw.setsampwidth(spf.getsampwidth())
spfw.setframerate(spf.getframerate())
spfw.writeframes(newsignal)
spfw.close()
Explanation: Inverse Fast Fourier Transform
We will check that the formula is correct by performing the inverse transformation to recover the original signal and writing it to a new .wav file, which should sound exactly like the original one.
End of explanation |
9,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like the cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, as we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:1000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
reviews_ints =
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
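One possible way to fill in the blank above, shown here on toy data in place of the real reviews_ints (note that the matching label should be dropped as well, which the exercise does not mention explicitly):

```python
# Toy stand-ins for the variables built earlier in the notebook.
reviews_ints = [[1, 2, 3], [], [4, 5]]
labels = [1, 0, 1]

# Keep only the indices of non-empty reviews, then filter both lists with them.
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = [labels[ii] for ii in non_zero_idx]

print(len(reviews_ints))  # 2
```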
seq_len = 200
features =
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
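A sketch of one way to build features, using toy data (one short and one long review) instead of the real reviews_ints:

```python
import numpy as np

seq_len = 200
# Toy data: one review shorter than seq_len, one longer.
reviews_ints = [[117, 18, 128], list(range(1, 251))]

features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for ii, review in enumerate(reviews_ints):
    truncated = review[:seq_len]                 # keep at most the first seq_len words
    features[ii, -len(truncated):] = truncated   # left-pad short reviews with 0s

print(features.shape)  # (2, 200)
```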
features[:10,:100]
Explanation: If you build features correctly, it should look like the cell output below.
End of explanation
split_frac = 0.8
train_x, val_x =
train_y, val_y =
val_x, test_x =
val_y, test_y =
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
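A sketch of the split using plain slicing; the arrays here are toy stand-ins for the features and labels built in the previous steps:

```python
import numpy as np

split_frac = 0.8
features = np.arange(1000).reshape(100, 10)  # toy feature matrix
labels = np.arange(100)                      # toy labels

split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

# Split the remainder in half for validation and test.
half = len(val_x) // 2
val_x, test_x = val_x[:half], val_x[half:]
val_y, test_y = val_y[:half], val_y[half:]

print(train_x.shape, val_x.shape, test_x.shape)  # (80, 10) (10, 10) (10, 10)
```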
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ =
labels_ =
keep_prob =
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding =
embed =
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm =
# Add dropout to the cell
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state =
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, as we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))  # same directory used when saving
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
9,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Biophysics. Examples of code using Bio.PDB
Requirements
biopython
jupyter
nglview
Step1: 1.1 Loading structure from PDB file
Step2: 1.2 Get Headers
Step3: 1.3 Alternatively: download from PDB (in mmCIF format, this is the default)
pdbl.retrieve_pdb_file gives as output the name of the downloaded file.
Step4: 1.4 Get Extended data from mmCIF file (very long output, structure defined at PDB mmCIF format)
Step5: Composition
3.1 Number of models
All elements are just lists of elements of the following level.
Step6: 3.2 Chains and length of the first model
We iterate along the list of chains contained in the first model (st[0]), len(chain) gives the length of the array, i.e. the number of residues it contains.
Step7: 3.3 List of first 10 residues of chain 'A'
Chains are lists of residues using the residue number as index. st[0] corresponds to the first model, st[0]['A'] points to the chain with id 'A' within st[0].
Step8: 3.4 List of atoms of first residue and complete information of first atom
get_atoms() gives the list of atoms of any element. Can be used with structures, models, chains or residues. Here st[0]['A'][1] stands for the residue labelled as 1 in chain A of the first model (st[0]).
Vars is a standard Python function to give the contents of a given dictionary/object. The atom dictionary is described in Bio.PDB.Atom.
Note that the information in Atom corresponds to one line in the PDB format, in this case N atom on residue A 1
ATOM 1 N MET A 1 27.340 24.430 2.614 1.00 9.67 N
Step9: List of atoms and coordinates for a given list of residues and atoms
The first loop goes through the whole structure (note that get_atoms() is applied here to st, but it could also be applied to other selections), selecting the atoms and residues included in the residues and atoms lists.
The second loop goes through the selection and prints the atom id and coordinates (as lists). It could be done in a single loop, but this way select can be re-used for other purposes.
Step10: list atoms and coordinates for a given residue
Note that here we select a specific residue, not a residue type as above.
Step11: Print all distances between the atoms of two residues
We introduce here two functions to produce nicer prints for residues and atoms. This is a fairly standard format, but other formats are possible. If the model should be indicated, it is typically written as /model (GLY A10.N/0).
Distances are evaluated in two alternative ways. Bio.PDB provides a shortcut for distances: simply subtracting two atoms. The second option uses numpy functions and calculates the length of a vector going from one atom to the other. There should be no difference between the two.
Step12: Print all distances from atoms in the residue to a given point
A version of the same script, using a central point of reference.
Step13: Find contacts below some distance cut-off, using NeighborSearch
This can be solved by iterating over all pairs of residues and selecting those that are closer than MAXDIST.
NeighborSearch uses a different internal structure that makes this search much faster, especially in large structures.
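The brute-force approach can be sketched on plain coordinate arrays (the toy positions below are assumptions, standing in for one atom per residue; with Bio.PDB you would pass the structure's atom list to NeighborSearch instead):

```python
import numpy as np

MAXDIST = 3.0

# Toy coordinates standing in for one atom per residue.
coords = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 1.0, 1.0],
    [10.0, 0.0, 0.0],
])

# O(n^2) scan over all pairs; NeighborSearch avoids this with a tree structure.
contacts = []
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if np.sqrt(np.sum((coords[i] - coords[j]) ** 2)) < MAXDIST:
            contacts.append((i, j))

print(contacts)  # [(0, 1)]
```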
Step14: 9
Step15: selecting chains A and C (one biological assembly)
Step16: Save it on a PDB file | Python Code:
# import libraries
from Bio.PDB import *
import nglview as nv
Explanation: Biophysics. Examples of code using Bio.PDB
Requirements
biopython
jupyter
nglview
End of explanation
pdb_file = '1ubq.pdb'
pdb_parser = PDBParser()
st = pdb_parser.get_structure('1UBQ', '1ubq.pdb')
import nglview as nv
view = nv.show_biopython(st)
view
Explanation: 1.1 Loading structure from PDB file
End of explanation
st.header
Explanation: 1.2 Get Headers
End of explanation
pdbl = PDBList()
cif_fn = pdbl.retrieve_pdb_file("1UBQ")
cif_fn
cif_parser = MMCIFParser()
cif_st = cif_parser.get_structure('1UBQ', cif_fn)
view = nv.show_biopython(st)
view
Explanation: 1.3 Alternatively: download from PDB (in mmCIF format, this is the default)
pdbl.retrieve_pdb_file gives as output the name of the downloaded file.
End of explanation
mmcif_dict = MMCIF2Dict.MMCIF2Dict(cif_fn)
mmcif_dict
Explanation: 1.4 Get Extended data from mmCIF file (very long output, structure defined at PDB mmCIF format)
End of explanation
len(st)
Explanation: Composition
3.1 Number of models
All elements are just lists of elements of the following level: structure is a list of models, model is a list of chains, etc. Here we just check for the length of the list of models.
End of explanation
for chain in st[0]:
print(chain.id, len(chain))
Explanation: 3.2 Chains and length of the first model
We iterate along the list of chains contained in the first model (st[0]), len(chain) gives the length of the array, i.e. the number of residues it contains.
End of explanation
for num_res in range(10):
# Note that residues start at 1 in 1ubq; this may not be the case
# in other proteins
res = st[0]['A'][num_res + 1]
print(res.get_resname(), res.get_parent().id, res.id[1])
Explanation: 3.3 List of first 10 residues of chain 'A'
Chains are lists of residues using the residue number as index. st[0] corresponds to the first model, st[0]['A'] points to the chain with id 'A' within st[0].
End of explanation
for atom in st[0]['A'][1].get_atoms():
print(atom.id)
vars(st[0]['A'][1]['N'])
Explanation: 3.4 List of atoms of first residue and complete information of first atom
get_atoms() gives the list of atoms of any element. Can be used with structures, models, chains or residues. Here st[0]['A'][1] stands for the residue labelled as 1 in chain A of the first model (st[0]).
Vars is a standard Python function to give the contents of a given dictionary/object. The atom dictionary is described in Bio.PDB.Atom.
Note that the information in Atom corresponds to one line in the PDB format, in this case N atom on residue A 1
ATOM 1 N MET A 1 27.340 24.430 2.614 1.00 9.67 N
End of explanation
residues = ['ARG', 'ASP']
atoms = ['CA']
select = []
for atom in st.get_atoms():
if atom.get_parent().get_resname() in residues and atom.id in atoms:
select.append(atom)
for atom in select:
print(atom.get_parent().get_resname(), atom.get_parent().id[1],
atom.get_name(), atom.get_coord())
Explanation: List of atoms and coordinates for a given list of residues and atoms
The first loop goes through the whole structure (note that here get_atoms() is applied to st, but it could also be applied to other selections), selecting atoms and residues included in the residues and atoms lists.
The second loop goes through the selection and prints atom ids and coordinates (as lists). This could be done in a single loop, but this way select can be re-used for other purposes.
End of explanation
residue_num = 24
chain_id = 'A'
for atom in st[0][chain_id][residue_num]:
print(atom.get_parent().get_resname(), atom.get_parent().id[1],
atom.get_name(), atom.get_coord())
Explanation: List atoms and coordinates for a given residue
Note that here we select a specific residue, not a residue type as above.
End of explanation
chain_A = st[0]['A']
residue_1 = 10
residue_2 = 20
import numpy as np
# simple functions to get atom and residue ids properly printed
def residue_id(res): #residue as ASP A32
#res.id[1] is integer, we should use str to get the corresponding string
return res.get_resname() + ' ' + res.get_parent().id + str(res.id[1])
def atom_id(atom): #atom as ASP A32.N, re-uses residue_id
return residue_id(atom.get_parent()) + '.' + atom.id
for at1 in chain_A[residue_1]:
for at2 in chain_A[residue_2]:
dist = at2 - at1 # Direct procedure with (-) to compute distances
vector = at2.coord - at1.coord # Or using numpy
distance = np.sqrt(np.sum(vector ** 2))
print(atom_id(at1), atom_id(at2), dist, distance)
Explanation: Print all distances between the atoms of two residues
We introduce here two functions to make nicer prints for residues and atoms. This is a fairly standard way, but other formats are possible. If the model should be indicated, it is typically written as /model (GLY A10.N/0).
Distances are evaluated in two alternative ways. Bio.PDB provides a shortcut for distances: just subtract atoms. The second option uses numpy functions and calculates the length of a vector going from one atom to the other. There should not be any difference.
End of explanation
center = np.array([10, 10, 10])
for at1 in chain_A[residue_1].get_atoms():
vect = at1.coord - center
distance = np.sqrt(np.sum(vect ** 2))
print(atom_id(at1), distance)
Explanation: Print all distances from atoms in the residue to a given point
A version of the same script, using a central point of reference.
End of explanation
MAXDIST = 10
select = []
# Select only CA atoms for brevity
for at in st.get_atoms():
if at.id == 'CA':
select.append(at)
# Preparing search
nbsearch = NeighborSearch(select)
ncontact = 0
for at1, at2 in nbsearch.search_all(MAXDIST): # All pairs with a distance less than MAXDIST
ncontact += 1
print("Contact: ", ncontact, atom_id(at1), atom_id(at2), at2 - at1)
Explanation: Find contacts below some distance cut-off, using NeighborSearch
This could be solved by iterating over all combinations of pairs of residues and selecting those that are closer than MAXDIST.
NeighborSearch uses a different internal structure that makes this search much faster, especially in large structures.
End of explanation
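The speed-up can be illustrated with a dependency-free toy sketch of the general cell-list idea (this is not Bio.PDB's actual implementation): hash each point into a grid cell of edge MAXDIST and compare only points in the 27 neighbouring cells, instead of all pairs.

```python
# Toy cell-list neighbor search; an illustration of the idea, not Bio.PDB's code.
import itertools
import math
import random

def close_pairs_bucketed(points, maxdist):
    cells = {}
    for i, (x, y, z) in enumerate(points):
        key = (int(x // maxdist), int(y // maxdist), int(z // maxdist))
        cells.setdefault(key, []).append(i)  # bin the point index into its grid cell
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        # Only the 27 surrounding cells can contain points within maxdist.
        for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
            for i in members:
                for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                    if i < j and math.dist(points[i], points[j]) < maxdist:
                        pairs.add((i, j))
    return pairs

random.seed(0)
points = [tuple(random.uniform(0, 40) for _ in range(3)) for _ in range(200)]
bucketed = close_pairs_bucketed(points, 5.0)
```

For n points this touches roughly a constant number of neighbours per point instead of n(n-1)/2 pairs, which is why this kind of search scales so much better on large structures.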
st_6axg = pdb_parser.get_structure('6AXG', '6axg.pdb')
view = nv.show_biopython(st_6axg)
view
Explanation: 9: Select some chain and discard others (e.g. to get a biological assembly from an asymmetric unit)
This is the "Biopython" way of selecting parts of the structure. It can also be used to select residues or atoms, but it can become complicated. Most experienced users will just edit the PDB file and remove the chains, residues or atoms manually.
Here we use 6axg as an example, a complex of two proteins (the biological assembly). The version downloaded from the PDB (the asymmetric unit) has 6 copies of the complex. Chains A and C correspond to one such copy.
End of explanation
chains_ok = ['A', 'C']
chain_orig = [ch.id for ch in st_6axg[0]]
for chain_id in chain_orig:
if chain_id not in chains_ok:
st_6axg[0].detach_child(chain_id)
view = nv.show_biopython(st_6axg)
view
Explanation: selecting chains A and C (one biological assembly)
End of explanation
pdbio = PDBIO()
output_pdb_path = '6axg_AC.pdb'
pdbio.set_structure(st_6axg)
pdbio.save(output_pdb_path)
Explanation: Save it to a PDB file
End of explanation |
9,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scientific programming with the SciPy stack
Pandas
Import libraries and check versions.
Step1: Read the data and get a row count. Data source
Step2: SymPy
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible.
Step3: This example was gleaned from
Step4: What is the probability that the temperature is actually greater than 33 degrees?
<img src="eq1.png">
We can use Sympy's integration engine to calculate a precise answer.
Step5: Assume we now have a thermometer and can measure the temperature. However, there is still uncertainty involved.
Step6: We now have two measurements -- 30 +- 3 degrees and 26 +- 1.5 degrees. How do we combine them? 30 +- 3 was our prior measurement. We want to calculate a better estimate of the temperature (posterior) given an observation of 26 degrees.
<img src="eq2.png"> | Python Code:
import pandas as pd
import numpy as np
import sys
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Numpy version ' + np.__version__)
Explanation: Scientific programming with the SciPy stack
Pandas
Import libraries and check versions.
End of explanation
file_path = r'data\T100_2015.csv.gz'
df = pd.read_csv(file_path, header=0)
df.count()
df.head(n=10)
df = pd.read_csv(file_path, header=0, usecols=["PASSENGERS", "ORIGIN", "DEST"])
df.head(n=10)
print('Min: ', df['PASSENGERS'].min())
print('Max: ', df['PASSENGERS'].max())
print('Mean: ', df['PASSENGERS'].mean())
df = df.query('PASSENGERS > 10000')
print('Min: ', df['PASSENGERS'].min())
print('Max: ', df['PASSENGERS'].max())
print('Mean: ', df['PASSENGERS'].mean())
OriginToDestination = df.groupby(['ORIGIN', 'DEST'], as_index=False).agg({'PASSENGERS':sum,})
OriginToDestination.head(n=10)
OriginToDestination = pd.pivot_table(OriginToDestination, values='PASSENGERS', index=['ORIGIN'], columns=['DEST'], aggfunc=np.sum)
OriginToDestination.head()
OriginToDestination.fillna(0)
Explanation: Read the data and get a row count. Data source: U.S. Department of Transportation, TranStats database. Air Carrier Statistics Table T-100 Domestic Market (All Carriers): "This table contains domestic market data reported by both U.S. and foreign air carriers, including carrier, origin, destination, and service class for enplaned passengers, freight and mail when both origin and destination airports are located within the boundaries of the United States and its territories." -- 2015
End of explanation
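Since the T100 file may not be at hand, the same groupby → pivot_table pattern can be tried on a tiny in-memory frame (toy route numbers, same column names as above):

```python
# Same aggregation pattern as above, on toy data so it runs without the CSV.
import pandas as pd

toy = pd.DataFrame({
    'ORIGIN': ['JFK', 'JFK', 'LAX'],
    'DEST': ['LAX', 'LAX', 'JFK'],
    'PASSENGERS': [100, 50, 70],
})
per_route = toy.groupby(['ORIGIN', 'DEST'], as_index=False)['PASSENGERS'].sum()
matrix = pd.pivot_table(per_route, values='PASSENGERS',
                        index='ORIGIN', columns='DEST', aggfunc='sum').fillna(0)
print(matrix)
```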
import sympy
from sympy import *
from sympy.stats import *
from sympy import symbols
from sympy.plotting import plot
from sympy.interactive import printing
printing.init_printing(use_latex=True)
print('Sympy version ' + sympy.__version__)
Explanation: SymPy
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible.
End of explanation
T = Normal('T', 30, 3)
Explanation: This example was gleaned from:
Rocklin, Matthew, and Andy R. Terrel. "Symbolic Statistics with SymPy." Computing in Science & Engineering 14.3 (2012): 88-93.
Problem: Data assimilation -- we want to assimilate new measurements into a set of old measurements. Both sets of measurements have uncertainty. For example, ACS estimates updated with local data.
Assume we've estimated that the temperature outside is 30 degrees. However, there is certainly uncertainty is our estimate. Let's say +- 3 degrees. In Sympy, we can model this with a normal random variable.
End of explanation
P(T > 33)
N(P(T > 33))
Explanation: What is the probability that the temperature is actually greater than 33 degrees?
<img src="eq1.png">
We can use Sympy's integration engine to calculate a precise answer.
End of explanation
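The exact result can also be cross-checked numerically without SymPy: for a Normal(mu, sigma), the upper tail is 0.5 * erfc((x - mu) / (sigma * sqrt(2))), available from the standard library.

```python
# Numeric cross-check of P(T > 33) for T ~ Normal(30, 3), stdlib only.
import math

def normal_tail(x, mu, sigma):
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

p = normal_tail(33, 30, 3)  # 33 is one standard deviation above the mean
print(p)  # about 0.1587
```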
noise = Normal('noise', 0, 1.5)
observation = T + noise
Explanation: Assume we now have a thermometer and can measure the temperature. However, there is still uncertainty involved.
End of explanation
T_posterior = given(T, Eq(observation, 26))
Explanation: We now have two measurements -- 30 +- 3 degrees and 26 +- 1.5 degrees. How do we combine them? 30 +- 3 was our prior measurement. We want to calculate a better estimate of the temperature (posterior) given an observation of 26 degrees.
<img src="eq2.png">
End of explanation |
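For intuition, the update SymPy performs symbolically has a simple closed form: combining two Gaussian estimates yields a Gaussian whose mean is the precision-weighted average. A standard-library sketch of that arithmetic (not the SymPy API), which should agree with T_posterior above:

```python
# Precision-weighted combination of two Gaussian estimates (closed form).
def combine_gaussians(mu1, sigma1, mu2, sigma2):
    w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2  # precisions
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sigma = (1.0 / (w1 + w2)) ** 0.5
    return mu, sigma

mu, sigma = combine_gaussians(30, 3, 26, 1.5)  # prior 30 +- 3, observation 26 +- 1.5
print(mu, sigma)  # about (26.8, 1.34)
```

The posterior leans toward the thermometer reading because it is the more precise of the two measurements.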
9,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lab 9 - Graphs & Networks
In this lab we will do the following
Step2: 1. Get API key
Get a LinkedIn API key at http
Step3: 2. Get Access Token
Next we are scraping our data using the LinkedIn API. (Code for using the LinkedIn API is taken and adjusted from http
Step4: 3. Get data, clean it and store to disk
Step5: When you have run these cells you have a 'linkedIn_links_clean.csv' file in the directory of your notebook that is compatible with Gephi. If you don't have a LinkedIn account, or think your network is boring, you can use one of ours, which you can get here.
4. Network Analysis with Gephi
Installation
Gephi requires Java to run, at least a JRE of version 6. To check if you have java installed, open a console and run
$ java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.12) (7u25-2.3.12-4ubuntu3)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
If you don't have java or only an outdated version, go here to download it.
To install gephi, download it and follow these installation instructions.
Analysis
The analysis with a GUI-based tool is hard to convey in an IPython Notebook ;). If you don't want to watch the video, here is the Gephi Quick Start guide.
Here are the things we are doing
Step6: and now, with NetworkX !
Step7: A few stats about your network | Python Code:
!pip install oauth2
!pip install unidecode
%matplotlib inline
from collections import defaultdict
import json
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import rcParams
import matplotlib.cm as cm
import matplotlib as mpl
#colorbrewer2 Dark2 qualitative color table
dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]
rcParams['figure.figsize'] = (10, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.color_cycle'] = dark2_colors
rcParams['lines.linewidth'] = 2
rcParams['axes.facecolor'] = 'white'
rcParams['font.size'] = 14
rcParams['patch.edgecolor'] = 'white'
rcParams['patch.facecolor'] = dark2_colors[0]
rcParams['font.family'] = 'StixGeneral'
def remove_border(axes=None, top=False, right=False, left=True, bottom=True):
"""Minimize chartjunk by stripping out unnecessary plot borders and axis ticks.
The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn."""
ax = axes or plt.gca()
ax.spines['top'].set_visible(top)
ax.spines['right'].set_visible(right)
ax.spines['left'].set_visible(left)
ax.spines['bottom'].set_visible(bottom)
#turn off all ticks
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_ticks_position('none')
#now re-enable visibles
if top:
ax.xaxis.tick_top()
if bottom:
ax.xaxis.tick_bottom()
if left:
ax.yaxis.tick_left()
if right:
ax.yaxis.tick_right()
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
Explanation: Lab 9 - Graphs & Networks
In this lab we will do the following:
Get a LinkedIn API key
Use oauth2 to get an access token
First we are going to download our own LinkedIn data using the LinkedIn API.
Then we are exporting this data as a csv file to be able to import it into Gephi.
Before starting Gephi we will do some network analysis directly in python
We will analyze our data with the external tool Gephi
You can download this notebook from here.
0. Install oauth2 package
End of explanation
#Johanna
#user_token = '6a516d33-786e-443c-b6e9-def654f88098'
#user_secret = 'c03c49da-9dae-4b05-a2af-82e40426439f'
#api_key = 'xpsswsigqw4r'
#secret_key = 'aIRpJHhA8JHTRsyb'
#Alex
#api_key = 'g8lq60ilatfh'
#secret_key = 'XEOmeklHWHtmwgoQ'
#user_token = 'a8991ba6-9a27-40d7-ac6f-9280cc1dc650'
#user_secret = '43a11017-c1f3-4c30-afab-43df3c39b938'
#Nicolas
user_token = 'd41f3e0c-6bb9-4db8-b324-25a723ff2f50'
user_secret = 'fc66e892-6f92-4e15-b9a9-b0cccbec5336'
api_key = 'kg7oy496e09a'
secret_key = 'oLCLRNxVjt8ZY6OE'
Explanation: 1. Get API key
Get a LinkedIn API key at http://developer.linkedin.com/documents/authentication (choose r_network)
Save your authentication:
End of explanation
import oauth2 as oauth
import urlparse
def request_token(consumer):
client = oauth.Client(consumer)
request_token_url = 'https://api.linkedin.com/uas/oauth/requestToken?scope=r_network'
resp, content = client.request(request_token_url, "POST")
if resp['status'] != '200':
raise Exception("Invalid response %s." % resp['status'])
request_token = dict(urlparse.parse_qsl(content))
return request_token
#consumer = oauth.Consumer(api_key, secret_key)
#r_token = request_token(consumer)
#print "Request Token: oauth_token: %s, oauth_token_secret: %s" % (r_token['oauth_token'], r_token['oauth_token_secret'])
def authorize(request_token):
authorize_url ='https://api.linkedin.com/uas/oauth/authorize'
print "Go to the following link in your browser:"
print "%s?oauth_token=%s" % (authorize_url, request_token['oauth_token'])
print
accepted = 'n'
while accepted.lower() == 'n':
accepted = raw_input('Have you authorized me? (y/n) ')
oauth_verifier = raw_input('What is the PIN? ')
return oauth_verifier
#oauth_verifier = authorize(r_token)
def access(consumer, request_token, oauth_verifier):
access_token_url = 'https://api.linkedin.com/uas/oauth/accessToken'
token = oauth.Token(request_token['oauth_token'], request_token['oauth_token_secret'])
token.set_verifier(oauth_verifier)
client = oauth.Client(consumer, token)
resp, content = client.request(access_token_url, "POST")
access_token = dict(urlparse.parse_qsl(content))
return access_token
#a_token = access(consumer, r_token, oauth_verifier)
#print a_token
#print "Access Token: oauth_token = %s, oauth_token_secret = %s" % (a_token['oauth_token'], a_token['oauth_token_secret'])
#print "You may now access protected resources using the access tokens above."
consumer = oauth.Consumer(api_key, secret_key)
r_token = request_token(consumer)
print "Request Token: oauth_token: %s, oauth_token_secret: %s" % (r_token['oauth_token'], r_token['oauth_token_secret'])
oauth_verifier = authorize(r_token)
a_token = access(consumer, r_token, oauth_verifier)
print a_token
print "Access Token: oauth_token = %s, oauth_token_secret = %s" % (a_token['oauth_token'], a_token['oauth_token_secret'])
print "You may now access protected resources using the access tokens above."
Explanation: 2. Get Access Token
Next we are scraping our data using the LinkedIn API. (Code for using the LinkedIn API is taken and adjusted from http://dataiku.com/blog/2012/12/07/visualizing-your-linkedin-graph-using-gephi-part-1.html).
End of explanation
import simplejson
import codecs
output_file = 'linkedIn_links.csv'
my_name = 'Your Name'
def linkedin_connections():
# Use your credentials to build the oauth client
consumer = oauth.Consumer(key=api_key, secret=secret_key)
token = oauth.Token(key=a_token['oauth_token'], secret=a_token['oauth_token_secret'])
client = oauth.Client(consumer, token)
# Fetch first degree connections
resp, content = client.request('http://api.linkedin.com/v1/people/~/connections?format=json')
results = simplejson.loads(content)
# File that will store the results
output = codecs.open(output_file, 'w', 'utf-8')
# Loop through the 1st degree connection and see how they connect to each other
for result in results["values"]:
con = "%s %s" % (result["firstName"].replace(",", " "), result["lastName"].replace(",", " "))
print >>output, "%s,%s" % (my_name, con)
# This is the trick, use the search API to get related connections
u = "https://api.linkedin.com/v1/people/%s:(relation-to-viewer:(related-connections))?format=json" % result["id"]
resp, content = client.request(u)
rels = simplejson.loads(content)
try:
for rel in rels['relationToViewer']['relatedConnections']['values']:
sec = "%s %s" % (rel["firstName"].replace(",", " "), rel["lastName"].replace(",", " "))
print >>output, "%s,%s" % (con, sec)
except:
pass
linkedin_connections()
from operator import itemgetter
from unidecode import unidecode
clean_output_file = 'linkedIn_links_clean.csv'
def stringify(chain):
# Simple utility to build the nodes labels
allowed = '0123456789abcdefghijklmnopqrstuvwxyz_'
c = unidecode(chain.strip().lower().replace(' ', '_'))
return ''.join([letter for letter in c if letter in allowed])
def clean(f_input, f_output):
output = open(f_output, 'w')
# Store the edges inside a set for dedup
edges = set()
for line in codecs.open(f_input, 'r', 'utf-8'):
from_person, to_person = line.strip().split(',')
_f = stringify(from_person)
_t = stringify(to_person)
# Reorder the edge tuple
_e = tuple(sorted((_f, _t), key=itemgetter(0, 1)))
edges.add(_e)
for edge in edges:
print >>output, '%s,%s' % (edge[0], edge[1])
clean(output_file, clean_output_file)
Explanation: 3. Get data, clean it and store to disk
End of explanation
import csv
from collections import defaultdict
pairlist=[]
connections=defaultdict(list)
userset=set()
with open('linkedIn_links_clean.csv', 'rb') as csvfile:
allrows = csv.reader(csvfile, delimiter=',')
for row in allrows:
# if ((row[0]=='your_name') | (row[1]=='your_name')): continue # exclude yourself ?
pairlist.append((row[0], row[1]))
connections[row[0]].append(row[1])
connections[row[1]].append(row[0])
userset.add(row[0])
userset.add(row[1])
## Actual algorithm starts here
## display the pagerank
Explanation: When you have run these cells you have a 'linkedIn_links_clean.csv' file in the directory of your notebook that is compatible with Gephi. If you don't have a LinkedIn account, or think your network is boring, you can use one of ours, which you can get here.
4. Network Analysis with Gephi
Installation
Gephi requires Java to run, at least a JRE of version 6. To check if you have java installed, open a console and run
$ java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.12) (7u25-2.3.12-4ubuntu3)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
If you don't have java or only an outdated version, go here to download it.
To install gephi, download it and follow these installation instructions.
Analysis
The analysis with a GUI-based tool is hard to convey in an IPython Notebook ;). If you don't want to watch the video, here is the Gephi Quick Start guide.
Here are the things we are doing:
Applying a force-directed layout with increased repulsion strength
Remove yourself, explore shortest paths between partners
Run force-directed again
Calculating a PageRank
Color nodes by PageRank
Trying a couple of other statistics
Size by PageRank
Filter by Topology, Degree
"Cluster" by running Modularity. Try different parameters.
Highlight via "Partition"
Page Rank
Method
We'll show during this lab that PageRank basically amounts to computing the largest eigenvector of a stochastic matrix, which can be done via the power iteration method.
Now, code it on your LinkedIn network!
End of explanation
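One possible solution sketch for the exercise above: power iteration on the connections adjacency built earlier (shown here on a toy dict so it is self-contained). The damping factor and tolerance are conventional but illustrative choices, and the sketch assumes every node has at least one link, which holds for the symmetric connections lists.

```python
# Power-iteration PageRank over an adjacency dict shaped like `connections`.
def pagerank_power(adjacency, damping=0.85, tol=1e-10, max_iter=200):
    nodes = sorted(adjacency)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new_rank = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            share = damping * rank[u] / len(adjacency[u])
            for v in adjacency[u]:
                new_rank[v] += share  # u passes its rank to its neighbours
        if sum(abs(new_rank[u] - rank[u]) for u in nodes) < tol:
            rank = new_rank
            break
        rank = new_rank
    return rank

toy = {'a': ['b', 'c'], 'b': ['a'], 'c': ['a', 'b']}
pr = pagerank_power(toy)  # replace `toy` with the `connections` dict built above
```

Sorting the result by rank should surface the same people near the top as nx.pagerank_scipy in the next cell.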
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import math
g = nx.Graph()
remove_me = False
for user in userset:
if remove_me & (user=='your_name'): continue
g.add_node(user)
for user in userset:
if remove_me & (user=='your_name'): continue
nconnec = 0
for connection in connections[user]:
if remove_me & (connection=='your_name'): continue
g.add_edge(user, connection, weight = 1)
nconnec+=1
if remove_me & (nconnec==0):
g.remove_node(user)
pagerank_nx = nx.pagerank_scipy(g)
color = [(min(pagerank_nx[n]*30.,1),min(pagerank_nx[n]*30.,1), min(pagerank_nx[n]*30.,1)) for n in pagerank_nx]
pos = nx.spring_layout(g, iterations=100)
nx.draw_networkx_edges(g, pos, width=1, alpha=0.4)
nx.draw_networkx_nodes(g, pos, node_color=color, node_size=100, alpha=1, linewidths =0.5)
#lbls = nx.draw_networkx_labels(g, pos)
plt.show()
# checks whether we have the same, or similar, pageranks
sorted_pr = sorted(pagerank_nx.iteritems(), reverse=True, key=lambda (k,v): v)
print sorted_pr[:10]
Explanation: and now, with NetworkX !
End of explanation
# your number of connections
print 'my degree is: ', g.degree('your_name'), '\n'
# diameter = maximum nb of edges between 2 nodes = always 2 in this case
print 'the graph diameter is: ',nx.diameter(g), '\n'
#center : surprising ?
print 'the center is: ',nx.center(g), '\n'
# number of clique communities of 5 nodes
print 'there are ', len(list(nx.k_clique_communities(g, 5))),'clique communities\n'
# most influential ?
print 'degree: ', g.degree(sorted_pr[2][0]),'\n'
print 'shortest path between Hanspeter and a friend', nx.shortest_path(g,source='hanspeter_pfister',target='etienne_corteel'),'\n'
Explanation: A few stats about your network:
End of explanation |
9,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data generation
Step1: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep. Here we will use a single mock community, but two different versions of the reference database.
Step2: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Assignment Using QIIME 1 or Command-Line Classifiers
Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.
Step3: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format
Step4: As a sanity check, we can look at the first command that was generated and the number of commands generated.
Step5: Finally, we run our commands.
Step6: QIIME2 Classifiers
Now let's do it all over again, but with QIIME2 classifiers (which require different input files and command templates). Note that the QIIME2 artifact files required for assignment are not included in tax-credit, but can be generated from any reference dataset using qiime tools import.
Step7: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
Step8: Move result files to repository
Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
Step9: Do not forget to copy the expected taxonomy files for this mock community! | Python Code:
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.framework_functions import (parameter_sweep,
generate_per_method_biom_tables,
move_results_to_repository)
project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment")
analysis_name= "mock-community"
data_dir = join(project_dir, "data", analysis_name)
reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
Explanation: Data generation: using python to sweep over methods and parameters
In this notebook, we illustrate how to use python to generate and run a list of commands. In this example, we generate a list of QIIME 1.9.0 assign_taxonomy.py commands, though this workflow for command generation is generally very useful for performing parameter sweeps (i.e., exploration of sets of parameters for achieving a specific result for comparative purposes).
Environment preparation
End of explanation
dataset_reference_combinations = [
('mock-3', 'silva_123_v4_trim250'),
('mock-3', 'silva_123_clean_full16S'),
('mock-3', 'silva_123_clean_v4_trim250'),
('mock-3', 'gg_13_8_otus_clean_trim150'),
('mock-3', 'gg_13_8_otus_clean_full16S'),
('mock-9', 'unite_20.11.2016_clean_trim100'),
('mock-9', 'unite_20.11.2016_clean_fullITS'),
]
reference_dbs = {'gg_13_8_otus_clean_trim150': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'gg_13_8_otus_clean_full16S': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'unite_20.11.2016_clean_trim100': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')),
'unite_20.11.2016_clean_fullITS': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')),
'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/rep_set/rep_set_16S_only/99/99_otus_16S/dna-sequences.fasta'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.txt')),
'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean.fasta'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')),
'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean/dna-sequences.fasta'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv'))
}
Explanation: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep. Here we will use a single mock community, but two different versions of the reference database.
End of explanation
method_parameters_combinations = { # probabilistic classifiers
'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0]},
# global alignment classifiers
'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0],
'similarity': [0.9, 0.97, 0.99],
'uclust_max_accepts': [1, 3, 5]},
}
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Assignment Using QIIME 1 or Command-Line Classifiers
Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.
End of explanation
command_template = "source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 7000"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.fna',)
Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
End of explanation
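To make the {0}-{5} fields concrete, here is a hypothetical sketch of the expansion parameter_sweep presumably performs: a cartesian product over each method's parameter grid, formatted into the template. The real tax_credit implementation may differ; sweep_commands and its option formatting are invented for illustration.

```python
# Hypothetical expansion of a method/parameter grid into shell commands.
from itertools import product

def sweep_commands(template, out_dir, infile, ref_seqs, ref_tax, method_params):
    commands = []
    for method, params in sorted(method_params.items()):
        names = sorted(params)
        for combo in product(*(params[n] for n in names)):
            opts = ' '.join('--%s %s' % (n, v) for n, v in zip(names, combo))
            run_dir = '%s/%s/%s' % (out_dir, method, '_'.join(map(str, combo)))
            commands.append(template.format(run_dir, infile, ref_seqs, ref_tax,
                                            method, opts))
    return commands

cmds = sweep_commands('assign_taxonomy.py -i {1} -o {0} -r {2} -t {3} -m {4} {5}',
                      'results', 'seqs.fna', 'ref.fna', 'tax.tsv',
                      {'rdp': {'confidence': [0.5, 1.0]}})
```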
print(len(commands))
commands[0]
Explanation: As a sanity check, we can look at the first command that was generated and the number of commands generated.
End of explanation
Parallel(n_jobs=1)(delayed(system)(command) for command in commands)
Explanation: Finally, we run our commands.
End of explanation
new_reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
reference_dbs = {'gg_13_8_otus_clean_trim150' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'gg_13_8_otus_clean_full16S' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'unite_20.11.2016_clean_trim100' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'unite_20.11.2016_clean_fullITS' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/rep_set/rep_set_16S_only/99/99_otus_16S_515f-806r_trim250.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')),
'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')),
'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean_515f-806r_trim250.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza'))
}
method_parameters_combinations = { # alignment-based classifiers
'blast+' : {'p-evalue': [0.001],
'p-maxaccepts': [1, 10],
'p-min-id': [0.80, 0.99],
'p-min-consensus': [0.51, 0.99]}
}
command_template = "mkdir -p {0}; qiime feature-classifier blast --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza',)
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
method_parameters_combinations = { # alignment-based classifiers
'vsearch' : {'p-maxaccepts': [1, 10],
'p-min-id': [0.97, 0.99],
'p-min-consensus': [0.51, 0.99]}
}
command_template = "mkdir -p {0}; qiime feature-classifier vsearch --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza',)
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
new_reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
reference_dbs = {'gg_13_8_otus_clean_trim150' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150-classifier.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'gg_13_8_otus_clean_full16S' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean-classifier.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'unite_20.11.2016_clean_trim100' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100-classifier.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'unite_20.11.2016_clean_fullITS' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean-classifier.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_515f-806r_trim250-classifier.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.txt')),
'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean-classifier.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')),
'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean_515f-806r_trim250-classifier.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv'))
}
method_parameters_combinations = {
'q2-nb' : {'p-confidence': [0.0, 0.2, 0.4, 0.6, 0.8]}
}
command_template = "mkdir -p {0}; qiime feature-classifier classify --i-reads {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-classifier {2} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza',)
Parallel(n_jobs=1)(delayed(system)(command) for command in commands)
Explanation: QIIME2 Classifiers
Now let's do it all over again, but with QIIME2 classifiers (which require different input files and command templates). Note that the QIIME2 artifact files required for assignment are not included in tax-credit, but can be generated from any reference dataset using qiime tools import.
End of explanation
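All of the sweeps above share one mechanic: every list in method_parameters_combinations is crossed with every other list, and each combination fills the numbered slots of command_template. A minimal sketch of that expansion idea (illustrative only — tax-credit's actual parameter_sweep also pairs datasets with reference databases and has its own output-path conventions):

```python
from itertools import product

def sweep_commands(method_params, template, results_dir, query, ref_seqs, ref_tax):
    # Cross every parameter list with every other and render one shell
    # command per combination; {0}=output dir, {1}=query, {2}=reference
    # reads, {3}=reference taxonomy, {5}=rendered parameter flags.
    commands = []
    for method, params in method_params.items():
        names = sorted(params)
        for values in product(*(params[n] for n in names)):
            flags = ' '.join('--{0} {1}'.format(n, v) for n, v in zip(names, values))
            out_dir = '/'.join([results_dir, method] + [str(v) for v in values])
            commands.append(template.format(out_dir, query, ref_seqs, ref_tax, method, flags))
    return commands

demo = sweep_commands(
    {'blast+': {'p-evalue': [0.001], 'p-maxaccepts': [1, 10]}},
    'mkdir -p {0}; classify --query {1} --refs {2} --tax {3} {5}',
    'results', 'rep_seqs.qza', 'ref.qza', 'tax.qza')
```

With one value for p-evalue and two for p-maxaccepts, demo holds two rendered commands.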
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
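Before running the cell above it can be worth checking that taxonomy_glob really matches the nested results layout. A throwaway sanity check (the directory names below are invented, not real results):

```python
import os
from glob import glob
from os.path import join
from tempfile import mkdtemp

# Build a fake results tree with the same four levels of nesting
# (dataset / reference / method / parameters) and confirm the wildcard
# pattern picks up the assignments file.
results_dir = mkdtemp()
leaf = join(results_dir, 'mock-1', 'gg_13_8_otus', 'blast+', '0.001:1:0.8:0.51')
os.makedirs(leaf)
open(join(leaf, 'rep_seqs_tax_assignments.txt'), 'w').close()

taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')
matches = glob(taxonomy_glob)
```

matches should hold exactly the one file created above; an empty list means the pattern and the directory depth disagree.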
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
for community in dataset_reference_combinations:
method_dirs = glob(join(results_dir, community[0], '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
Explanation: Move result files to repository
Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and method_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
End of explanation
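The move is essentially "relocate each result directory while keeping its dataset/reference/method/parameters sub-path intact". A sketch of that idea with temporary directories (the helper name is mine for illustration, not tax-credit's move_results_to_repository):

```python
import os
import shutil
from os.path import join, relpath, dirname, isdir
from tempfile import mkdtemp

def move_preserving_subpath(method_dirs, src_root, dest_root):
    # Recreate each directory's relative path under dest_root, then move it.
    for d in method_dirs:
        target = join(dest_root, relpath(d, src_root))
        os.makedirs(dirname(target), exist_ok=True)
        shutil.move(d, target)

src_root, dest_root = mkdtemp(), mkdtemp()
d = join(src_root, 'mock-1', 'gg_13_8_otus', 'blast+', '0.001:1')
os.makedirs(d)
move_preserving_subpath([d], src_root, dest_root)
moved = isdir(join(dest_root, 'mock-1', 'gg_13_8_otus', 'blast+', '0.001:1'))
```

After the call, the directory exists under dest_root and is gone from src_root.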
for community in dataset_reference_combinations:
community_dir = join(precomputed_results_dir, community[0])
exp_observations = join(community_dir, '*', 'expected')
new_community_exp_dir = join(community_dir, community[1], 'expected')
!mkdir {new_community_exp_dir}; cp {exp_observations}/* {new_community_exp_dir}
Explanation: Do not forget to copy the expected taxonomy files for this mock community!
End of explanation |
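The same copy can be done without shell magic, which helps on systems where mkdir/cp behave differently. A hedged pure-Python equivalent (the demo paths are throwaway, not real mock-community data):

```python
import shutil
from glob import glob
from os import makedirs
from os.path import join, basename, isdir, isfile
from tempfile import mkdtemp

def copy_expected(community_dir, new_reference):
    # Mirror `mkdir new/expected; cp */expected/* new/expected` with shutil.
    new_exp_dir = join(community_dir, new_reference, 'expected')
    makedirs(new_exp_dir, exist_ok=True)
    for src in glob(join(community_dir, '*', 'expected', '*')):
        if isdir(src):
            shutil.copytree(src, join(new_exp_dir, basename(src)))
        else:
            shutil.copy(src, new_exp_dir)

community_dir = mkdtemp()
makedirs(join(community_dir, 'old_ref', 'expected'))
open(join(community_dir, 'old_ref', 'expected', 'expected-taxonomy.tsv'), 'w').close()
copy_expected(community_dir, 'new_ref')
copied = isfile(join(community_dir, 'new_ref', 'expected', 'expected-taxonomy.tsv'))
```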
9,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposphere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concentration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
9,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Survival Analysis (1)
source
Step2: political leaders
start | Python Code:
import pandas as pd
import lifelines
import matplotlib.pylab as plt
%matplotlib inline
data = lifelines.datasets.load_dd()
Explanation: Survival Analysis (1)
source : lifelines documents (https://lifelines.readthedocs.io/)
Survival analysis is useful for studying time-to-event data, such as machine breakdowns or user churn rates.
It usually uses 'Kaplan-Meier'.
package : lifelines
End of explanation
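Before reaching for lifelines, it can help to see what the Kaplan-Meier estimator actually computes. The following is a minimal hand-rolled sketch on made-up toy data (the durations and censoring flags below are illustration values, not drawn from the load_dd dataset):

```python
# Hand-rolled Kaplan-Meier estimate on tiny toy data (illustration values only).
# At each distinct event time t: S(t) *= (n_at_risk - deaths_at_t) / n_at_risk.
durations = [1, 2, 2, 3, 4, 5]  # time until event or censoring
observed = [1, 1, 0, 1, 1, 0]   # 1 = event observed, 0 = right-censored

def kaplan_meier(durations, observed):
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for d, e in pairs if d == t and e == 1)
        leaving = sum(1 for d, _ in pairs if d == t)
        if deaths:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving  # both events and censored subjects leave the risk set
        i += leaving
    return curve

print(kaplan_meier(durations, observed))
# e.g. the first factor is (6 - 1) / 6 for the single event at t = 1
```

This mirrors what KaplanMeierFitter.fit computes from the duration and observed columns below, just without confidence intervals.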
data.head()
data.tail()
from lifelines import KaplanMeierFitter
kmf = KaplanMeierFitter()
# kaplan-meier
# KaplanMeierFitter.fit(event_times, event_observed=None,
# timeline=None, label='KM-estimate',
# alpha=None)
Parameters:
event_times: an array, or pd.Series, of length n of times that
the death event occurred at
event_observed: an array, or pd.Series, of length n -- True if
the death was observed, False if the event was lost
(right-censored). Defaults all True if event_observed==None
timeline: set the index of the survival curve to this positively increasing array.
label: a string to name the column of the estimate.
alpha: the alpha value in the confidence intervals.
Overrides the initializing alpha for this call to fit only.
Returns:
self, with new properties like 'survival_function_'
T = data["duration"]
C = data["observed"]
kmf.fit(T, event_observed=C)
kmf.survival_function_.plot()
plt.title('Survival function of political regimes');
kmf.plot()
kmf.median_
## Once a leader is elected, there is a 50% chance he or she will be gone within 3 years.
ax = plt.subplot(111)
dem = (data["democracy"] == "Democracy")
kmf.fit(T[dem], event_observed=C[dem], label="Democratic Regimes")
kmf.plot(ax=ax, ci_force_lines=True)
kmf.fit(T[~dem], event_observed=C[~dem], label="Non-democratic Regimes")
kmf.plot(ax=ax, ci_force_lines=True)
## ci_force_lines : force the confidence intervals to be line plots
plt.ylim(0,1);
plt.title("Lifespans of different global regimes");
Explanation: political leaders
start : birth
end : retirement
End of explanation |
9,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: <table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step3: Utilities
Run the following cell to create some utilities that will be needed later:
A helper method to load an image
A map of model names to TF Hub handles
A list of tuples with the human keypoints of the COCO 2017 dataset, needed for models with keypoints
Step4: Visualization tools
To visualize the images with the properly detected boxes, keypoints and segmentations, we will use the TensorFlow Object Detection API. Clone and install the repository.
Step5: Install the Object Detection API.
Step6: Now we can import the dependencies we will need later.
Step7: Load label map data (for plotting)
Label maps map index numbers to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
For simplicity, we will load it from the repository where we loaded the Object Detection API code.
Step8: Build a detection model and load pre-trained model weights
Here we will choose which object detection model to use. Select the architecture and it will be loaded automatically. If you want to change the model to try other architectures later, just change the next cell and execute the following ones.
Tip
Step9: Loading the selected model from TensorFlow Hub
Here we just need the model handle that was selected and use the Tensorflow Hub library to load it into memory.
Step10: Loading an image
Let's try the model on a simple image. To help with this, we provide a list of test images.
Here are some simple things to try out if you are curious:
Try running inference on your own images; just upload them to Colab and load them the same way as in the cell below.
Modify some of the input images and see if detection still works. Some simple things to try out here include flipping the image horizontally or converting it to grayscale (note that the input image is expected to have 3 channels).
Note
Step11: Doing the inference
To do the inference we just need to call our TF Hub loaded model.
Things you can try:
Print out result['detection_boxes'] and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
Inspect the other output keys present in the result. Full documentation can be seen on the model's documentation page (point your browser to the model handle printed above).
Step12: Visualizing the results
Here is where we need the TensorFlow Object Detection API to show the boxes from the inference step (and the keypoints when available).
The full documentation of this method can be seen here.
Here you can, for example, set min_score_thresh to other values (between 0 and 1) to allow more detections in or to filter out more detections.
Step13: [Optional]
Among the available object detection models there is Mask R-CNN, whose output allows instance segmentation.
これを可視化するためには、上記と同じメソッドを使用しますが、次の追加のパラメータを追加します。instance_masks=output_dict.get('detection_masks_reframed', None) | Python Code:
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# This Colab requires TF 2.5.
!pip install -U "tensorflow>=2.5"
import os
import pathlib
import matplotlib
import matplotlib.pyplot as plt
import io
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from six.moves.urllib.request import urlopen
import tensorflow as tf
import tensorflow_hub as hub
tf.get_logger().setLevel('ERROR')
Explanation: <table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub でソースを表示</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
<td> <a href="https://tfhub.dev/tensorflow/collections/object_detection/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TF Hub モデルを参照</a> </td>
</table>
TensorFlow Hub Object Detection Colab
Welcome to the TensorFlow Hub Object Detection Colab! This notebook will take you through the steps of running an "out-of-the-box" object detection model on images.
More models
This collection contains TF 2 object detection models that have been trained on the COCO 2017 dataset. Here you can find all object detection models that are currently hosted on tfhub.dev.
Imports and Setup
Let's start with the base imports.
End of explanation
# @title Run this!!
def load_image_into_numpy_array(path):
Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: the file path to the image
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
image = None
if(path.startswith('http')):
response = urlopen(path)
image_data = response.read()
image_data = BytesIO(image_data)
image = Image.open(image_data)
else:
image_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(image_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(1, im_height, im_width, 3)).astype(np.uint8)
ALL_MODELS = {
'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1',
'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1',
'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1',
'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1',
'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1',
'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1',
'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1',
'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1',
'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1',
'EfficientDet D1 640x640' : 'https://tfhub.dev/tensorflow/efficientdet/d1/1',
'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1',
'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1',
'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1',
'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1',
'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1',
'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1',
'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2',
'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1',
'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1',
'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1',
'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1',
'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1',
'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1',
'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1',
'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1',
'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1',
'Faster R-CNN ResNet50 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1',
'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1',
'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1',
'Faster R-CNN ResNet101 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1',
'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1',
'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1',
'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1',
'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1',
'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1',
'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1',
'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1',
'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'
}
IMAGES_FOR_TEST = {
'Beach' : 'models/research/object_detection/test_images/image2.jpg',
'Dogs' : 'models/research/object_detection/test_images/image1.jpg',
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg',
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg',
}
COCO17_HUMAN_POSE_KEYPOINTS = [(0, 1),
(0, 2),
(1, 3),
(2, 4),
(0, 5),
(0, 6),
(5, 7),
(7, 9),
(6, 8),
(8, 10),
(5, 6),
(5, 11),
(6, 12),
(11, 12),
(11, 13),
(13, 15),
(12, 14),
(14, 16)]
Explanation: Utilities
Run the following cell to create some utilities that will be needed later:
A helper method to load an image
A map of model names to TF Hub handles
A list of tuples with the human keypoints of the COCO 2017 dataset, needed for models with keypoints
End of explanation
# Clone the tensorflow models repository
!git clone --depth 1 https://github.com/tensorflow/models
Explanation: Visualization tools
To visualize the images with the properly detected boxes, keypoints and segmentations, we will use the TensorFlow Object Detection API. Clone and install the repository.
End of explanation
%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
Explanation: Install the Object Detection API.
End of explanation
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops
%matplotlib inline
Explanation: Now we can import the dependencies we will need later.
End of explanation
PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
Explanation: Load label map data (for plotting)
Label maps map index numbers to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
For simplicity, we will load it from the repository where we loaded the Object Detection API code.
End of explanation
#@title Model Selection { display-mode: "form", run: "auto" }
model_display_name = 'CenterNet HourGlass104 Keypoints 512x512' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']
model_handle = ALL_MODELS[model_display_name]
print('Selected model:'+ model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))
Explanation: Build a detection model and load pre-trained model weights
Here we will choose which object detection model to use. Select the architecture and it will be loaded automatically. If you want to change the model to try other architectures later, just change the next cell and execute the following ones.
Tip: if you want to read more details about the selected model, you can follow the link (the model handle) and read the additional documentation on TF Hub. After you select a model, printing the handle makes this easy.
End of explanation
print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')
Explanation: Loading the selected model from TensorFlow Hub
Here we just need the model handle that was selected and use the Tensorflow Hub library to load it into memory.
End of explanation
#@title Image Selection (don't forget to execute the cell!) { display-mode: "form"}
selected_image = 'Beach' # @param ['Beach', 'Dogs', 'Naxos Taverna', 'Beatles', 'Phones', 'Birds']
flip_image_horizontally = False #@param {type:"boolean"}
convert_image_to_grayscale = False #@param {type:"boolean"}
image_path = IMAGES_FOR_TEST[selected_image]
image_np = load_image_into_numpy_array(image_path)
# Flip horizontally
if(flip_image_horizontally):
image_np[0] = np.fliplr(image_np[0]).copy()
# Convert image to grayscale
if(convert_image_to_grayscale):
image_np[0] = np.tile(
np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)
plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()
Explanation: Loading an image
Let's try the model on a simple image. To help with this, we provide a list of test images.
Here are some simple things to try out if you are curious:
Try running inference on your own images; just upload them to Colab and load them the same way as in the cell below.
Modify some of the input images and see if detection still works. Some simple things to try out here include flipping the image horizontally or converting it to grayscale (note that the input image is expected to have 3 channels).
Note: when using images with an alpha channel, keep in mind that the model expects 3-channel images and the alpha will count as a 4th.
End of explanation
# running inference
results = hub_model(image_np)
# different object detection models have additional results
# all of them are explained in the documentation
result = {key:value.numpy() for key,value in results.items()}
print(result.keys())
Explanation: Doing the inference
To do the inference we just need to call our TF Hub loaded model.
Things you can try:
Print out result['detection_boxes'] and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
Inspect the other output keys present in the result. Full documentation can be seen on the model's documentation page (point your browser to the model handle printed above).
End of explanation
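Since result['detection_boxes'] comes back normalized to [0, 1], here is a small sketch of converting one [ymin, xmin, ymax, xmax] box to pixel coordinates. The box and image size below are illustration values, not taken from an actual detection:

```python
# Convert a normalized [ymin, xmin, ymax, xmax] box to pixel coordinates,
# assuming the [0, 1] convention described above. Sample values are made up.
def to_pixels(box, img_height, img_width):
    ymin, xmin, ymax, xmax = box
    return (int(ymin * img_height), int(xmin * img_width),
            int(ymax * img_height), int(xmax * img_width))

print(to_pixels([0.25, 0.5, 0.75, 1.0], 480, 640))  # (120, 320, 360, 640)
```

Note that the visualization utilities used below handle this conversion internally via use_normalized_coordinates=True; this helper is only for inspecting raw outputs by hand.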
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if 'detection_keypoints' in result:
keypoints = result['detection_keypoints'][0]
keypoint_scores = result['detection_keypoint_scores'][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_detections[0])
plt.show()
Explanation: Visualizing the results
Here is where we need the TensorFlow Object Detection API to show the boxes from the inference step (and the keypoints when available).
The full documentation of this method can be seen here.
Here you can, for example, set min_score_thresh to other values (between 0 and 1) to allow more detections in or to filter out more detections.
End of explanation
# Handle models with masks:
image_np_with_mask = image_np.copy()
if 'detection_masks' in result:
# we need to convert np.arrays to tensors
detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes,
image_np.shape[1], image_np.shape[2])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
result['detection_masks_reframed'] = detection_masks_reframed.numpy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_mask[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
instance_masks=result.get('detection_masks_reframed', None),
line_thickness=8)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()
Explanation: [Optional]
Among the available object detection models there is Mask R-CNN, whose output allows instance segmentation.
To visualize it we will use the same method as before, but adding an additional parameter: instance_masks=output_dict.get('detection_masks_reframed', None)
End of explanation |
9,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining the model
First, we define the model as a probabilistic program inheriting from pyprob.Model. Models inherit from torch.nn.Module and can be potentially trained with gradient-based optimization (not covered in this example).
The forward function can have any number and type of arguments as needed.
Step1: Finding the correct posterior analytically
Since all distributions in this model are Gaussians, we can compute the posterior analytically and compare the true posterior to the inferred one.
Assuming that the prior and likelihood are $p(x) = \mathcal{N}(\mu_0, \sigma_0)$ and $p(y|x) = \mathcal{N}(x, \sigma)$ respectively, and $y_1, y_2, \ldots, y_n$ are the observed values, the posterior is $p(x|y) = \mathcal{N}(\mu_p, \sigma_p)$ where,
$$
\begin{align}
\sigma_{p}^{2} & = \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_{0}^{2}}} \
\mu_p & = \sigma_{p}^{2} \left( \frac{\mu_0}{\sigma_{0}^{2}} + \frac{n\overline{y}}{\sigma^2} \right)
\end{align}
$$
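As a quick numerical sketch of these formulas (independent of pyprob), we can plug in the model's parameters ($\mu_0 = 1$, $\sigma_0^2 = 5$, $\sigma^2 = 2$). The two observed values used below are illustration inputs, not outputs of the model:

```python
# Conjugate-Gaussian update from the formulas above, with the model's
# prior/likelihood parameters; ys holds two illustrative observations.
import math

mu_0, var_0 = 1.0, 5.0   # prior mean and variance
var_lik = 2.0            # likelihood variance
ys = [8.0, 9.0]

n = len(ys)
var_p = 1.0 / (n / var_lik + 1.0 / var_0)
mu_p = var_p * (mu_0 / var_0 + n * (sum(ys) / n) / var_lik)
print(mu_p, math.sqrt(var_p))  # posterior mean 7.25, std about 0.91
```

This gives a closed-form target that the sampled posteriors later in the notebook can be checked against.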
The following class implements computing this posterior distribution. We also implement some helper functions and variables for plotting the correct posterior and prior.
Step2: Prior distribution
We inspect the prior distribution to see if it behaves in the way we intended. First we construct an Empirical distribution with forward samples from the model.
Note
Step3: We can plot a histogram of these samples that are held by the Empirical distribution.
Step4: Posterior inference with importance sampling
For a given set of observations, we can get samples from the posterior distribution.
Step5: Regular importance sampling uses proposals from the prior distribution. We can see this by plotting the histogram of the posterior distribution without using the importance weights. As expected, this is the same as the prior distribution.
Step6: When we do use the weights, we end up with the correct posterior distribution. The following shows the sampled posterior with the correct posterior (orange curve).
Step7: In practice, it is advised to use methods of the Empirical posterior distribution instead of dealing with the weights directly, which ensures that the weights are used in the correct way.
For instance, we can get samples from the posterior, compute its mean and standard deviation, and evaluate expectations of a function under the distribution
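To make the weighting concrete, here is a minimal pyprob-independent sketch of the self-normalized importance sampling that such an Empirical posterior performs: sample from the prior, weight each sample by the likelihood of the observations, and normalize. The observed values 8 and 9 are illustration inputs:

```python
# Self-normalized importance sampling for the Gaussian-unknown-mean model,
# using only the standard library. The observations are illustrative.
import math
import random

random.seed(0)
prior_mean, prior_std = 1.0, math.sqrt(5)
lik_std = math.sqrt(2)
ys = [8.0, 9.0]

def log_normal_pdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

samples = [random.gauss(prior_mean, prior_std) for _ in range(20000)]
log_w = [sum(log_normal_pdf(y, mu, lik_std) for y in ys) for mu in samples]
m = max(log_w)                          # stabilize before exponentiating
w = [math.exp(lw - m) for lw in log_w]
total = sum(w)
post_mean = sum(wi * mu for wi, mu in zip(w, samples)) / total
print(post_mean)  # close to the analytic posterior mean of 7.25
```

Using the Empirical methods instead of rolling this by hand avoids mistakes such as forgetting to normalize or mixing up log and linear weights.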
Step8: Inference compilation
Inference compilation is a technique where a deep neural network is used for parameterizing the proposal distribution in importance sampling (https
Step9: We now construct the posterior distribution using samples from inference compilation, using the trained inference network.
A much smaller number of samples is enough (200 vs. 5000) because the inference network provides good proposals based on the given observations. We can see that the proposal distribution given by the inference network is doing a much better job than the prior, by plotting the posterior samples without the importance weights, for a selection of observations.
Step10: We can see that the proposal distribution given by the inference network is already a good estimate of the true posterior, which makes the inferred posterior a much better estimate than the prior, even when using far fewer samples.
Step11: Inference compilation performs amortized inference, which means that the same trained network provides proposal distributions for any observed values.
We can try performing inference using the same trained network with different observed values. | Python Code:
class GaussianUnknownMean(Model):
def __init__(self):
super().__init__(name='Gaussian with unknown mean') # give the model a name
self.prior_mean = 1
self.prior_std = math.sqrt(5)
self.likelihood_std = math.sqrt(2)
def forward(self): # Needed to specify how the generative model is run forward
# sample the (latent) mean variable to be inferred:
mu = pyprob.sample(Normal(self.prior_mean, self.prior_std)) # NOTE: sample -> denotes latent variables
# define the likelihood
likelihood = Normal(mu, self.likelihood_std)
# Lets add two observed variables
# -> the 'name' argument is used later to assignment values:
pyprob.observe(likelihood, name='obs0') # NOTE: observe -> denotes observable variables
pyprob.observe(likelihood, name='obs1')
# return the latent quantity of interest
return mu
model = GaussianUnknownMean()
Explanation: Defining the model
First, we define the model as a probabilistic program inheriting from pyprob.Model. Models inherit from torch.nn.Module and can potentially be trained with gradient-based optimization (not covered in this example).
The forward function can have any number and type of arguments as needed.
End of explanation
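Stripped of the pyprob machinery, the generative story encoded by forward can be sketched in plain Python (a standalone illustration; pyprob's sample/observe additionally record the execution trace for inference):

```python
import math
import random

def generate():
    # Latent variable: mu ~ N(1, sqrt(5))  -- the prior
    mu = random.gauss(1.0, math.sqrt(5.0))
    # Observables: obs0, obs1 ~ N(mu, sqrt(2)), i.i.d. given mu  -- the likelihood
    obs0 = random.gauss(mu, math.sqrt(2.0))
    obs1 = random.gauss(mu, math.sqrt(2.0))
    return mu, obs0, obs1

random.seed(0)
print(generate())  # one joint draw of (latent, obs0, obs1)
```

Running the model forward repeatedly is exactly how the prior samples in the next cell are produced.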
def plot_function(min_val, max_val, func, *args, **kwargs):
x = np.linspace(min_val,max_val,int((max_val-min_val)*50))
plt.plot(x, np.vectorize(func)(x), *args, **kwargs)
def get_dist_pdf(dist):
return lambda x: math.exp(dist.log_prob(x))
class CorrectDistributions:
def __init__(self, model):
self.prior_mean = model.prior_mean
self.prior_std = model.prior_std
self.likelihood_std = model.likelihood_std
self.prior_dist = Normal(self.prior_mean, self.prior_std)
@property
def observed_list(self):
return self.__observed_list
@observed_list.setter
def observed_list(self, new_observed_list):
self.__observed_list = new_observed_list
self.construct_correct_posterior()
def construct_correct_posterior(self):
n = len(self.observed_list)
posterior_var = 1/(n/self.likelihood_std**2 + 1/self.prior_std**2)
posterior_mu = posterior_var * (self.prior_mean/self.prior_std**2 + n*np.mean(self.observed_list)/self.likelihood_std**2)
self.posterior_dist = Normal(posterior_mu, math.sqrt(posterior_var))
def prior_pdf(self, model, x):
p = Normal(model.prior_mean, model.prior_std)
return math.exp(p.log_prob(x))
def plot_posterior(self, min_val, max_val):
if not hasattr(self, 'posterior_dist'):
raise AttributeError('observed values are not set yet, and posterior is not defined.')
plot_function(min_val, max_val, get_dist_pdf(self.posterior_dist), label='correct posterior', color='orange')
def plot_prior(self, min_val, max_val):
plot_function(min_val, max_val, get_dist_pdf(self.prior_dist), label='prior', color='green')
correct_dists = CorrectDistributions(model)
Explanation: Finding the correct posterior analytically
Since all distributions are gaussians in this model, we can analytically compute the posterior and we can compare the true posterior to the inferenced one.
Assuming that the prior and likelihood are $p(x) = \mathcal{N}(\mu_0, \sigma_0)$ and $p(y|x) = \mathcal{N}(x, \sigma)$ respectively and, $y_1, y_2, \ldots y_n$ are the observed values, the posterior would be $p(x|y) = \mathcal{N}(\mu_p, \sigma_p)$ where,
$$
\begin{align}
\sigma_{p}^{2} & = \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_{0}^{2}}} \
\mu_p & = \sigma_{p}^{2} \left( \frac{\mu_0}{\sigma_{0}^{2}} + \frac{n\overline{y}}{\sigma^2} \right)
\end{align}
$$
The following class implements this posterior computation, along with some helper functions for plotting the correct posterior and prior.
End of explanation
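As a quick sanity check (an illustration added here, not part of the original notebook), the closed-form posterior above can be verified against a brute-force normalization of prior × likelihood on a grid, using only the standard library:

```python
import math

# Constants from the notebook's model: prior N(1, sqrt(5)), likelihood N(mu, sqrt(2))
prior_mean, prior_var = 1.0, 5.0
lik_var = 2.0
observed = [8.0, 9.0]  # the observations used later in the notebook

# Closed-form conjugate posterior (the formulas above)
n = len(observed)
post_var = 1.0 / (n / lik_var + 1.0 / prior_var)
post_mu = post_var * (prior_mean / prior_var + sum(observed) / lik_var)

# Brute-force check: normalize prior * likelihood on a fine grid
def log_unnorm(x):
    lp = -(x - prior_mean) ** 2 / (2 * prior_var)
    for y in observed:
        lp -= (x - y) ** 2 / (2 * lik_var)
    return lp

xs = [i * 0.001 for i in range(-5000, 20001)]   # grid over [-5, 20]
ws = [math.exp(log_unnorm(x)) for x in xs]
z = sum(ws)
grid_mu = sum(w * x for w, x in zip(ws, xs)) / z
grid_var = sum(w * (x - grid_mu) ** 2 for w, x in zip(ws, xs)) / z

print(post_mu, post_var)   # 7.25 and 5/6 for these observations
print(grid_mu, grid_var)   # the grid estimate agrees to several decimals
```

For the observations [8, 9], both routes give a posterior of roughly N(7.25, 0.833).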
prior = model.prior_results(num_traces=1000)
Explanation: Prior distribution
We inspect the prior distribution to see if it behaves in the way we intended. First we construct an Empirical distribution with forward samples from the model.
Note: Extra arguments passed to prior_results will be forwarded to the model's forward function.
End of explanation
prior.plot_histogram(show=False, alpha=0.75, label='empirical prior')
correct_dists.plot_prior(min(prior.values_numpy()),max(prior.values_numpy()))
plt.legend();
Explanation: We can plot a histogram of the samples held by the Empirical distribution.
End of explanation
correct_dists.observed_list = [8, 9] # Observations
# sample from posterior (5000 samples)
posterior = model.posterior_results(
num_traces=5000, # the number of samples estimating the posterior
inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING, # specify which inference engine to use
observe={'obs0': correct_dists.observed_list[0],
'obs1': correct_dists.observed_list[1]} # assign values to the observed values
)
Explanation: Posterior inference with importance sampling
For a given set of observations, we can get samples from the posterior distribution.
End of explanation
posterior_unweighted = posterior.unweighted()
posterior_unweighted.plot_histogram(show=False, alpha=0.75, label='empirical proposal')
correct_dists.plot_prior(min(posterior_unweighted.values_numpy()),
max(posterior_unweighted.values_numpy()))
correct_dists.plot_posterior(min(posterior_unweighted.values_numpy()),
max(posterior_unweighted.values_numpy()))
plt.legend();
Explanation: Regular importance sampling uses proposals from the prior distribution. We can see this by plotting the histogram of the posterior distribution without using the importance weights. As expected, this is the same with the prior distribution.
End of explanation
posterior.plot_histogram(show=False, alpha=0.75, bins=50, label='inferred posterior')
correct_dists.plot_posterior(min(posterior.values_numpy()),
max(posterior.values_numpy()))
plt.legend();
Explanation: When we do use the weights, we end up with the correct posterior distribution. The following shows the sampled posterior with the correct posterior (orange curve).
End of explanation
print(posterior.sample())
print(posterior.mean)
print(posterior.stddev)
print(posterior.expectation(lambda x: torch.sin(x)))
Explanation: In practice, it is advised to use methods of the Empirical posterior distribution instead of dealing with the weights directly, which ensures that the weights are used in the correct way.
For instance, we can get samples from the posterior, compute its mean and standard deviation, and evaluate expectations of a function under the distribution:
End of explanation
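Under the hood, the Empirical methods above perform self-normalized importance sampling. A minimal standalone sketch of those mechanics for this model (plain Python, not pyprob's actual implementation):

```python
import math
import random

random.seed(0)
prior_mean, prior_std = 1.0, math.sqrt(5.0)
lik_std = math.sqrt(2.0)
observed = [8.0, 9.0]

def log_normal_pdf(x, mu, std):
    return -0.5 * ((x - mu) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi))

# Propose from the prior; the (log-)weight is the likelihood of the observations
samples, log_ws = [], []
for _ in range(200_000):
    mu = random.gauss(prior_mean, prior_std)
    samples.append(mu)
    log_ws.append(sum(log_normal_pdf(y, mu, lik_std) for y in observed))

# Self-normalize, subtracting the max log-weight for numerical stability
m = max(log_ws)
ws = [math.exp(lw - m) for lw in log_ws]
z = sum(ws)
weighted_mean = sum(w * x for w, x in zip(ws, samples)) / z
print(weighted_mean)  # close to the analytic posterior mean of 7.25
```

With the prior as proposal most weights are tiny, which is why thousands of traces were needed above; this is the inefficiency inference compilation addresses next.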
model.learn_inference_network(num_traces=20000,
observe_embeddings={'obs0' : {'dim' : 32},
'obs1': {'dim' : 32}},
inference_network=pyprob.InferenceNetwork.LSTM)
Explanation: Inference compilation
Inference compilation is a technique where a deep neural network is used to parameterize the proposal distribution in importance sampling (https://arxiv.org/abs/1610.09900). This neural network, which we call the inference network, is automatically generated and trained with data sampled from the model.
We can learn an inference network for our model.
End of explanation
# sample from posterior (200 samples)
posterior = model.posterior_results(
num_traces=200, # the number of samples estimating the posterior
inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING_WITH_INFERENCE_NETWORK, # specify which inference engine to use
observe={'obs0': correct_dists.observed_list[0],
'obs1': correct_dists.observed_list[1]} # assign values to the observed values
)
posterior_unweighted = posterior.unweighted()
posterior_unweighted.plot_histogram(show=False, bins=50, alpha=0.75, label='empirical proposal')
correct_dists.plot_posterior(min(posterior.values_numpy()),
max(posterior.values_numpy()))
plt.legend();
Explanation: We now construct the posterior distribution using samples from inference compilation, using the trained inference network.
A much smaller number of samples is enough (200 vs. 5000) because the inference network provides good proposals based on the given observations. We can see that the proposal distribution given by the inference network does a much better job than the prior, by plotting the posterior samples without the importance weights, for a selection of observations.
End of explanation
posterior.plot_histogram(show=False, bins=50, alpha=0.75, label='inferred posterior')
correct_dists.plot_posterior(min(posterior.values_numpy()),
max(posterior.values_numpy()))
plt.legend();
Explanation: We can see that the proposal distribution given by the inference network is already a good estimate of the true posterior, which makes the inferred posterior a much better estimate than the prior, even with far fewer samples.
End of explanation
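One way to quantify why far fewer samples suffice is the effective sample size (ESS) of the importance weights. The sketch below compares a prior proposal against an observation-dependent one; as an illustrative assumption, the analytic posterior stands in for what a trained inference network would propose:

```python
import math
import random

random.seed(1)
prior_mean, prior_std = 1.0, math.sqrt(5.0)
lik_std = math.sqrt(2.0)
observed = [8.0, 9.0]

# Analytic posterior for these observations (stand-in for the network's proposal)
post_var = 1.0 / (len(observed) / lik_std ** 2 + 1.0 / prior_std ** 2)
post_mu = post_var * (prior_mean / prior_std ** 2 + sum(observed) / lik_std ** 2)

def log_pdf(x, mu, std):
    return -0.5 * ((x - mu) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi))

def ess(proposal_mu, proposal_std, n=2000):
    log_ws = []
    for _ in range(n):
        x = random.gauss(proposal_mu, proposal_std)
        # importance weight: prior(x) * likelihood(obs | x) / proposal(x)
        lw = (log_pdf(x, prior_mean, prior_std)
              + sum(log_pdf(y, x, lik_std) for y in observed)
              - log_pdf(x, proposal_mu, proposal_std))
        log_ws.append(lw)
    m = max(log_ws)
    ws = [math.exp(lw - m) for lw in log_ws]
    return sum(ws) ** 2 / sum(w * w for w in ws)

ess_prior = ess(prior_mean, prior_std)               # proposal = prior
ess_posterior = ess(post_mu, math.sqrt(post_var))    # proposal ~ posterior
print(ess_prior, ess_posterior)
```

With the prior proposal most of the weight mass concentrates on a handful of samples, while the posterior-shaped proposal keeps nearly all of its samples effective.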
correct_dists.observed_list = [12, 10] # New observations
posterior = model.posterior_results(
num_traces=200,
inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING_WITH_INFERENCE_NETWORK, # specify which inference engine to use
observe={'obs0': correct_dists.observed_list[0],
'obs1': correct_dists.observed_list[1]}
)
posterior_unweighted = posterior.unweighted()
posterior_unweighted.plot_histogram(show=False, bins=50, alpha=0.75, label='empirical proposal')
correct_dists.plot_posterior(min(posterior.values_numpy()),
max(posterior.values_numpy()))
plt.legend();
posterior.plot_histogram(show=False, bins=50, alpha=0.75, label='inferred posterior')
correct_dists.plot_posterior(min(posterior.values_numpy()),
max(posterior.values_numpy()))
plt.legend();
Explanation: Inference compilation performs amortized inference, which means the same trained network provides proposal distributions for any observed values.
We can try performing inference using the same trained network with different observed values.
End of explanation |
9,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
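For readers unfamiliar with the ES-DOC notebook API, every one of these cells follows the same two-call pattern: select the property by id, then set one or more values. A minimal sketch of that pattern, using a stand-in `DOC` object (the real `DOC` is injected by the ES-DOC notebook environment, and `"Y"` is just a hypothetical choice from the valid list above):

```python
# Stand-in for the ES-DOC notebook's DOC object -- illustration only,
# not the real pyesdoc implementation (which also validates choices).
class MockDoc:
    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, prop_id):
        # Select which property subsequent set_value() calls target.
        self._current_id = prop_id

    def set_value(self, value):
        # ENUM properties with Cardinality 1.N may hold several values.
        self.properties.setdefault(self._current_id, []).append(value)

DOC = MockDoc()
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("Y")  # hypothetical choice from the valid list above
print(DOC.properties)
```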
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
9,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: <a href="https
Step4: We've successfully loaded our data, but there are still a couple of preprocessing steps to go through first. Specifically, we're going to
Step5: Now that we have our features and labels set, it's time to start modeling!
1) Model Selection
The results of our models are only good if our models are correct in the first place. "Good" here can mean different things depending on your application -- we'll talk more about that later in this assignment.
What's important to know for now is that the conclusions we draw are subject to the assumptions and limitations of our underlying model. Thus, making sure we choose the right models to analyze is important! But how does one choose the right model?
1.1) Building Intuition -- Which model do you think is best?
To build some intuition, let's start off with a simple example. We'll use a subset of our data. For ease of visualization, we'll start with just two features, and just 10 cities.
Step6: The following blocks of code will generate 2 candidate classifiers for labeling the datapoints as either label 0 (low obesity rate) or label 1 (high obesity rate).
The code will also output an accuracy score, which for this section is defined as the fraction of datapoints the classifier labels correctly.
Step7: 1A) Which model do you think is better, Classifier 1 or Classifier 2? Explain your reasoning.
1B) Classifier 2 has a higher accuracy than Classifier 1, but has a more complicated decision boundary. Which do you think would generalize best to new data?
1.2) The Importance of Generalizability
So, how did we do? Let's see what happens when we add back the rest of the cities (we'll keep using just the 2 features for ease of visualization.)
Step8: 2A) In light of all the new datapoints, now which classifier do you think is better, Classifier 1 or Classifier 2? Explain your reasoning.
2B) In Question 1, Classifier 1 had a lower accuracy than Classifier 2. After adding more datapoints, we now see the reverse, with Classifier 1 having a higher accuracy than Classifier 2. What happened? Give an explanation (or at least your best guess) for why this is.
2) Evaluation Metrics
In question 1, we were able to visualize how well our models performed by plotting our data and decision boundaries. However, this was only possible because we limited ourselves to just 2 features. Unfortunately for us, humans are only good at visualizing up to 3 dimensions. As you increase the number of features and/or the complexity of your models, creating meaningful visualizations quickly becomes intractable.
Thus, we'll need other methods to measure how well our models perform. In this section, we'll cover some common strategies for evaluating models.
To start, let's finally fit a model to all of our available data (e.g. 500 cities and 8 features). Because the features have different scales, we'll also take care to standardize their values.
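Standardization rescales each feature to zero mean and unit variance, so that features measured on different scales contribute comparably. A minimal sketch with made-up numbers (in practice this is usually done with a library helper such as scikit-learn's `StandardScaler`):

```python
from statistics import mean, pstdev

def standardize(values):
    # Rescale to zero mean and unit variance (z-scores).
    mu = mean(values)
    sigma = pstdev(values)  # population standard deviation
    return [(v - mu) / sigma for v in values]

feature = [10.0, 20.0, 30.0, 40.0]  # made-up raw feature values
scaled = standardize(feature)
print([round(v, 3) for v in scaled])  # [-1.342, -0.447, 0.447, 1.342]
```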
Step9: 2.1) Accuracy
2.1.1) Classification Accuracy
We've seen an example of an evaluation metric already -- accuracy! The accuracy score used in question 1 is more commonly known as classification accuracy, and is the most common metric used in classification problems.
As a refresher, classification accuracy is the ratio of the number of correct predictions to the total number of datapoints:
$\text{classification accuracy} = \frac{\text{# correct predictions}}{\text{# total datapoints}}$
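In code, this metric is just a ratio of counts. A short sketch with made-up predictions and labels:

```python
def classification_accuracy(predicted, actual):
    # Fraction of datapoints whose predicted label matches the true label.
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

predicted = [1, 0, 1, 1, 0]  # made-up model outputs
actual    = [1, 0, 0, 1, 0]  # made-up true labels
print(classification_accuracy(predicted, actual))  # 4 of 5 correct -> 0.8
```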
Step10: 2.2) Train/Test Splits
The ability of a model to perform well on new, previously unseen data (drawn from the same distribution as the data used to create the model) is called Generalization. For most applications, we prefer models that generalize well over those that don't.
One way to check the generalizability of a model is to perform an analysis similar to what we did in question 1. We'll take our data and randomly split it into two subsets: a training set that we'll use to build our model, and a test set that we'll hold out until the model is complete, using it to simulate new, previously unseen data and evaluate how well our model generalizes.
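In practice this split is usually done with a library helper such as scikit-learn's `train_test_split`; the core idea is just a shuffled partition of row indices, as in this sketch:

```python
import random

def train_test_split_indices(n_points, test_fraction=0.2, seed=0):
    # Shuffle the row indices, then carve off the first test_fraction
    # of them as the held-out test set.
    indices = list(range(n_points))
    random.Random(seed).shuffle(indices)
    n_test = int(n_points * test_fraction)
    return indices[n_test:], indices[:n_test]  # train, test

train_idx, test_idx = train_test_split_indices(10, test_fraction=0.3)
print(len(train_idx), len(test_idx))  # 7 3
```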
2.2.1) Choosing Split Sizes
But what percentage of our datapoints should go into our training and test sets respectively? There are no hard and fast rules for this; the right split often depends on the application and how much data we have. The next few questions explore the key tradeoffs
Step11: 2.2.3) Training vs. Test Accuracy
As you may have noticed from 2.2.2, we can calculate two different accuracies after performing a train/test split: a training accuracy based on how well the model performs on the data it was trained on, and a test accuracy based on how well the model performs on held out data. Typically, we select models based on test accuracy. After all, a model's performance on new data after being deployed is usually more important than how well that model performed on the training data.
So why measure training accuracy at all? It turns out training accuracy is often useful for diagnosing some common issues with models.
For example, consider the following scenario
Step12: 2.3A) Play around with the code box above to find a good value of $k$. What happens if $k$ is very large or very small?
2.3B) How does the average score across all folds change with $k$?
2.4) Other Metrics Worth Knowing
2.4.1) What about Regression? -- Mean Squared Error
Different models and different problems often use different accuracy metrics. You may have noticed classification accuracy doesn't make much sense for regression problems, where instead of predicting a label, the model predicts a numeric value. In regression, a common accuracy metric is the Mean Squared Error, or MSE.
$ MSE = \frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2$
It is a measure of the average difference between the predicted value and the actual value. The square ($^2$) can seem counterintuitive at first, but offers some nice mathematical properties.
2.4.2) More Classification Metrics
Accuracy alone never tells the full story. There are a number of other metrics borrowed from statistics that are commonly used for classification models.
It's possible for a model to have a high accuracy, but score very low on some of the following metrics
Step13: 3A) Run the code boxes above and select which model you would choose to deploy. Justify your answer.
3B) Consider a new Classifier D. Its results look like this | Python Code:
Install Data Commons API
We need to install the Data Commons API, since they don't ship natively with
most python installations.
In Colab, we'll be installing the Data Commons python and pandas APIs
through pip.
!pip install datacommons --upgrade --quiet
!pip install datacommons_pandas --upgrade --quiet
Imports
This is where we'll load all the libraries we need for this assignment.
# Data Commons Python and Pandas APIs
import datacommons
import datacommons_pandas
# For manipulating data
import numpy as np
import pandas as pd
# For implementing models and evaluation methods
from sklearn import linear_model, svm, tree
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
# For plotting
from matplotlib import pyplot as plt
from mlxtend.plotting import plot_decision_regions, category_scatter
Loading the Data
We'll query data using the Data Commons API, storing it in a Pandas data frame
city_dcids = datacommons.get_property_values(["CDC500_City"],
"member",
limit=500)["CDC500_City"]
# We've compiled a list of some nice Data Commons Statistical Variables
# to use as features for you
stat_vars_to_query = [
"Count_Person",
"Median_Income_Person",
"Count_Person_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"dc/e9gftzl2hm8h9", # Commute Time, this has a weird DCID
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Percent_Person_Obesity"
]
# Query Data Commons for the data and display the data
raw_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,stat_vars_to_query)
display(raw_features_df)
Explanation: <a href="https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2022 Google LLC.
SPDX-License-Identifier: Apache-2.0
Classification and Model Evaluation
So you've built a machine learning model, or perhaps multiple models... now what? How do you know if those models are any good? And if you have multiple candidate models, how should you choose which one to ultimately deploy?
The answer: model evaluation. Understanding how to evaluate your models is an essential skill not just to check how well your models perform, but also to diagnose issues and find areas for improvement. Most importantly, we need to understand whether or not we can trust our model's predictions.
Learning Objectives
In this lesson, we'll be covering:
Model Selection
Generalization and Overfitting
Train/Test Splits
Cross Validation
Statistical Evaulation Metrics:
How do we know a model is "good"?
Tradeoffs between evaluation metrics
Need extra help?
If you're new to Google Colab, take a look at this getting started tutorial.
To build more familiarity with the Data Commons API, check out these Data Commons Tutorials.
And for help with Pandas and manipulating data frames, take a look at the Pandas Documentation.
We'll be using the scikit-learn library for implementing our models today. Documentation can be found here.
As usual, if you have any other questions, please reach out to your course staff!
Part 0: Introduction and Setup
The obesity epidemic in the United States is a major public health issue. Obesity rates vary across the nation by geographic location. In this colab, we'll be exploring how obesity rates vary with different health or societal factors across US cities.
Our Data Science Question: Can we predict which cities have high (>30%) or low (<30%) obesity rates based on other health or lifestyle factors?
End of explanation
# Make Row Names More Readable
# --- First, we'll copy the dcids into their own column
# --- Next, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
# --- Finally, we'll set this column as the new index
df = raw_features_df.copy(deep=True)
df['DCID'] = df.index
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
df['City'] = pd.Series(city_name_dict)
df.set_index('City', inplace=True)
# Rename column "dc/e9gftzl2hm8h9" to "Commute_Time"
df.rename(columns={"dc/e9gftzl2hm8h9":"Commute_Time"}, inplace=True)
# Convert commute_time value
avg_commute_time = df["Commute_Time"]/df["Count_Person"]
df["Commute_Time"] = avg_commute_time
# Convert Count of No Health Insurance to Percentage
percent_noHealthInsurance = df["Count_Person_NoHealthInsurance"]/df["Count_Person"]
df["Percent_NoHealthInsurance"] = percent_noHealthInsurance
# Create labels based on the Obesity rate of each city
# --- Percent_Person_Obesity < 30 will be Label 0
# --- Percent_Person_Obesity >= 30 will be label 1
df["Label"] = df['Percent_Person_Obesity'] >= 30.0
df["Label"] = df["Label"].astype(int)
# Display results
display(df)
Explanation: We've successfully loaded our data, but there are still a couple preprocessing steps to go through first. Specifically, we're going to:
Change the row labels from dcids to names for readability.
Change the column name "dc/e9gftzl2hm8h9" to the more human readable "Commute_Time"
The raw commute time values from Data Commons shows the total amount of minutes spent for everyone in the city. Let's instead look at the average commute time for a single person, which we'll get by dividing the raw commute time (Commute_Time) by population size (Count_Person)
Similarly, we'll get a Percent_NoHealthInsurance by dividing the count of people without health insurance (Count_Person_NoHealthInsurance) by population size.
To perform classification, we need to convert our obesity rate data into labels. In this lesson, we'll look at binary classification, and will split our cities into "Low obesity rate" (label 0, obesity% < 30%) and "High obesity rate" (label 1, obesity% >= 30%) categories.
End of explanation
# For ease of visualization, we'll focus on just a few cities
subset_city_dcids = ["geoId/0667000", # San Francisco, CA
"geoId/3651000", # NYC, NY
"geoId/1304000", # Atlanta, GA
"geoId/2404000", # Baltimore, MD
"geoId/3050200", # Missoula, MT
"geoId/4835000", # Houston, TX
"geoId/2622000", # Detroit, MI
"geoId/5363000", # Seattle, WA
"geoId/2938000", # Kansas City, MO
"geoId/4752006" # Nashville, TN
]
# Create a subset data frame with just those cities
subset_df = df.loc[df['DCID'].isin(subset_city_dcids)]
# We'll just use 2 features for ease of visualization
X = subset_df[["Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours"]]
Y = subset_df[['Label']]
# Visualize the data
colors = ['#1f77b4', '#ff7f0e']
markers = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
for i in range(X.shape[0]):
ax.scatter(X["Percent_Person_PhysicalInactivity"][i],
X["Percent_Person_SleepLessThan7Hours"][i],
c=colors[Y['Label'][i]],
marker=markers[Y['Label'][i]],
)
ax.legend([0, 1])
plt.show()
Explanation: Now that we have our features and labels set, it's time to start modeling!
1) Model Selection
The results of our models are only good if our models are correct in the first place. "Good" here can mean different things depending on your application -- we'll talk more about that later in this assignment.
What's important to know for now is that the conclusions we draw are subject to the assumptions and limitations of our underlying model. Thus, making sure we choose the right models to analyze is important! But how does one choose the right model?
1.1) Building Intuition -- Which model do you think is best?
To build some intuition, let's start off with a simple example. We'll use a subset of our data. For ease of visualization, we'll start with just two features, and just 10 cities.
End of explanation
# Classifier 1
classifier1 = svm.SVC()
classifier1.fit(X, Y["Label"])
fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(),
Y["Label"].to_numpy(),
clf=classifier1,
legend=2)
plt.show()
print('Accuracy of this classifier is:', classifier1.score(X,Y["Label"]))
# Classifier 2
classifier2 = tree.DecisionTreeClassifier(random_state=0)
classifier2.fit(X, Y["Label"])
fig, ax = plt.subplots()
ax.set_title('Classifier 2')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(),
Y["Label"].to_numpy(),
clf=classifier2,
legend=2)
plt.show()
print('Accuracy of this classifier is:', classifier2.score(X,Y["Label"]))
Explanation: The following blocks of code will generate 2 candidate classifiers, for labeling the datapoints as either label 0 (high obesity rate), or label 1 (high obesity rate).
The code will also output an accuracy score, which for this section is defined as:
$\text{Accuracy} = \frac{\text{# correctly labeled}}{\text{# total datapoints}}$
End of explanation
# Original Data
X_full = df[["Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours"]]
Y_full = df[['Label']]
# Visualize the data
cCycle = ['#1f77b4', '#ff7f0e']
mCycle = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
for i in range(X_full.shape[0]):
ax.scatter(X_full["Percent_Person_PhysicalInactivity"][i],
X_full["Percent_Person_SleepLessThan7Hours"][i],
c=cCycle[Y_full['Label'][i]],
marker=mCycle[Y_full['Label'][i]],
)
ax.legend([0, 1])
plt.show()
# Classifier 1
fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X_full.to_numpy(),
Y_full["Label"].to_numpy(),
clf=classifier1,
legend=2)
plt.show()
print('Accuracy of this classifier is: %.2f' % classifier1.score(X_full,Y_full["Label"]))
# Classifier 2
fig, ax = plt.subplots()
ax.set_title('Classifier 2')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X_full.to_numpy(),
Y_full["Label"].to_numpy(),
clf=classifier2,
legend=2)
plt.show()
print('Accuracy of this classifier is: %.2f' % classifier2.score(X_full,Y_full["Label"]))
Explanation: 1A) Which model do you think is better, Classifier 1 or Classifier 2? Explain your reasoning.
1B) Classifier 2 has a higher accuracy than Classifier 1, but has a more complicated decision boundary. Which do you think would generalize best to new data?
1.2) The Importance of Generalizability
So, how did we do? Let's see what happens when we add back the rest of the cities (we'll keep using just the 2 features for ease of visualization.)
End of explanation
# Use all features that aren't obesity
X_large = df.dropna()[[
"Median_Income_Person",
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Commute_Time"
]]
Y_large = df.dropna()["Label"]
# Standardize the data
scaler = StandardScaler().fit(X_large)
X_large = scaler.transform(X_large)
# Create a model
large_model = linear_model.Perceptron()
large_model.fit(X_large, Y_large)
Explanation: 2A) In light of all the new datapoints, now which classifier do you think is better, Classifer 1 or Classifier 2? Explain your reasoning.
2B) In Question 1, Classifier 1 had a lower accuracy than Classifier 2. After adding more datapoints, we now see the reverse, with Classifier 1 having a higher accuracy than Classifier 2. What happened? Give an explanation (or at least your best guess) for why this is.
2) Evaluation Metrics
In question 1, we were able to visualize how well our models performed by plotting our data and decision boundaries. However, this was only possible because we limited ourselves to just 2 features. Unfortunately for us, humans are only good at visualizing up to 3 dimensions. As you increase the number of features and/or the complexity of your models, creating meaningful visualizations quickly becomes intractable.
Thus, we'll need other methods to measure how well our models perform. In this section, we'll cover some common strategies for evaluating models.
To start, let's finally fit a model to all of our available data (e.g. 500 cities and 8 features). Because the features have different scales, we'll also take care to standardize their values.
End of explanation
print('Accuracy of the large model is: %.2f' % large_model.score(X_large,Y_large))
Explanation: 2.1) Accuracy
2.1.1) Classification Accuracy
We've seen an example of an evaluation metric already -- accuracy! The accuracy score used in question 1 is more commonly known as classification accuracy, and is the most common metric used in classification problems.
As a refresher, the classification accuracy is the ratio of the number of correct predictions to the total number of datapoints.
classification accuracy:
$Accuracy = \frac{\text{# correctly labeled}}{\text{# total datapoints}}$
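Written out in code, the formula is just an element-wise comparison and a mean. A minimal sketch with made-up labels (hypothetical values, not our city data):

```python
import numpy as np

# Hypothetical predicted and true labels for 10 datapoints
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y_true = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

# Classification accuracy = (# correctly labeled) / (# total datapoints)
accuracy = np.mean(y_pred == y_true)
print(accuracy)  # 0.7 -- 7 of the 10 predictions match
```

This is the same number a scikit-learn classifier reports through its `.score()` method.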
Note that sometimes the classification accuracy can be misleading! Consider the following scenario:
There are two classes, A and B. We have 100 data points in our dataset. Of these 100 data points, 99 points are labeled class A, while only 1 of the data points are labeled class B.
2.1A) Consider a model that always predicts class A. What is the accuracy of this always-A model?
2.1B) How well do you expect the always-A model to perform on new, previously unseen data? Assume the new data follows the same distribution as the original 100 data points.
2.1C) Run the following code block to calculate the classification accuracy of our large model. Is the accuracy higher or lower than you expected?
End of explanation
'''
Try a variety of different splits by changing the test_size variable, which
represents the ratio of points to use in the test set.
For example, for a 75% Training, 25% Test split, use test_size=0.25
'''
test_size = 0.25 # Change me! Enter a value between 0 and 1
print(f'{(1 - test_size) * 100:.0f}% Training, {test_size * 100:.0f}% Test Split')
# Randomly split data into Train and Test Sets
x_train, x_test, y_train, y_test = train_test_split(X_large, Y_large, test_size=test_size)
# Fit a model on the training set
large_model.fit(x_train, y_train)
print('The TRAINING accuracy is: %.2f' % large_model.score(x_train, y_train))
# Evaluate on the test Set
print('The TEST accuracy is: %.2f' % large_model.score(x_test, y_test))
Explanation: 2.2) Train/Test Splits
The ability of a model to perform well on new, previously unseen data (drawn from the same distribution as the data used to create the model) is called Generalization. For most applications, we prefer models that generalize well over those that don't.
One way to check the generalizability of a model is to perform an analysis similar to what we did in question 1. We'll take our data and randomly split it into two subsets: a training set that we'll use to build our model, and a test set, which we'll hold out until the model is complete and then use to simulate new, previously unseen data and evaluate how well our model generalizes.
2.2.1) Choosing Split Sizes
But what percentage of our datapoints should go into our training and test sets respectively? There are no hard and fast rules for this; the right split often depends on the application and how much data we have. The next few questions explore the key tradeoffs:
2.2A) Consider a scenario with 5 data points in the training set and 95 data points in the test set. How accurate of a model do you think we're likely to train?
2.2B) Does your answer to 2.2A change if we have 500 training and 9500 test points instead?
2.2C) Consider a scenario with 95 data points in the training set and 5 data points in the test set. Is the test accuracy still a good measure of generalizability?
2.2D) Does your answer to 2.2C change if we have 9500 training and 500 test points instead?
2.2.2) Try for yourself!
2.2E) Play around with a couple values of test_size in the code box below. Find a split ratio that seems to work well, and report what that ratio is.
End of explanation
'''
Set the number of folds by changing k.
'''
k = 5 # Enter an integer >=2. Number of folds.
print(f'Test accuracies for {k} splits:')
scores = cross_val_score(large_model, X_large, Y_large, cv=k)
for i in range(k):
print('\tFold %d: %.2f' % (i+1, scores[i]))
print('Average score across all folds: %.2f' % np.mean(scores))
Explanation: 2.2.3) Training vs. Test Accuracy
As you may have noticed from 2.2.2, we can calculate two different accuracies after performing a train/test split: a training accuracy based on how well the model performs on the data it was trained on, and a test accuracy based on how well the model performs on held out data. Typically, we select models based on test accuracy. After all, a model's performance on new data after being deployed is usually more important than how well that model performed on the training data.
So why measure training accuracy at all? It turns out training accuracy is often useful for diagnosing some common issues with models.
For example, consider the following scenario:
After performing a train/test split, a model is found to have 100% training accuracy, but only 33% test accuracy.
2.2F) What's going on with the model in the scenario? Come up with a hypothetical setup that could result in these train and test accuracies.
Hint: This situation is called overfitting. If you're stuck, feel free to look it up!
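One way to reproduce these symptoms is a model that simply memorizes its training data. A toy sketch (synthetic data, not our city dataset) using a hand-rolled 1-nearest-neighbor classifier on purely random labels:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))        # random features
y = rng.integers(0, 2, size=200)     # random labels -- there is no real pattern to learn

X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

def predict_1nn(X_ref, y_ref, X_query):
    # Label each query point with the label of its nearest reference point
    dists = ((X_query[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=2)
    return y_ref[dists.argmin(axis=1)]

train_acc = np.mean(predict_1nn(X_train, y_train, X_train) == y_train)
test_acc = np.mean(predict_1nn(X_train, y_train, X_test) == y_test)
print(train_acc)  # 1.0 -- every training point is its own nearest neighbor
print(test_acc)   # roughly 0.5 -- no better than coin-flipping on new data
```

Perfect training accuracy paired with chance-level test accuracy is the classic signature of a model that memorized rather than generalized.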
2.3) Cross-Validation
If you haven't already, run the code box in 2.2.2 multiple times without changing the test_size variable. Notice how the accuracies can be different between runs?
The problem is that each time we randomly select a train/test split, sometimes we'll get lucky or unlucky with a particular distribution of training or test data. To borrow a term from statistics, a sample size of $n=1$ is too small! We can do better.
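This variability is easy to demonstrate with a toy experiment (synthetic labels, not the notebook's variables): even a completely fixed model gets a different measured accuracy on each random split.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=100)            # synthetic labels, roughly balanced

accuracies = []
for _ in range(10):
    test_idx = rng.permutation(len(y))[:25]  # a fresh random 25% test set
    predictions = np.zeros(25, dtype=int)    # a fixed "always predict 0" model
    accuracies.append(np.mean(predictions == y[test_idx]))

# The exact same model scores differently depending on which points were held out
print([f'{a:.2f}' for a in accuracies])
```

Averaging over many splits, rather than trusting a single one, is exactly the motivation for cross-validation.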
To get a better estimate of test accuracy, a common strategy is to use k-fold cross-validation. The general procedure is:
Split the data into $k$ groups.
Then for each group (called a fold):
hold that group out as the test set, and use the remaining groups as a training set.
Fit a new model on the training set and record the resulting accuracy on the test set.
Take the average of all test accuracies.
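The procedure above maps almost line-for-line onto code. A bare-bones sketch with synthetic labels and a trivial majority-class "model" standing in for a real one:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)     # synthetic labels standing in for real data

k = 5
fold_indices = np.array_split(rng.permutation(len(y)), k)   # 1. split into k groups

scores = []
for i in range(k):
    test_idx = fold_indices[i]                                # 2. hold one group out
    train_idx = np.concatenate(
        [fold for j, fold in enumerate(fold_indices) if j != i])

    majority = np.bincount(y[train_idx]).argmax()             # 3. "fit" on the rest
    scores.append(np.mean(y[test_idx] == majority))           #    and score the held-out fold

print(np.mean(scores))                                        # 4. average across folds
```

scikit-learn's `cross_val_score`, used in the code box below, wraps this same loop (with a real model in step 3) into a single call.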
A Note on Choosing k
The number of folds to use depends on your data. Setting the number of folds implicitly also sets your train/test split ratio. For example, using 10 folds implies 10 (90% train, 10% test) splits. Common choices are $k=10$ or $k=5$.
End of explanation
# Classifier A
x_A = df[["Percent_Person_PhysicalInactivity",
"Median_Income_Person"]]
y_A = df["Label"]
classifierA = svm.SVC()
classifierA.fit(x_A, y_A)
scores = cross_val_score(classifierA, x_A, y_A, cv=5)
print('Classifier A')
print('-------------')
print('Number of Data Points:', x_A.shape[0])
print('Number of Features:', x_A.shape[1])
print('Training Classification Accuracy: %.2f' % classifierA.score(x_A, y_A))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
# Classifier B
x_B = df.dropna()[[
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol"
]]
y_B = df.dropna()["Label"]
classifierB = tree.DecisionTreeClassifier()
classifierB.fit(x_B, y_B)
scores = cross_val_score(classifierB, x_B, y_B, cv=5)
print('Classifier B')
print('-------------')
print('Number of Data Points:', x_B.shape[0])
print('Number of Features:', x_B.shape[1])
print('Training Classification Accuracy: %.2f' % classifierB.score(x_B, y_B))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
# Classifier C
x_C = df.dropna()[[
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol"
]]
y_C = df.dropna()["Label"]
classifierC = linear_model.Perceptron()
classifierC.fit(x_C, y_C)
scores = cross_val_score(classifierC, x_C, y_C, cv=5)
print('Classifier C')
print('-------------')
print('Number of Data Points:', x_C.shape[0])
print('Number of Features:', x_C.shape[1])
print('Training Classification Accuracy: %.2f' % classifierC.score(x_C, y_C))
print('5-Fold Cross Validation Accuracy: %.2f' % np.mean(scores))
Explanation: 2.3A) Play around with the code box above to find a good value of $k$. What happens if $k$ is very large or very small?
2.3B) How does the average score across all folds change with $k$?
2.4) Other Metrics Worth Knowing
2.4.1) What about Regression? -- Mean Squared Error
Different models and different problems often use different accuracy metrics. You may have noticed classification accuracy doesn't make much sense for regression problems, where instead of predicting a label, the model predicts a numeric value. In regression, a common accuracy metric is the Mean Squared Error, or MSE.
$ MSE = \frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2$
It is a measure of the average difference between the predicted value and the actual value. The square ($^2$) can seem counterintuitive at first, but offers some nice mathematical properties.
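As a quick sanity check, the MSE formula is one line of NumPy (hypothetical predicted/actual values, not notebook data):

```python
import numpy as np

# Hypothetical regression predictions vs. actual values
predicted = np.array([3.0, 5.0, 2.5, 7.0])
actual = np.array([2.5, 5.0, 4.0, 8.0])

# Mean of the squared differences
mse = np.mean((predicted - actual) ** 2)
print(mse)  # 0.875
```

scikit-learn's `mean_squared_error` (imported at the top of this notebook) computes the same quantity.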
2.4.2) More Classification Metrics
Accuracy alone never tells the full story. There are a number of other metrics borrowed from statistics that are commonly used for classification models.
It's possible for a model to have a high accuracy, but score very low on some of the following metrics:
True Positives: The cases where we predicted positively, and the actual label was positive.
True Negatives: The cases where we predicted negatively, and the actual label was negative.
False Positives: The cases where we predicted positively, but the actual label was negative.
False Negatives: The cases where we predicted negatively, but the actual label was positive.
False Positive Rate: Corresponds to the proportion of negative datapoints incorrectly considered positive relative to all negative points.
$FPR = \frac{FP}{TN + FP}$
Sensitivity: (Also known as True Positive Rate) corresponds to the proportion of positive datapoints correctly considered as positive relative to all positive points.
$TPR = \frac{TP}{TP + FN}$
Specificity: (Also known as True Negative Rate) corresponds to the proportion of negative datapoints correctly considered negative relative to all negative points.
$TNR = \frac{TN}{TN + FP}$
Precision: Proportion of correctly labeled positive points relative to the number of positive predictions
$Precision = \frac{TP}{TP+FP}$
Recall: Proportion of correctly labeled positive points relative to all points that were actually positive.
$Recall = \frac{TP}{TP+FN}$
F1 score: Measure of a balance between precision and recall.
$F1 = 2 \cdot \frac{1}{\frac{1}{precision} + \frac{1}{recall}}$
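All of these metrics reduce to the four confusion-matrix counts. A small sketch with hypothetical labels:

```python
import numpy as np

# Hypothetical binary predictions and true labels
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)            # same as sensitivity / true positive rate
specificity = tn / (tn + fp)       # true negative rate
f1 = 2 / (1 / precision + 1 / recall)

print(precision, recall, specificity, f1)  # 0.75 0.75 0.75 0.75
```

Once the four counts are in hand, every metric in the list above is a one-line ratio.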
2.4.3) Tradeoffs Between Metrics
Oftentimes, our definition of a "good" model varies by situation or application case. In some cases, we might prefer a different tradeoff between accuracy, false positive rate, and false negative rate.
2.4A) Read through the following scenarios. For each case, state which metrics you would prioritize, and why.
Scenario 1:
Doctors have identified a new extremely rare, but also very deadly disease. Fortunately, they also discover a simple medication, that if taken early enough, can prevent the disease. The doctors plan to use a machine learning model to predict which of their patients are at high-risk for getting the disease. A positively labeled patient is high-risk, while a negatively labeled patient is low-risk.
Scenario 2:
Data Is Cool Inc. is a company that attracts many highly (and equally) qualified applicants to its job posting. They are overwhelmed with the number of applications received, so the company implements a machine learning model to sort all the incoming resumes. A positively labeled resume gets passed to a recruiter for a very thorough, but time-costly review. Negatively labeled resumes are held for future job openings.
3) Tying It All Together -- Choosing A Model to Deploy
Now that we've seen many different evaluation metrics, let's put what we've learned into practice!
One of the most common problems you'll encounter as a data scientist is to decide between a set of candidate models.
Each of the following code boxes below generates a candidate classifier for predicting high vs low obesity rates in cities. The models can differ in different ways: number of features, learning algorithm used, number of datapoints, etc.
End of explanation
your_local_dcid = "geoId/0649670" # Replace with your own!
# Get your local data from data commons
local_data = datacommons_pandas.build_multivariate_dataframe(your_local_dcid,stat_vars_to_query)
# Cleaning and Preprocessing
local_data['DCID'] = local_data.index
city_name_dict = datacommons.get_property_values([your_local_dcid], 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
local_data['City'] = pd.Series(city_name_dict)
local_data.set_index('City', inplace=True)
local_data.rename(columns={"dc/e9gftzl2hm8h9":"Commute_Time"}, inplace=True)
avg_commute_time = local_data["Commute_Time"]/local_data["Count_Person"]
local_data["Commute_Time"] = avg_commute_time
percent_noHealthInsurance = local_data["Count_Person_NoHealthInsurance"]/local_data["Count_Person"]
local_data["Percent_NoHealthInsurance"] = percent_noHealthInsurance
local_data["Label"] = local_data['Percent_Person_Obesity'] >= 30.0
local_data["Label"] = local_data["Label"].astype(int)
# Build data to feed into model
x_local = local_data[[
"Median_Income_Person",
"Percent_NoHealthInsurance",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Commute_Time"
]]
x_local = scaler.transform(x_local)
y_local = local_data["Label"]
# Make Prediction
prediction = large_model.predict(x_local)
# Report Results
print(f'Prediction for {local_data.index[0]}:')
print(f'\tThe predicted label was {prediction[0]}')
print(f'\tThe actual label was {y_local[0]}')
Explanation: 3A) Run the code boxes above and select which model you would choose to deploy. Justify your answer.
3B) Consider a new Classifier D. Its results look like this:
Number of Data Points: 5,000 \
Number of Features: 10,000 \
Training Classification Accuracy: 98% \
5-Fold Cross Validation Accuracy: 95%.
Would you deploy classifier D? Name one advantage and one disadvantage of such a model.
4) Extension: What about YOUR city?
Now that we've got a model trained up, let's play with it!
Use the Data Commons Place Explorer to find the DCID of a town or city local to you.
Use the code box below to add your local town or city's data, and run the model that data.
Note: Data may not be available for all locations. If you encounter errors, please try a different location!
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
Step1: The setup
First read in the data. We use the metar reader because it simplifies a lot of tasks,
like dealing with separating text and assembling a pandas dataframe
https
Step2: This sample data has way too many stations to plot all of them. The number
of stations plotted will be reduced using reduce_point_density.
Step3: The payoff | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from metpy.calc import reduce_point_density
from metpy.cbook import get_test_data
from metpy.io import metar
from metpy.plots import add_metpy_logo, current_weather, sky_cover, StationPlot
Explanation: Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
End of explanation
data = metar.parse_metar_file(get_test_data('metar_20190701_1200.txt', as_file_obj=False))
# Drop rows with missing winds
data = data.dropna(how='any', subset=['wind_direction', 'wind_speed'])
Explanation: The setup
First read in the data. We use the metar reader because it simplifies a lot of tasks,
like dealing with separating text and assembling a pandas dataframe
https://thredds-test.unidata.ucar.edu/thredds/catalog/noaaport/text/metar/catalog.html
End of explanation
# Set up the map projection
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
# Use the Cartopy map projection to transform station locations to the map and
# then refine the number of stations plotted by setting a 300km radius
point_locs = proj.transform_points(ccrs.PlateCarree(), data['longitude'].values,
data['latitude'].values)
data = data[reduce_point_density(point_locs, 300000.)]
Explanation: This sample data has way too many stations to plot all of them. The number
of stations plotted will be reduced using reduce_point_density.
End of explanation
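The idea behind `reduce_point_density` can be sketched in plain numpy: greedily keep a point only if it is far enough from every point already kept. This is a simplified illustration of the concept, not MetPy's actual implementation (which, among other things, also supports station priorities):

```python
import numpy as np

def reduce_density_sketch(points, radius):
    """Greedily keep a point only if it is at least `radius` away from
    every previously kept point (the idea behind reduce_point_density;
    not MetPy's exact algorithm)."""
    kept = []
    mask = []
    for x, y in points:
        if all(np.hypot(x - kx, y - ky) >= radius for kx, ky in kept):
            kept.append((x, y))
            mask.append(True)
        else:
            mask.append(False)
    return np.array(mask)

pts = [(0, 0), (1, 0), (5, 0), (5.5, 0)]
print(reduce_density_sketch(pts, radius=2.0))  # [ True False  True False]
```

In the station-plot code the same kind of boolean mask is used to index the dataframe, keeping only stations roughly 300 km apart.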
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering.
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection.
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1100, 300, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable.
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'].values, data['latitude'].values,
clip_on=True, transform=ccrs.PlateCarree(), fontsize=12)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point. Each one uses a different color.
stationplot.plot_parameter('NW', data['air_temperature'].values, color='red')
stationplot.plot_parameter('SW', data['dew_point_temperature'].values,
color='darkgreen')
# A more complex example uses a custom formatter to control how the sea-level pressure
# values are plotted. This uses the standard trailing 3-digits of the pressure value
# in tenths of millibars.
stationplot.plot_parameter('NE', data['air_pressure_at_sea_level'].values,
formatter=lambda v: format(10 * v, '.0f')[-3:])
# Plot the cloud cover symbols in the center location. This uses the codes made above and
# uses the `sky_cover` mapper to convert these values to font codes for the
# weather symbol font.
stationplot.plot_symbol('C', data['cloud_coverage'].values, sky_cover)
# Same this time, but plot current weather to the left of center, using the
# `current_weather` mapper to convert symbols to the right glyphs.
stationplot.plot_symbol('W', data['current_wx1_symbol'].values, current_weather)
# Add wind barbs
stationplot.plot_barb(data['eastward_wind'].values, data['northward_wind'].values)
# Also plot the actual text of the station id. Instead of cardinal directions,
# plot further out by specifying a location of 2 increments in x and 0 in y.
stationplot.plot_text((2, 0), data['station_id'].values)
plt.show()
Explanation: The payoff
End of explanation |
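The sea-level pressure formatter used in the plot code above encodes a pressure as the trailing three digits of its value in tenths of millibars, which is the usual station-plot convention. A quick check of that arithmetic with made-up pressures:

```python
# Same formatter as passed to plot_parameter('NE', ...) above.
slp_format = lambda v: format(10 * v, '.0f')[-3:]

for pressure_hpa in (1014.3, 998.7, 1025.0):
    print(pressure_hpa, '->', slp_format(pressure_hpa))
# 1014.3 -> 143
# 998.7 -> 987
# 1025.0 -> 250
```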
9,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example for ERA5 weather data download
This example shows you how to download ERA5 weather data from the Climate Data Store (CDS) and store it locally. Furthermore, it shows how to convert the weather data to the format needed by the pvlib and windpowerlib.
In order to download ERA5 weather data you need an account at the CDS.
Furthermore, you need to install the cdsapi package. See here for installation details.
When downloading the data using the API your request gets queued and may take a while to be completed. All actual calls of the data download are therefore commented out to avoid unintended downloads. Instead, an example netcdf file is provided.
Download data for single coordinate
Download data for a region
Convert data into pvlib and windpowerlib format
Step1: Download data for single coordinate <a class="anchor" id="single_loc"></a>
To download data for a single location you have to specify latitude and longitude of the desired location. Data will be retrieved for the nearest weather data point to that location.
Step2: Besides a location you have to specify a time period for which you would like to download the data as well as the weather variables you need. The feedinlib provides predefined sets of variables that are needed to use the pvlib and windpowerlib. These can be applied by setting the variable parameter to "pvlib" or "windpowerlib", as shown below. If you want to download data for both libraries you can set variable to "feedinlib".
Concerning the start and end date, keep in mind that all timestamps in the feedinlib are in UTC. So if you later on want to convert the data to a different time zone, the data may not cover the whole period you intended to download. To avoid this set your start date to one day before the start of your required period if you are East of the zero meridian or your end date to one day after your required period ends if you are West of the zero meridian.
Step3: If you want to store the downloaded data, which is recommended as download may take a while, you may provide a filename (including path) to save data to.
Step4: Now we can retrieve the data
Step5: ```python
get pvlib data for specified area
ds_berlin = era5.get_era5_data_from_datespan_and_position(
variable=variable,
start_date=start_date, end_date=end_date,
latitude=latitude, longitude=longitude,
target_file=target_file)
```
bash
2020-01-12 22:55:35,301 INFO Download rate 1.6M/s
Step6: Let's first plot the downloaded weather data points on a map.
Step7: Now let's convert the weather data to the pvlib format.
With the area parameter you can specify whether you want to retrieve weather dataframes for a single location, a region within the downloaded region or the whole downloaded region.
In case area is not a single location, the index of the resulting dataframe will be a multiindex with levels (time, latitude, longitude). Be aware that in order to use it for pvlib or windpowerlib calculations you need to select a single location.
Step8: The conversion to the windpowerlib format is analogous to the pvlib conversion.
Step9: Furthermore, it is possible to specify a start and end date to retrieve data for. They must be provided as something that can be converted to a timestamp, e.g. '2013-07-02'.
Step10: The following briefly shows how to use the weather data for feed-in calculations and mainly serves as a test whether the conversion works correctly. More detailed explanations on feed-in calculations using the feedinlib can be found in the notebooks run_pvlib_model.ipynb and run_windpowerlib_turbine_model.ipynb. | Python Code:
from feedinlib import era5
Explanation: Example for ERA5 weather data download
This example shows you how to download ERA5 weather data from the Climate Data Store (CDS) and store it locally. Furthermore, it shows how to convert the weather data to the format needed by the pvlib and windpowerlib.
In order to download ERA5 weather data you need an account at the CDS.
Furthermore, you need to install the cdsapi package. See here for installation details.
When downloading the data using the API your request gets queued and may take a while to be completed. All actual calls of the data download are therefore commented out to avoid unintended downloads. Instead, an example netcdf file is provided.
Download data for single coordinate
Download data for a region
Convert data into pvlib and windpowerlib format
End of explanation
latitude = 52.47
longitude = 13.30
Explanation: Download data for single coordinate <a class="anchor" id="single_loc"></a>
To download data for a single location you have to specify latitude and longitude of the desired location. Data will be retrieved for the nearest weather data point to that location.
End of explanation
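ERA5 single-level fields are published on a regular 0.25 degree grid, so "nearest weather data point" amounts to snapping the requested coordinate to the closest grid node. A simplified sketch of that idea (not the CDS server's actual logic):

```python
def nearest_grid_point(lat, lon, step=0.25):
    """Snap a coordinate to the nearest node of a regular `step`-degree grid
    (ERA5 single-level data uses a 0.25 degree grid)."""
    return round(lat / step) * step, round(lon / step) * step

print(nearest_grid_point(52.47, 13.30))  # (52.5, 13.25)
```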
# set start and end date (end date will be included
# in the time period for which data is downloaded)
start_date, end_date = '2017-01-01', '2017-12-31'
# set variable set to download
variable = "pvlib"
Explanation: Besides a location you have to specify a time period for which you would like to download the data as well as the weather variables you need. The feedinlib provides predefined sets of variables that are needed to use the pvlib and windpowerlib. These can be applied by setting the variable parameter to "pvlib" or "windpowerlib", as shown below. If you want to download data for both libraries you can set variable to "feedinlib".
Concerning the start and end date, keep in mind that all timestamps in the feedinlib are in UTC. So if you later on want to convert the data to a different time zone, the data may not cover the whole period you intended to download. To avoid this set your start date to one day before the start of your required period if you are East of the zero meridian or your end date to one day after your required period ends if you are West of the zero meridian.
End of explanation
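The UTC caveat above is easy to see with the standard library: converting the first downloaded hour of 2017 (UTC) to a zone East of the zero meridian shows that local midnight of Jan 1 is not covered, hence the advice to pad the start date. The Berlin offset here is a hard-coded winter value, used for illustration only:

```python
from datetime import datetime, timedelta, timezone

# First downloaded hour if you request 2017-01-01 through 2017-12-31 in UTC.
start_utc = datetime(2017, 1, 1, 0, 0, tzinfo=timezone.utc)
berlin_winter = timezone(timedelta(hours=1))  # CET as a fixed offset, example only

print(start_utc.astimezone(berlin_winter).isoformat())  # 2017-01-01T01:00:00+01:00
# Local midnight (2017-01-01 00:00 CET) maps to 2016-12-31 23:00 UTC, which is
# before the downloaded range starts -- so pad the requested start by one day:
padded_start = start_utc - timedelta(days=1)
print(padded_start.date())  # 2016-12-31
```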
target_file = 'ERA5_pvlib_2017.nc'
Explanation: If you want to store the downloaded data, which is recommended as download may take a while, you may provide a filename (including path) to save data to.
End of explanation
latitude = [52.3, 52.7] # [latitude south, latitude north]
longitude = [13.1, 13.6] # [longitude west, longitude east]
target_file = 'ERA5_example_data.nc'
Explanation: Now we can retrieve the data:
```python
get windpowerlib data for specified location
ds = era5.get_era5_data_from_datespan_and_position(
variable=variable,
start_date=start_date, end_date=end_date,
latitude=latitude, longitude=longitude,
target_file=target_file)
```
bash
2020-01-12 20:53:56,465 INFO Welcome to the CDS
2020-01-12 20:53:56,469 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/reanalysis-era5-single-levels
2020-01-12 20:53:57,023 INFO Request is queued
2020-01-12 20:53:58,085 INFO Request is running
2020-01-12 21:48:24,341 INFO Request is completed
2020-01-12 21:48:24,344 INFO Downloading request for 5 variables to ERA5_pvlib_2017.nc
2020-01-12 21:48:24,346 INFO Downloading http://136.156.132.153/cache-compute-0002/cache/data7/adaptor.mars.internal-1578858837.3774962-24514-9-8081d664-0a1e-48b9-951c-bc9b8e2caa44.nc to ERA5_pvlib_2017.nc (121.9K)
2020-01-12 21:48:24,653 INFO Download rate 400.6K/s
Download data for a region<a class="anchor" id="region"></a>
When downloading weather data for a region, instead of providing a single value for each of latitude and longitude you have to provide them as lists in the following form:
End of explanation
era5_netcdf_filename = 'ERA5_example_data.nc'
# reimport downloaded data
import xarray as xr
ds = xr.open_dataset(era5_netcdf_filename)
ds
Explanation: ```python
get pvlib data for specified area
ds_berlin = era5.get_era5_data_from_datespan_and_position(
variable=variable,
start_date=start_date, end_date=end_date,
latitude=latitude, longitude=longitude,
target_file=target_file)
```
bash
2020-01-12 22:55:35,301 INFO Download rate 1.6M/s
2020-01-12 22:03:08,085 INFO Welcome to the CDS
2020-01-12 22:03:08,086 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/reanalysis-era5-single-levels
2020-01-12 22:03:08,756 INFO Request is queued
2020-01-12 22:03:09,809 INFO Request is running
2020-01-12 22:55:34,863 INFO Request is completed
2020-01-12 22:55:34,864 INFO Downloading request for 5 variables to ERA5_example_data.nc
2020-01-12 22:55:34,864 INFO Downloading http://136.156.132.235/cache-compute-0006/cache/data5/adaptor.mars.internal-1578862989.052999-21409-23-831562a8-e0b2-4b19-8463-e14931a3f630.nc to ERA5_example_data.nc (720.7K)
If you want weather data for the whole world, you may leave latitude and longitude unspecified.
```python
get feedinlib data (includes pvlib and windpowerlib data)
for the whole world
ds = era5.get_era5_data_from_datespan_and_position(
variable="feedinlib",
start_date=start_date, end_date=end_date,
target_file=target_file)
```
Convert data into pvlib and windpowerlib format<a class="anchor" id="convert"></a>
In order to use the weather data for your feed-in calculations using the pvlib and windpowerlib it has to be converted into the required format. This section shows you how this is done.
End of explanation
# get all weather data points in dataset
from shapely.geometry import Point
import geopandas as gpd
points = []
for x in ds.longitude:
for y in ds.latitude:
points.append(Point(x, y))
points_df = gpd.GeoDataFrame({'geometry': points})
# read provided shape file
region_shape = gpd.read_file('berlin_shape.geojson')
# plot weather data points on map
base = region_shape.plot(color='white', edgecolor='black')
points_df.plot(ax=base, marker='o', color='red', markersize=5);
Explanation: Let's first plot the downloaded weather data points on a map.
End of explanation
# for single location (as list of longitude and latitude)
single_location = [13.2, 52.4]
pvlib_df = era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=single_location)
pvlib_df.head()
# for whole region
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib').head()
# specify rectangular area
area = [(13.2, 13.7), (52.4, 52.8)]
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=area).head()
# specify area giving a Polygon
from shapely.geometry import Polygon
lat_point_list = [52.3, 52.3, 52.65]
lon_point_list = [13.0, 13.4, 13.4]
area = Polygon(zip(lon_point_list, lat_point_list))
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=area).head()
# export to csv
pvlib_df.to_csv('pvlib_df_ERA5.csv')
Explanation: Now let's convert the weather data to the pvlib format.
With the area parameter you can specify whether you want to retrieve weather dataframes for a single location, a region within the downloaded region or the whole downloaded region.
In case area is not a single location, the index of the resulting dataframe will be a multiindex with levels (time, latitude, longitude). Be aware that in order to use it for pvlib or windpowerlib calculations you need to select a single location.
End of explanation
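Selecting a single grid point out of the (time, latitude, longitude) multiindex mentioned above can be done with a pandas cross-section. The numbers below are made up purely to show the mechanics:

```python
import pandas as pd

# Toy frame shaped like the multiindexed output described above.
times = pd.to_datetime(["2017-07-01 00:00", "2017-07-01 01:00"])
idx = pd.MultiIndex.from_product(
    [times, [52.5, 52.75], [13.25]],
    names=["time", "latitude", "longitude"])
df = pd.DataFrame({"wind_speed": [3.1, 2.9, 3.4, 3.0]}, index=idx)

# Keep only the grid point at latitude 52.5 / longitude 13.25; the result
# is indexed by time only, which is what the libraries expect.
single = df.xs((52.5, 13.25), level=["latitude", "longitude"])
print(single)
```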
# for single location
windpowerlib_df = era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='windpowerlib',
area=single_location)
windpowerlib_df.head()
Explanation: The conversion to the windpowerlib format is analogous to the pvlib conversion.
End of explanation
# get weather data in pvlib format for July
start = '2017-07-01'
end = '2017-07-31'
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=single_location,
start=start,
end=end).head()
Explanation: Furthermore, it is possible to specify a start and end date to retrieve data for. They must be provided as something that can be converted to a timestamp, e.g. '2013-07-02'.
End of explanation
from feedinlib import Photovoltaic
system_data = {
'module_name': 'Advent_Solar_Ventura_210___2008_',
'inverter_name': 'ABB__MICRO_0_25_I_OUTD_US_208__208V_',
'azimuth': 180,
'tilt': 30,
'albedo': 0.2}
pv_system = Photovoltaic(**system_data)
feedin = pv_system.feedin(
weather=pvlib_df,
location=(52.4, 13.2))
feedin.plot()
from feedinlib import WindPowerPlant
turbine_data = {
'turbine_type': 'E-101/3050',
'hub_height': 135
}
wind_turbine = WindPowerPlant(**turbine_data)
feedin = wind_turbine.feedin(
weather=windpowerlib_df)
feedin.plot()
Explanation: The following briefly shows how to use the weather data for feed-in calculations and mainly serves as a test whether the conversion works correctly. More detailed explanations on feed-in calculations using the feedinlib can be found in the notebooks run_pvlib_model.ipynb and run_windpowerlib_turbine_model.ipynb.
End of explanation |
9,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datapath Example 3
This notebook gives an example of how to build relatively simple data paths.
It assumes that you understand the concepts presented in the example 2
notebook.
Example Data Model
The examples require that you understand a little bit about the example
catalog data model, which is based on the FaceBase project.
Key tables
'dataset' : represents a unit of data usually a 'study' or 'collection'
'experiment' : a bioassay (typically RNA-seq or ChIP-seq assays)
'replicate' : a record of a replicate (bio or technical) related to an experiment
Step1: Building a DataPath
Build a data path by linking together tables that are related. To make things a little easier we will use python variables to reference the tables. This is not necessary, but simplifies the examples.
Step2: Initiate a path from a table object
Like the example 2 notebook, begin by initiating a path instance from a Table object. This path will be "rooted" at the table it was initiated from, in this case, the dataset table. DataPaths have URIs that identify the resource in the catalog.
Step3: Link other related tables to the path
In the catalog's model, tables are related by foreign key references. Related tables may be linked together in a DataPath. Here we link the following tables based on their foreign key references (i.e., dataset <- experiment <- replicate).
Step4: Path context
By default, DataPath objects return entities for the last linked entity set in the path. The path from the prior step ended in replicate which is therefore the context for this path.
Step5: Get entities for the current context
The following DataPath will fetch replicate entities not datasets.
Step6: Get entities for a different path context
Let's say we wanted to fetch the entities for the dataset table rather than the current context which is the replicate table. We can do that by referencing the table as a property of the path object. Note that these are known as "table instances" rather than tables when used within a path expression. We will discuss table instances later in this notebook.
Step7: From that table instance we can fetch entities, add a filter specific to that table instance, or even link another table. Here we will get the dataset entities from the path.
Step8: Notice that we fetched fewer entities this time which is the number of dataset entities rather than the replicate entities that we previously fetched.
Filtering a DataPath
Building off of the path, a filter can be added. Like fetching entities, linking and filtering are performed relative to the current context. In this filter, the replicate's attributes are referenced in the expression.
Currently, binary comparisons and logical operators are supported. Unary operators have not yet been implemented. In binary comparisons, the left operand must be an attribute (column name) while the right operand must be a literal value.
Step9: Table Instances
So far we have discussed base tables. A base table is a representation of the table as it is stored in the ERMrest catalog. A table instance is a usage or reference of a table within the context of a data path. As demonstrated above, we may link together multiple tables and thus create multiple table instances within a data path.
For example, in path.link(dataset).link(experiment).link(replicate) the table instance experiment is no longer the same as the original base table experiment because within the context of this data path the experiment entities must satisfy the constraints of the data path. The experiment entities must reference a dataset entity, and they must be referenced by a replicate entity. Thus within this path, the entity set for experiment may be quite different than the entity set for the base table on its own.
Table instances are bound to the path
Whenever you initiate a data path (e.g., table.path) or link a table to a path (e.g., path.link(table)) a table instance is created and bound to the DataPath object (e.g., path). These table instances can be referenced via the DataPath's table_instances container or directly as a property of the DataPath object itself.
Step10: Aliases for table instances
Whenever a table instance is created and bound to a path, it is given a name. If no name is specified for it, it will be named after the name of its base table. For example, a table named "My Table" will result in a table instance also named "My Table". Tables may appear more than once in a path (as table instances), and if the table name is taken, the instance will be given the "'base name' + number" (e.g., "My Table2").
You may wish to specify the name of your table instance. In conventional database terms, an alternate name is called an "alias". Here we give the dataset table instance an alias of 'D' though longer strings are also valid as long as they do not contain special characters in them.
Step11: You'll notice that in this path we added an additional instance of the dataset table from our catalog model. In addition, we linked it to the isa.replicate table. This was possible because in this model, there is a foriegn key reference from the base table replicate to the base table dataset. The entities for the table instance named dataset and the instance name D will likely consist of different entities because the constraints for each are different.
Selecting Attributes From Linked Entities
Returning to the initial example, if we want to include additional attributes
from other table instances in the path, we need to be able to reference the
table instances at any point in the path. First, we will build our original path.
Step12: Now let's fetch an entity set with attributes pulled from each of the table instances in the path.
Step13: Notice that the ResultSet also has a uri property. This URI may differ from the origin path URI because the attribute projection does not get appended to the path URI.
Step14: As usual, fetch(...) the entities from the catalog. | Python Code:
# Import deriva modules
from deriva.core import ErmrestCatalog, get_credential
# Connect with the deriva catalog
protocol = 'https'
hostname = 'www.facebase.org'
catalog_number = 1
# If you need to authenticate, use Deriva Auth agent and get the credential
credential = get_credential(hostname)
catalog = ErmrestCatalog(protocol, hostname, catalog_number, credential)
# Get the path builder interface for this catalog
pb = catalog.getPathBuilder()
Explanation: Datapath Example 3
This notebook gives an example of how to build relatively simple data paths.
It assumes that you understand the concepts presented in the example 2
notebook.
Example Data Model
The examples require that you understand a little bit about the example
catalog data model, which is based on the FaceBase project.
Key tables
'dataset' : represents a unit of data usually a 'study' or 'collection'
'experiment' : a bioassay (typically RNA-seq or ChIP-seq assays)
'replicate' : a record of a replicate (bio or technical) related to an experiment
Relationships
dataset <- experiment: A dataset may have one to many experiments. I.e., there
is a foreign key reference from experiment to dataset.
experiment <- replicate: An experiment may have one to many replicates. I.e., there is a
foreign key reference from replicate to experiment.
End of explanation
dataset = pb.isa.dataset
experiment = pb.isa.experiment
replicate = pb.isa.replicate
Explanation: Building a DataPath
Build a data path by linking together tables that are related. To make things a little easier we will use python variables to reference the tables. This is not necessary, but simplifies the examples.
End of explanation
path = dataset.path
print(path.uri)
Explanation: Initiate a path from a table object
Like the example 2 notebook, begin by initiating a path instance from a Table object. This path will be "rooted" at the table it was initiated from, in this case, the dataset table. DataPaths have URIs that identify the resource in the catalog.
End of explanation
path.link(experiment).link(replicate)
print(path.uri)
Explanation: Link other related tables to the path
In the catalog's model, tables are related by foreign key references. Related tables may be linked together in a DataPath. Here we link the following tables based on their foreign key references (i.e., dataset <- experiment <- replicate).
End of explanation
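Conceptually, each link(...) call appends another schema:table segment to the ERMrest path URI shown by print(path.uri). A toy model of that chaining behavior (not deriva-py's actual classes, which carry much more state):

```python
class ToyPath:
    """Toy model of how linking grows an ERMrest-style path: each link
    appends a schema:table segment."""
    def __init__(self, root):
        self.segments = [root]

    def link(self, table):
        self.segments.append(table)
        return self  # returning self is what allows chained .link() calls

    @property
    def uri(self):
        return "/".join(self.segments)

p = ToyPath("isa:dataset").link("isa:experiment").link("isa:replicate")
print(p.uri)  # isa:dataset/isa:experiment/isa:replicate
```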
path.context.name
Explanation: Path context
By default, DataPath objects return entities for the last linked entity set in the path. The path from the prior step ended in replicate which is therefore the context for this path.
End of explanation
entities = path.entities()
len(entities)
Explanation: Get entities for the current context
The following DataPath will fetch replicate entities not datasets.
End of explanation
path.table_instances['dataset']
# or
path.dataset
Explanation: Get entities for a different path context
Let's say we wanted to fetch the entities for the dataset table rather than the current context which is the replicate table. We can do that by referencing the table as a property of the path object. Note that these are known as "table instances" rather than tables when used within a path expression. We will discuss table instances later in this notebook.
End of explanation
entities = path.dataset.entities()
len(entities)
Explanation: From that table instance we can fetch entities, add a filter specific to that table instance, or even link another table. Here we will get the dataset entities from the path.
End of explanation
path.filter(replicate.bioreplicate_number == 1)
print(path.uri)
entities = path.entities()
len(entities)
Explanation: Notice that we fetched fewer entities this time which is the number of dataset entities rather than the replicate entities that we previously fetched.
Filtering a DataPath
Building off of the path, a filter can be added. Like fetching entities, linking and filtering are performed relative to the current context. In this filter, the replicate's attributes are referenced in the expression.
Currently, binary comparisons and logical operators are supported. Unary operators have not yet been implemented. In binary comparisons, the left operand must be an attribute (column name) while the right operand must be a literal value.
End of explanation
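The reason an expression like replicate.bioreplicate_number == 1 can be handed to filter(...) at all is operator overloading: comparisons on column objects build predicate strings instead of returning booleans. A simplified illustration of that pattern (not deriva-py's real filter classes, whose predicate syntax is richer):

```python
import urllib.parse

class Column:
    """Minimal stand-in for a datapath column: `==` builds an
    ERMrest-style predicate string instead of returning a boolean."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, literal):
        return f"{urllib.parse.quote(self.name)}={urllib.parse.quote(str(literal))}"

print(Column("bioreplicate_number") == 1)  # bioreplicate_number=1
```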
dataset_instance = path.table_instances['dataset']
# or
dataset_instance = path.dataset
Explanation: Table Instances
So far we have discussed base tables. A base table is a representation of the table as it is stored in the ERMrest catalog. A table instance is a usage or reference of a table within the context of a data path. As demonstrated above, we may link together multiple tables and thus create multiple table instances within a data path.
For example, in path.link(dataset).link(experiment).link(replicate) the table instance experiment is no longer the same as the original base table experiment because within the context of this data path the experiment entities must satisfy the constraints of the data path. The experiment entities must reference a dataset entity, and they must be referenced by a replicate entity. Thus within this path, the entity set for experiment may be quite different than the entity set for the base table on its own.
Table instances are bound to the path
Whenever you initiate a data path (e.g., table.path) or link a table to a path (e.g., path.link(table)) a table instance is created and bound to the DataPath object (e.g., path). These table instances can be referenced via the DataPath's table_instances container or directly as a property of the DataPath object itself.
End of explanation
path.link(dataset.alias('D'))
path.D.uri
Explanation: Aliases for table instances
Whenever a table instance is created and bound to a path, it is given a name. If no name is specified for it, it will be named after the name of its base table. For example, a table named "My Table" will result in a table instance also named "My Table". Tables may appear more than once in a path (as table instances), and if the table name is taken, the instance will be given the "'base name' + number" (e.g., "My Table2").
You may wish to specify the name of your table instance. In conventional database terms, an alternate name is called an "alias". Here we give the dataset table instance an alias of 'D' though longer strings are also valid as long as they do not contain special characters in them.
End of explanation
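The naming rule described above (reuse the base table name, then "'base name' + number" once taken) can be sketched as:

```python
def instance_name(base_name, taken):
    """Sketch of the instance-naming rule described in the text: use the
    base table name if free, otherwise append the first unused number >= 2.
    (deriva-py's actual bookkeeping may differ in details.)"""
    if base_name not in taken:
        return base_name
    n = 2
    while f"{base_name}{n}" in taken:
        n += 1
    return f"{base_name}{n}"

print(instance_name("My Table", {"My Table"}))  # My Table2
```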
path = dataset.path.link(experiment).link(replicate).filter(replicate.bioreplicate_number == 1)
print(path.uri)
Explanation: You'll notice that in this path we added an additional instance of the dataset table from our catalog model. In addition, we linked it to the isa.replicate table. This was possible because in this model, there is a foreign key reference from the base table replicate to the base table dataset. The entities for the table instance named dataset and the instance named D will likely consist of different entities because the constraints for each are different.
Selecting Attributes From Linked Entities
Returning to the initial example, if we want to include additional attributes
from other table instances in the path, we need to be able to reference the
table instances at any point in the path. First, we will build our original path.
End of explanation
results = path.attributes(path.dataset.accession,
path.experiment.experiment_type.alias('type_of_experiment'),
path.replicate.technical_replicate_number.alias('technical_replicate_num'))
print(results.uri)
Explanation: Now let's fetch an entity set with attributes pulled from each of the table instances in the path.
End of explanation
path.uri != results.uri
Explanation: Notice that the ResultSet also has a uri property. This URI may differ from the original path URI because the attribute projection does not get appended to the path URI.
End of explanation
results.fetch(limit=5)
for result in results:
print(result)
Explanation: As usual, fetch(...) the entities from the catalog.
End of explanation |
9,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
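The notebook only records which freezing-point equation a model uses. As a sketch of what such an equation looks like, here is the older EOS-80 / UNESCO (1983) fit; this is an illustration only, not the TEOS 2010 Gibbs-function formulation listed among the valid choices:

```python
def freezing_point(S, p=0.0):
    """Approximate seawater freezing point in deg C.

    EOS-80 / UNESCO (1983) fit: S is practical salinity, p is
    pressure in decibars. Illustrative only; TEOS 2010 uses a
    different (Gibbs-function based) formulation.
    """
    return (-0.0575 * S
            + 1.710523e-3 * S ** 1.5
            - 2.154996e-4 * S ** 2
            - 7.53e-4 * p)

print(round(freezing_point(35.0), 3))  # -1.922 deg C at the surface
```

At S = 35 and surface pressure this gives roughly -1.92 deg C, the familiar freezing point of standard seawater.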
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
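The specific heat (cpocean) and Boussinesq reference density (rhozero) recorded above typically enter diagnostics such as ocean heat content. A minimal sketch, assuming typical values of rho0 = 1026 kg/m3 and cp = 3992 J/(kg K) purely for illustration:

```python
def heat_content_per_area(temps_degC, dz_m, rho0=1026.0, cp=3992.0):
    """Column heat content per unit area (J m^-2), relative to 0 deg C.

    temps_degC and dz_m give layer-mean temperature and thickness;
    the rho0 and cp defaults are typical values, for illustration only.
    """
    return sum(rho0 * cp * t * dz for t, dz in zip(temps_degC, dz_m))

# two layers: 10 deg C over 50 m, 5 deg C over 100 m
q = heat_content_per_area([10.0, 5.0], [50.0, 100.0])  # about 4.1e9 J m^-2
```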
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
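Canonical resolutions are quoted either in km or in degrees; the two are related through the Earth's circumference. A rough conversion sketch, assuming a spherical Earth of mean radius 6371 km:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, illustrative constant

def degrees_to_km(res_deg, latitude_deg=0.0):
    """Approximate zonal grid spacing (km) for a resolution in degrees."""
    km_per_deg = 2.0 * math.pi * EARTH_RADIUS_KM / 360.0  # ~111.2 km/deg
    return res_deg * km_per_deg * math.cos(math.radians(latitude_deg))

print(round(degrees_to_km(0.25), 1))        # ~27.8 km at the equator
print(round(degrees_to_km(0.25, 60.0), 1))  # ~13.9 km at 60N
```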
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50 km (Equator) to 100 km, or 0.1-0.5 degrees, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
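For a regular global lat-lon grid the XY point count follows directly from the resolution; a quick estimate (real ocean grids such as ORCA are curvilinear and mask land, so treat this only as an order-of-magnitude check):

```python
def lonlat_gridpoints(res_deg):
    """XY point count of a regular global lat-lon grid (estimate only)."""
    nx = round(360.0 / res_deg)  # longitudes
    ny = round(180.0 / res_deg)  # latitudes
    return nx * ny

print(lonlat_gridpoints(0.25))  # 1440 * 720 = 1036800
```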
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
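Several of the schemes listed (e.g. "Leap-frog + Asselin filter") combine leapfrog time stepping with the Robert-Asselin filter that damps the leapfrog computational mode. A minimal sketch of one such step; the function signature and the gamma value are illustrative, not any particular model's API:

```python
def leapfrog_asselin(x_prev, x_curr, tendency, dt, gamma=0.1):
    """One leapfrog step with a Robert-Asselin time filter.

    x_next = x_prev + 2*dt*F(x_curr), then the current level is
    filtered: x_curr + gamma*(x_prev - 2*x_curr + x_next).
    Returns (filtered current value, new value).
    """
    x_next = x_prev + 2.0 * dt * tendency(x_curr)
    x_filtered = x_curr + gamma * (x_prev - 2.0 * x_curr + x_next)
    return x_filtered, x_next

# damp dx/dt = -x with dt = 0.1
filt, nxt = leapfrog_asselin(1.0, 0.9, lambda x: -x, 0.1)
```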
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
9,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IonQ ProjectQ Backend Example
This notebook will walk you through a basic example of using IonQ hardware to run ProjectQ circuits.
Setup
The only requirement to run ProjectQ circuits on IonQ hardware is an IonQ API token.
Once you have acquired a token, please try out the examples in this notebook!
Usage & Examples
NOTE
Step1: Example — Bell Pair
Notes about running circuits on IonQ backends
Circuit building and visualization should feel identical to building a circuit using any other backend with ProjectQ.
That said, there are a couple of things to note when running on IonQ backends
Step2: Run the bell pair circuit
Now, let's run our bell pair circuit on the simulator.
All that is left is to call the main engine's flush method
Step3: You can also use the built-in matplotlib support to plot the histogram of results
Step6: Example - Bernstein-Vazirani
For our second example, let's build a Bernstein-Vazirani circuit and run it on a real IonQ quantum computer.
Rather than manually building the BV circuit every time, we'll create a method that can build one for any oracle $s$, and any register size.
Step7: Now let's use that method to create a BV circuit to submit
Step8: Time to run it on an IonQ QPU!
Step9: Because QPU time is a limited resource, QPU jobs are handled in a queue and may take a while to complete. The IonQ backend accounts for this delay by providing basic attributes which may be used to tweak the behavior of the backend while it waits on job results | Python Code:
# NOTE: Optional! This ignores warnings emitted from ProjectQ imports.
import warnings
warnings.filterwarnings('ignore')
# Import ProjectQ and IonQBackend objects, then set up an engine
import projectq.setups.ionq
from projectq import MainEngine
from projectq.backends import IonQBackend
# REPLACE WITH YOUR API TOKEN
token = 'your api token'
device = 'ionq_simulator'
# Create an IonQBackend
backend = IonQBackend(
use_hardware=True,
token=token,
num_runs=200,
device=device,
)
# Make sure to get an engine_list from the ionq setup module
engine_list = projectq.setups.ionq.get_engine_list(
token=token,
device=device,
)
# Create a ProjectQ engine
engine = MainEngine(backend, engine_list)
Explanation: IonQ ProjectQ Backend Example
This notebook will walk you through a basic example of using IonQ hardware to run ProjectQ circuits.
Setup
The only requirement to run ProjectQ circuits on IonQ hardware is an IonQ API token.
Once you have acquired a token, please try out the examples in this notebook!
Usage & Examples
NOTE: The IonQBackend expects an API key to be supplied via the token keyword argument to its constructor. If no token is directly provided, the backend will prompt you for one.
The IonQBackend currently supports two device types:
* ionq_simulator: IonQ's simulator backend.
* ionq_qpu: IonQ's QPU backend.
To view the latest list of available devices, you can run the show_devices function in the projectq.backends._ionq._ionq_http_client module.
End of explanation
# Import gates to apply:
from projectq.ops import All, H, CNOT, Measure
# Allocate two qubits
circuit = engine.allocate_qureg(2)
qubit0, qubit1 = circuit
# add gates — here we're creating a simple bell pair
H | qubit0
CNOT | (qubit0, qubit1)
All(Measure) | circuit
Explanation: Example — Bell Pair
Notes about running circuits on IonQ backends
Circuit building and visualization should feel identical to building a circuit using any other backend with ProjectQ.
That said, there are a couple of things to note when running on IonQ backends:
IonQ backends do not allow arbitrary unitaries, mid-circuit resets or measurements, or multi-experiment jobs. In practice, this means using reset, initialize, u, u1, u2, u3, cu, cu1, cu2, or cu3 gates will throw an exception on submission, as will measuring mid-circuit and submitting jobs with multiple experiments.
While barrier is allowed for organizational and visualization purposes, the IonQ compiler does not see it as a compiler directive.
Now, let's make a simple Bell pair circuit:
End of explanation
# Flush the circuit, which will submit the circuit to IonQ's API for processing
engine.flush()
# If all went well, we can view results from the circuit execution
probabilities = engine.backend.get_probabilities(circuit)
print(probabilities)
Explanation: Run the bell pair circuit
Now, let's run our bell pair circuit on the simulator.
All that is left is to call the main engine's flush method:
End of explanation
# show a plot of result probabilities
import matplotlib.pyplot as plt
from projectq.libs.hist import histogram
# Show the histogram
histogram(engine.backend, circuit)
plt.show()
Explanation: You can also use the built-in matplotlib support to plot the histogram of results:
End of explanation
from projectq.ops import All, H, Z, CX, Measure
def oracle(qureg, input_size, s_int):
    """Apply the 'oracle'."""
s = ('{0:0' + str(input_size) + 'b}').format(s_int)
for bit in range(input_size):
if s[input_size - 1 - bit] == '1':
CX | (qureg[bit], qureg[input_size])
def run_bv_circuit(eng, s_int, input_size):
    """Build the Bernstein-Vazirani circuit.

    Args:
        eng (MainEngine): A ProjectQ engine instance with an IonQBackend.
        s_int (int): value of s, the secret bitstring, as an integer.
        input_size (int): size of the input register,
            i.e. the number of (qu)bits to use for the binary
            representation of s.
    """
# confirm the bitstring of S is what we think it should be
s = ('{0:0' + str(input_size) + 'b}').format(s_int)
print('s: ', s)
# We need a circuit with `input_size` qubits, plus one ancilla qubit
# Also need `input_size` classical bits to write the output to
circuit = eng.allocate_qureg(input_size + 1)
qubits = circuit[:-1]
output = circuit[input_size]
# put ancilla in state |-⟩
H | output
Z | output
# Apply Hadamard gates before querying the oracle
All(H) | qubits
# Apply the inner-product oracle
oracle(circuit, input_size, s_int)
# Apply Hadamard gates after querying the oracle
All(H) | qubits
# Measurement
All(Measure) | qubits
return qubits
Explanation: Example - Bernstein-Vazirani
For our second example, let's build a Bernstein-Vazirani circuit and run it on a real IonQ quantum computer.
Rather than manually building the BV circuit every time, we'll create a method that can build one for any oracle $s$, and any register size.
End of explanation
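As a plain-Python sanity check (not part of the ProjectQ example): the oracle computes the inner product s·x (mod 2). Classically each query reveals only a single bit of s, while the quantum circuit recovers all of s in one query.

```python
def inner_product_mod2(s_int, x_int):
    # Parity of the bitwise AND gives the mod-2 inner product s.x.
    return bin(s_int & x_int).count("1") % 2

s = 3  # 0b011, the secret value used in this example
assert inner_product_mod2(s, 0b001) == 1
assert inner_product_mod2(s, 0b011) == 0  # two matching bits -> parity 0
assert inner_product_mod2(s, 0b100) == 0
```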
# Run a BV circuit:
s_int = 3
input_size = 3
circuit = run_bv_circuit(engine, s_int, input_size)
engine.flush()
Explanation: Now let's use that method to create a BV circuit to submit:
End of explanation
# Create an IonQBackend set to use the 'ionq_qpu' device
device = 'ionq_qpu'
backend = IonQBackend(
use_hardware=True,
token=token,
num_runs=100,
device=device,
)
# Make sure to get an engine_list from the ionq setup module
engine_list = projectq.setups.ionq.get_engine_list(
token=token,
device=device,
)
# Create a ProjectQ engine
engine = MainEngine(backend, engine_list)
# Setup another BV circuit
circuit = run_bv_circuit(engine, s_int, input_size)
# Run the circuit!
engine.flush()
# Show the histogram
histogram(engine.backend, circuit)
plt.show()
Explanation: Time to run it on an IonQ QPU!
End of explanation
# Create an IonQ backend with custom job fetch/wait settings
backend = IonQBackend(
token=token,
device=device,
num_runs=100,
use_hardware=True,
# Number of times to check for results before giving up
num_retries=3000,
# The number of seconds to wait between attempts
interval=1,
)
Explanation: Because QPU time is a limited resource, QPU jobs are handled in a queue and may take a while to complete. The IonQ backend accounts for this delay by providing basic attributes which may be used to tweak the behavior of the backend while it waits on job results:
End of explanation |
9,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
Step1: The setup
First read in the data. We use numpy.loadtxt to read in the data and use a structured
numpy.dtype to allow different types for the various columns. This allows us to handle
the columns with string data.
Step2: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
Step3: Now that we have the data we want, we need to perform some conversions
Step4: Now all the data wrangling is finished, just need to set up plotting and go
Set up the map projection and set up a cartopy feature for state borders
Step5: The payoff | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as feat
import matplotlib.pyplot as plt
import numpy as np
from metpy.calc import get_wind_components
from metpy.cbook import get_test_data
from metpy.plots import StationPlot
from metpy.plots.wx_symbols import current_weather, sky_cover
from metpy.units import units
Explanation: Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
End of explanation
f = get_test_data('station_data.txt')
all_data = np.loadtxt(f, skiprows=1, delimiter=',',
usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),
('slp', 'f'), ('air_temperature', 'f'),
('cloud_fraction', 'f'), ('dewpoint', 'f'),
('weather', '16S'),
('wind_dir', 'f'), ('wind_speed', 'f')]))
Explanation: The setup
First read in the data. We use numpy.loadtxt to read in the data and use a structured
numpy.dtype to allow different types for the various columns. This allows us to handle
the columns with string data.
End of explanation
# Get the full list of stations in the data
all_stids = [s.decode('ascii') for s in all_data['stid']]
# Pull out these specific stations
whitelist = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data = np.concatenate([all_data[all_stids.index(site)].reshape(1,) for site in whitelist])
Explanation: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
End of explanation
# Get all of the station IDs as a list of strings
stid = [s.decode('ascii') for s in data['stid']]
# Get the wind components, converting from m/s to knots as will be appropriate
# for the station plot
u, v = get_wind_components((data['wind_speed'] * units('m/s')).to('knots'),
data['wind_dir'] * units.degree)
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
cloud_frac = (8 * data['cloud_fraction']).astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = [s.decode('ascii') for s in data['weather']]
wx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55,
'-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75}
wx = [wx_codes[s.split()[0] if ' ' in s else s] for s in wx_text]
Explanation: Now that we have the data we want, we need to perform some conversions:
Get a list of strings for the station IDs
Get wind components from speed and direction
Convert cloud fraction values to integer codes [0 - 8]
Map METAR weather codes to WMO codes for weather symbols
End of explanation
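A standalone check of the METAR-to-WMO step above (same wx_codes dictionary as in the code): when a report contains several weather groups, only the first one is kept.

```python
wx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55,
            '-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75}

def metar_to_wmo(report):
    # Use only the first group of a multi-group report, e.g. '-RA BR' -> '-RA'.
    return wx_codes[report.split()[0] if ' ' in report else report]

print(metar_to_wmo('-RA BR'))  # 61
print(metar_to_wmo(''))        # 0 (no weather reported)
```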
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
state_boundaries = feat.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='110m', facecolor='none')
Explanation: Now all the data wrangling is finished, just need to set up plotting and go
Set up the map projection and set up a cartopy feature for state borders
End of explanation
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries)
ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also the fontsize to 12 pt.
stationplot = StationPlot(ax, data['lon'], data['lat'], transform=ccrs.PlateCarree(),
fontsize=12)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point. Each one uses a different color.
stationplot.plot_parameter('NW', data['air_temperature'], color='red')
stationplot.plot_parameter('SW', data['dewpoint'], color='darkgreen')
# A more complex example uses a custom formatter to control how the sea-level pressure
# values are plotted. This uses the standard trailing 3-digits of the pressure value
# in tenths of millibars.
stationplot.plot_parameter('NE', data['slp'],
formatter=lambda v: format(10 * v, '.0f')[-3:])
# Plot the cloud cover symbols in the center location. This uses the codes made above and
# uses the `sky_cover` mapper to convert these values to font codes for the
# weather symbol font.
stationplot.plot_symbol('C', cloud_frac, sky_cover)
# Same this time, but plot current weather to the left of center, using the
# `current_weather` mapper to convert symbols to the right glyphs.
stationplot.plot_symbol('W', wx, current_weather)
# Add wind barbs
stationplot.plot_barb(u, v)
# Also plot the actual text of the station id. Instead of cardinal directions,
# plot further out by specifying a location of 2 increments in x and 0 in y.
stationplot.plot_text((2, 0), stid)
plt.show()
Explanation: The payoff
End of explanation |
9,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of HeadLineSinkString and LeakyLineDoubletString vs. image well
Step1: Consider a well pumping in a phreatic aquifer with $S_y=0.1$. The hydraulic conductivity of the aquifer is 10 m/d and the saturated thickness may be approximated as constant and equal to 20 m. The well is located at $(x,y)=(0,0)$. The discharge of the well is 1000 m$^3$/d and the radius is 0.3 m. There is a very long river with a fixed head located along the line $x=50$ m. The head is computed at $(x,y)=(20,0)$ for the first 20 days after the well starts pumping. The solution for an image well is compared to the solution using a HeadLineSinkString element of different lengths.
Step2: The solution is repeated for the case where there is a long impermeable wall along $x=50$ m rather than a river. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
Explanation: Comparison of HeadLineSinkString and LeakyLineDoubletString vs. image well
End of explanation
ml1 = ModelMaq(kaq=10, z=[20, 0], Saq=[0.1], phreatictop=True, tmin=0.001, tmax=100)
w1 = Well(ml1, 0, 0, rw=0.3, tsandQ=[(0, 1000)])
w2 = Well(ml1, 100, 0, rw=0.3, tsandQ=[(0, -1000)])
ml1.solve()
t = np.linspace(0.1, 20, 100)
h1 = ml1.head(20, 0, t)
plt.plot(t, h1[0], label='river modeled with image well')
plt.xlabel('time (d)')
plt.ylabel('head (m)');
plt.plot(t, h1[0], label='river modeled with image well')
for ystart in [-50, -100]:
ml2 = ModelMaq(kaq=10, z=[20, 0], Saq=[0.1], phreatictop=True, tmin=0.001, tmax=100)
w = Well(ml2, 0, 0, rw=0.3, tsandQ=[(0, 1000)])
yls = np.arange(ystart, -ystart + 1, 20)
xls = 50 * np.ones(len(yls))
lss = HeadLineSinkString(ml2, xy=list(zip(xls, yls)), tsandh='fixed')
ml2.solve()
h2 = ml2.head(20, 0, t)
plt.plot(t, h2[0], '--', label=f'line-sink string from {ystart} to {-ystart}')
plt.title('head at (x,y)=(20,0)')
plt.xlabel('time (d)')
plt.ylabel('head (m)')
plt.legend();
Explanation: Consider a well pumping in a phreatic aquifer with $S_y=0.1$. The hydraulic conductivity of the aquifer is 10 m/d and the saturated thickness may be approximated as constant and equal to 20 m. The well is located at $(x,y)=(0,0)$. The discharge of the well is 1000 m$^3$/d and the radius is 0.3 m. There is a very long river with a fixed head located along the line $x=50$ m. The head is computed at $(x,y)=(20,0)$ for the first 20 days after the well starts pumping. The solution for an image well is compared to the solution using a HeadLineSinkString element of different lengths.
End of explanation
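Before running the transient TTim models, the image-well idea can be sketched with a plain steady-state superposition (hypothetical illustration, not TTim): pairing the pumping well at (0, 0) with an equal-but-opposite image well at (100, 0) forces the head change to vanish everywhere on the river line x = 50.

```python
import numpy as np

Q = 1000.0        # discharge (m^3/d)
T = 10.0 * 20.0   # transmissivity k * H (m^2/d)

def head_change(x, y):
    r_well = np.hypot(x - 0.0, y)     # distance to pumping well at (0, 0)
    r_image = np.hypot(x - 100.0, y)  # distance to image well at (100, 0)
    return Q / (2 * np.pi * T) * (np.log(r_well) - np.log(r_image))

print(head_change(50.0, 37.0))  # 0.0 anywhere on the river line x = 50
print(head_change(20.0, 0.0))   # negative: drawdown near the pumping well
```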
ml1 = ModelMaq(kaq=10, z=[20, 0], Saq=[0.1], phreatictop=True, tmin=0.001, tmax=100)
w1 = Well(ml1, 0, 0, rw=0.3, tsandQ=[(0, 1000)])
w2 = Well(ml1, 100, 0, rw=0.3, tsandQ=[(0, 1000)])
ml1.solve()
t = np.linspace(0.1, 20, 100)
h1 = ml1.head(20, 0, t)
plt.plot(t, h1[0], label='impermeable wall modeled with image well')
plt.xlabel('time (d)')
plt.ylabel('head (m)');
plt.plot(t, h1[0], label='river modeled with image well')
for ystart in [-100, -200, -400]:
ml2 = ModelMaq(kaq=10, z=[20, 0], Saq=[0.1], phreatictop=True, tmin=0.001, tmax=100)
w = Well(ml2, 0, 0, rw=0.3, tsandQ=[(0, 1000)])
yls = np.arange(ystart, -ystart + 1, 20)
xls = 50 * np.ones(len(yls))
lss = LeakyLineDoubletString(ml2, xy=list(zip(xls, yls)), res='imp')
ml2.solve()
h2 = ml2.head(20, 0, t)
plt.plot(t, h2[0], '--', label=f'line-doublet string from {ystart} to {-ystart}')
plt.title('head at (x,y)=(20,0)')
plt.xlabel('time (d)')
plt.ylabel('head (m)')
plt.legend();
Explanation: The solution is repeated for the case where there is a long impermeable wall along $x=50$ m rather than a river.
End of explanation |
9,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Reproducibility
This notebook explains how to get fully reproducible code with TensorFlow.
<table align="left">
<td>
<a target="_blank" href="https
Step1: Warning
Step2: Checklist
Do not run TensorFlow on the GPU.
Beware of multithreading, and make TensorFlow single-threaded.
Set all the random seeds.
Eliminate any other source of variability.
Do Not Run TensorFlow on the GPU
Some operations (like tf.reduce_sum()) favor performance over precision, and their outputs may vary slightly across runs. To get reproducible results, make sure TensorFlow runs on the CPU
Step3: Beware of Multithreading
Because floats have limited precision, the order of execution matters
Step4: You should make sure TensorFlow runs your ops on a single thread
Step5: The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded
Step6: Set all the random seeds!
Python's built-in hash() function
Step7: Since Python 3.3, the result will be different every time, unless you start Python with the PYTHONHASHSEED environment variable set to 0
Step8: Python Random Number Generators (RNGs)
Step9: NumPy RNGs
Step10: TensorFlow RNGs
TensorFlow's behavior is more complex because of two things
Step11: Every time you reset the graph, you need to set the seed again
Step12: If you create your own graph, it will ignore the default graph's seed
Step13: You must set its own seed
Step14: If you set the seed after the random operation is created, the seed has no effect
Step15: A note about operation seeds
You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works
Step16: In the following example, you may think that all random ops will have the same random seed, but rnd3 will actually have a different seed
Step17: Estimators API
Tip
Step18: If you use the Estimators API, make sure to create a RunConfig and set its tf_random_seed, then pass it to the constructor of your estimator
Step19: Let's try it on MNIST
Step20: Unfortunately, the numpy_input_fn does not allow us to set the seed when shuffle=True, so we must shuffle the data ourselves and set shuffle=False.
Step21: The final loss should be exactly 0.46282205.
Instead of using the numpy_input_fn() function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed
Step22: The final loss should be exactly 1.0556093.
```python
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X"
Step23: You should get exactly 97.16% accuracy on the training set at the end of training.
Eliminate other sources of variability
For example, os.listdir() returns file names in an order that depends on how the files were indexed by the file system
Step24: You should sort the file names before you use them | Python Code:
from IPython.display import IFrame
IFrame(src="https://www.youtube.com/embed/Ys8ofBeR2kA", width=560, height=315, frameborder="0", allowfullscreen=True)
Explanation: TensorFlow Reproducibility
This notebook explains how to get fully reproducible code with TensorFlow.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/extra_tensorflow_reproducibility.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this notebook accompanies the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition project, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.
Watch this video to understand the key ideas behind TensorFlow reproducibility:
End of explanation
from __future__ import division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
import numpy as np
import tensorflow as tf
from tensorflow import keras
Explanation: Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.
End of explanation
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
Explanation: Checklist
Do not run TensorFlow on the GPU.
Beware of multithreading, and make TensorFlow single-threaded.
Set all the random seeds.
Eliminate any other source of variability.
Do Not Run TensorFlow on the GPU
Some operations (like tf.reduce_sum()) favor performance over precision, and their outputs may vary slightly across runs. To get reproducible results, make sure TensorFlow runs on the CPU
End of explanation
2. * 5. / 7.
2. / 7. * 5.
Explanation: Beware of Multithreading
Because floats have limited precision, the order of execution matters:
End of explanation
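A minimal pure-Python illustration (not from the original notebook) of why order matters: floating-point addition is not associative, so regrouping the same terms changes the last bits of a result — exactly what happens when multiple threads reduce a sum in a nondeterministic order.

```python
# Regrouping the same three terms changes the result in the last bits.
left_to_right = (0.1 + 0.2) + 0.3   # 0.6000000000000001
regrouped = 0.1 + (0.2 + 0.3)       # 0.6
print(left_to_right == regrouped)   # False
```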
config = tf.ConfigProto(intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1)
with tf.Session(config=config) as sess:
#... this will run single threaded
pass
Explanation: You should make sure TensorFlow runs your ops on a single thread:
End of explanation
with tf.Session() as sess:
#... also single-threaded!
pass
Explanation: The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded:
End of explanation
print(set("Try restarting the kernel and running this again"))
print(set("Try restarting the kernel and running this again"))
Explanation: Set all the random seeds!
Python's built-in hash() function
End of explanation
if os.environ.get("PYTHONHASHSEED") != "0":
raise Exception("You must set PYTHONHASHSEED=0 when starting the Jupyter server to get reproducible results.")
Explanation: Since Python 3.3, the result will be different every time, unless you start Python with the PYTHONHASHSEED environment variable set to 0:
shell
PYTHONHASHSEED=0 python
```pycon
print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
exit()
```
shell
PYTHONHASHSEED=0 python
```pycon
print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
```
Alternatively, you could set this environment variable system-wide, but that's probably not a good idea, because this automatic randomization was introduced for security reasons.
Unfortunately, setting the environment variable from within Python (e.g., using os.environ["PYTHONHASHSEED"]="0") will not work, because Python reads it upon startup. For Jupyter notebooks, you have to start the Jupyter server like this:
shell
PYTHONHASHSEED=0 jupyter notebook
End of explanation
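If restarting the process with PYTHONHASHSEED is not practical, a common workaround (a sketch, not part of this notebook) is to avoid the built-in hash() for anything that must be stable and derive integers from hashlib instead, which is deterministic across runs and machines:

```python
import hashlib

def stable_hash(text):
    # sha256 is deterministic across processes, unlike hash() on str/bytes.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "little")

print(stable_hash("Now the output is stable across runs"))
```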
import random
random.seed(42)
print(random.random())
print(random.random())
print()
random.seed(42)
print(random.random())
print(random.random())
Explanation: Python Random Number Generators (RNGs)
End of explanation
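A related stdlib tip (a sketch, not from the notebook): dedicated random.Random instances carry their own state, so reproducibility survives even if other code reseeds the module-level generator:

```python
import random

rng_a = random.Random(42)
random.seed(0)           # some other code reseeds the global generator...
rng_b = random.Random(42)
# ...but dedicated instances are unaffected and still agree with each other.
print(rng_a.random() == rng_b.random())  # True
```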
import numpy as np
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
print()
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
Explanation: NumPy RNGs
End of explanation
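Similarly for NumPy (a sketch, not from the notebook): a dedicated np.random.RandomState keeps its own stream, so calls to np.random.seed() elsewhere cannot break reproducibility:

```python
import numpy as np

rng = np.random.RandomState(42)
np.random.seed(0)       # a global reseed elsewhere...
print(rng.rand())       # ...does not affect the dedicated instance
print(np.random.RandomState(42).rand())  # same first value again
```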
import tensorflow as tf
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
Explanation: TensorFlow RNGs
TensorFlow's behavior is more complex because of two things:
* you create a graph, and then you execute it. The random seed must be set before you create the random operations.
* there are two seeds: one at the graph level, and one at the individual random operation level.
End of explanation
tf.reset_default_graph()
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
Explanation: Every time you reset the graph, you need to set the seed again:
End of explanation
tf.reset_default_graph()
tf.set_random_seed(42)
graph = tf.Graph()
with graph.as_default():
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
Explanation: If you create your own graph, it will ignore the default graph's seed:
End of explanation
graph = tf.Graph()
with graph.as_default():
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
Explanation: When you create your own graph, you must set that graph's own seed:
End of explanation
tf.reset_default_graph()
rnd = tf.random_uniform(shape=[])
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
Explanation: If you set the seed after the random operation is created, the seed has no effect:
End of explanation
tf.reset_default_graph()
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
Explanation: A note about operation seeds
You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works:
| Graph seed | Op seed | Resulting seed |
|------------|---------|--------------------------------|
| None | None | Random |
| graph_seed | None | f(graph_seed, op_index) |
| None | op_seed | f(default_graph_seed, op_seed) |
| graph_seed | op_seed | f(graph_seed, op_seed) |
f() is a deterministic function.
where op_index = graph._last_id. When there is a graph seed, different random ops without op seeds will have different outputs; however, each of them will have the same sequence of outputs at every run.
In eager mode, there is a global seed instead of graph seed (since there is no graph in eager mode).
End of explanation
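The exact combining function is internal to TensorFlow, but the idea — deriving a per-op seed deterministically from two inputs — can be sketched like this (purely illustrative, not TF's actual f()):

```python
def combine_seeds(graph_seed, op_seed):
    """Illustrative stand-in for TensorFlow's internal combining
    function f(): any deterministic mix of the two seeds captures the
    idea, but this is NOT the real implementation."""
    return hash((graph_seed, op_seed)) & 0x7FFFFFFF

# Integer hashing is not randomized in CPython, so the derived seed is
# stable across runs as well as within one.
print(combine_seeds(42, 7) == combine_seeds(42, 7))  # True
```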
tf.reset_default_graph()
tf.set_random_seed(42)
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
Explanation: In the following example, you may think that all random ops will have the same random seed, but rnd3 will actually have a different seed:
End of explanation
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
Explanation: Estimators API
Tip: in a Jupyter notebook, you probably want to set the random seeds regularly so that you can come back and run the notebook from there (instead of from the beginning) and still get reproducible outputs.
End of explanation
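Since the three seeds need resetting together so often, a small helper can cut the boilerplate (a sketch; the TensorFlow call is guarded so the helper also runs without TF installed, and it handles both the 1.x and 2.x seed APIs):

```python
import random

import numpy as np

def reset_seeds(seed=42):
    """Reset every RNG in one call (notebook convenience sketch)."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import tensorflow as tf
        if hasattr(tf, "set_random_seed"):  # TF 1.x, as in this notebook
            tf.set_random_seed(seed)
        else:                               # TF 2.x equivalent
            tf.random.set_seed(seed)
    except ImportError:
        pass  # allow the helper to run without TensorFlow installed

reset_seeds(42)
```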
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
Explanation: If you use the Estimators API, make sure to create a RunConfig and set its tf_random_seed, then pass it to the constructor of your estimator:
End of explanation
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
Explanation: Let's try it on MNIST:
End of explanation
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train_shuffled}, y=y_train_shuffled, num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
Explanation: Unfortunately, the numpy_input_fn does not allow us to set the seed when shuffle=True, so we must shuffle the data ourselves and set shuffle=False.
End of explanation
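The key point of the manual shuffle is that one index permutation is applied to both arrays, so features and labels stay aligned; a toy sketch makes that visible:

```python
import numpy as np

# Reproducible shuffle that keeps features and labels aligned
# (toy data: row i of X is [2*i, 2*i + 1] and y[i] is i).
X = np.arange(10).reshape(5, 2)
y = np.arange(5)

rs = np.random.RandomState(42)
indices = rs.permutation(len(X))
X_shuf, y_shuf = X[indices], y[indices]
print(np.array_equal(X_shuf[:, 0], 2 * y_shuf))  # True: rows stayed aligned
```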
def create_dataset(X, y=None, n_epochs=1, batch_size=32,
buffer_size=1000, seed=None):
dataset = tf.data.Dataset.from_tensor_slices(({"X": X}, y))
dataset = dataset.repeat(n_epochs)
dataset = dataset.shuffle(buffer_size, seed=seed)
return dataset.batch(batch_size)
input_fn=lambda: create_dataset(X_train, y_train, seed=42)
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
dnn_clf.train(input_fn=input_fn)
Explanation: The final loss should be exactly 0.46282205.
Instead of using the numpy_input_fn() function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed:
End of explanation
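The Data API's shuffle(buffer_size, seed=...) fills a bounded buffer and samples from it; a pure-Python sketch of that buffered-shuffle idea (illustrative only, not TensorFlow's implementation):

```python
import random

def buffered_shuffle(items, buffer_size, seed=None):
    """Pure-Python sketch of the buffered-shuffle idea behind
    tf.data.Dataset.shuffle (illustrative only, not TF's code)."""
    rng = random.Random(seed)
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) >= buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    rng.shuffle(buf)       # drain whatever is left at the end
    while buf:
        yield buf.pop()

print(list(buffered_shuffle(range(10), buffer_size=4, seed=42)))
```

With a fixed seed the generator yields the same randomized order on every run, which is exactly why the seed argument matters for reproducibility.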
from tensorflow import keras  # import needed for the keras.* calls below

keras.backend.clear_session()
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10)
Explanation: The final loss should be exactly 1.0556093.
```python
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train_shuffled}, y=y_train_shuffled,
num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
```
Keras API
If you use the Keras API, all you need to do is set the random seed any time you clear the session:
End of explanation
for i in range(10):
with open("my_test_foo_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_foo_")]
for i in range(10):
with open("my_test_bar_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_bar_")]
Explanation: You should get exactly 97.16% accuracy on the training set at the end of training.
Eliminate other sources of variability
For example, os.listdir() returns file names in an order that depends on how the files were indexed by the file system:
End of explanation
filenames = os.listdir()
filenames.sort()
[f for f in filenames if f.startswith("my_test_foo_")]
for f in os.listdir():
if f.startswith("my_test_foo_") or f.startswith("my_test_bar_"):
os.remove(f)
Explanation: You should sort the file names before you use them:
End of explanation |
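A compact alternative is to sort inline with sorted(), which returns a new list and leaves os.listdir()'s result untouched (demonstrated here in a throwaway directory):

```python
import os
import tempfile

# sorted() yields a deterministic order no matter how the file system
# indexed the files.
with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        open(os.path.join(d, "item_{}".format(i)), "w").close()
    print(sorted(os.listdir(d)))  # ['item_0', 'item_1', 'item_2']
```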
9,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
A quick demo of how to control the ASCII format of numeric arrays written by FloPy
load and run the Freyberg model
Step1: Each Util2d instance now has a .format attribute, which is an ArrayFormat instance
Step2: The ArrayFormat class exposes each of the attributes seen in the ArrayFormat.___str___() call. ArrayFormat also exposes .fortran, .py and .numpy atrributes, which are the respective format descriptors
Step3: (re)-setting .format
We can reset the format using a standard fortran type format descriptor
Step4: Let's load the model we just wrote and check that the desired botm[0].format was used
Step5: We can also reset individual format components (we can also generate some warnings)
Step6: We can also select free format. Note that setting to free format resets the format attributes to the default, max precision | Python Code:
%matplotlib inline
import sys
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy
#Set name of MODFLOW exe
# assumes executable is in users path statement
version = 'mf2005'
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
mfexe = exe_name
#Set the paths
loadpth = os.path.join('..', 'data', 'freyberg')
modelpth = os.path.join('data')
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
ml = flopy.modflow.Modflow.load('freyberg.nam', model_ws=loadpth,
exe_name=exe_name, version=version)
ml.model_ws = modelpth
ml.write_input()
success, buff = ml.run_model()
if not success:
print ('Something bad happened.')
files = ['freyberg.hds', 'freyberg.cbc']
for f in files:
if os.path.isfile(os.path.join(modelpth, f)):
msg = 'Output file located: {}'.format(f)
print (msg)
else:
errmsg = 'Error. Output file cannot be found: {}'.format(f)
print (errmsg)
Explanation: FloPy
A quick demo of how to control the ASCII format of numeric arrays written by FloPy
load and run the Freyberg model
End of explanation
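Before running, you can check that the MODFLOW executable is actually reachable (a small sketch using only the standard library; "mf2005" is the exe_name used in this example):

```python
import shutil

# Optional sanity check: is the MODFLOW executable visible on PATH?
exe = shutil.which("mf2005")
if exe is None:
    print("mf2005 not found on PATH - install MODFLOW or adjust exe_name")
else:
    print("Found MODFLOW executable at", exe)
```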
print(ml.lpf.hk[0].format)
Explanation: Each Util2d instance now has a .format attribute, which is an ArrayFormat instance:
End of explanation
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
Explanation: The ArrayFormat class exposes each of the attributes seen in the ArrayFormat.__str__() call. ArrayFormat also exposes .fortran, .py and .numpy attributes, which are the respective format descriptors:
End of explanation
ml.dis.botm[0].format.fortran = "(6f10.4)"
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
ml.write_input()
success, buff = ml.run_model()
Explanation: (re)-setting .format
We can reset the format using a standard fortran type format descriptor
End of explanation
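To see what a descriptor like "(6f10.4)" encodes (6 values per line, float type, width 10, 4 decimals), here is a small parsing sketch — illustrative only; flopy's own parser handles many more cases:

```python
import re

def parse_fortran_descriptor(desc):
    """Parse a simple Fortran descriptor like '(6f10.4)' into
    (count, type, width, decimal). Illustrative sketch only - flopy's
    own parser handles many more cases."""
    m = re.match(r"\((\d+)([defg])(\d+)\.(\d+)\)", desc.lower())
    if not m:
        raise ValueError("unsupported descriptor: {}".format(desc))
    count, ftype, width, decimal = m.groups()
    return int(count), ftype, int(width), int(decimal)

print(parse_fortran_descriptor("(6f10.4)"))  # (6, 'f', 10, 4)
```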
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
Explanation: Let's load the model we just wrote and check that the desired botm[0].format was used:
End of explanation
ml.dis.botm[0].format.width = 9
ml.dis.botm[0].format.decimal = 1
print(ml.dis.botm[0].format)  # print the model we just modified (ml), not the reloaded ml1
Explanation: We can also reset individual format components (we can also generate some warnings):
End of explanation
ml.dis.botm[0].format.free = True
print(ml.dis.botm[0].format)  # show the freshly set free format on ml
ml.write_input()
success, buff = ml.run_model()
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
Explanation: We can also select free format. Note that setting to free format resets the format attributes to the default, max precision:
End of explanation |
9,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 01 - Hello World em Aprendizagem de Máquina
Para começar o nosso estudo de aprendizagem de máquina vamos começar com um exemplo simples de aprendizagem. O objetivo aqui é entender o que é Aprendizagem de Máquina e como podemos usá-la. Não serão apresentados detalhes dos métodos aplicados, eles serão explicados ao longo do curso.
O material do curso será baseado no curso Intro to Machine Learning da Udacity e também no conteúdo de alguns livros
Step1: Vamos agora criar um modelo baseado nesse conjunto de dados. Vamos utilizar o algoritmo de árvore de decisão para fazer isso.
Step2: clf consiste no classificador baseado na árvore de decisão. Precisamos treina-lo com o conjunto da base de dados de treinamento.
Step3: Observer que o classificador recebe com parâmetro as features e os labels. Esse classificador é um tipo de classificador supervisionado, logo precisa conhecer o "gabarito" das instâncias que estão sendo passadas.
Uma vez que temos o modelo construído, podemos utiliza-lo para classificar uma instância desconhecida.
Step4: Ele classificou essa fruta como sendo uma Laranja.
HelloWorld++
Vamos estender um pouco mais esse HelloWorld. Claro que o exemplo anterior foi só para passar a idéia de funcionamento de um sistema desse tipo. No entanto, o nosso programa não está aprendendo muita coisa já que a quantidade de exemplos passada para ele é muito pequena. Vamos trabalhar com um exemplo um pouco maior.
Para esse exemplo, vamos utilizar o Iris Dataset. Esse é um clássico dataset utilizado na aprendizagem de máquina. Ele tem o propósito mais didático e a tarefa é classificar 3 espécies de um tipo de flor (Iris). A classificação é feita a partir de 4 características da planta
Step5: Imprimindo as características
Step6: Imprimindo os labels
Step7: Imprimindo os dados
Step8: Antes de continuarmos, vale a pena mostrar que o Scikit-Learn exige alguns requisitos para se trabalhar com os dados. Esse tutorial não tem como objetivo fazer um estudo detalhado da biblioteca, mas é importante tomar conhecimento de tais requisitos para entender alguns exemplos que serão mostrados mais à frente. São eles
Step9: Quando importamos a base diretamente do ScikitLearn, as features e labels já vieram em objetos distintos. Só por questão de simplificação dos nomes, vou renomeá-los.
Step10: Construindo e testando um modelo de treinamento
Uma vez que já temos nossa base de dados, o próximo passo é construir nosso modelo de aprendizagem de máquina capaz de utilizar o dataset. No entanto, antes de construirmos nosso modelo é preciso saber qual modelo desenvolver e para isso precisamos definir qual o nosso propósito na tarefa de treinamento.
Existem vários tipos de tarefas dentro da aprendizagem de máquina. Como dito anteriormente, vamos trabalhar com a tarefa de classificação. A classificação consiste em criar um modelo a partir de dados que estejam de alguma forma classificados. O modelo gerado é capaz de determinar qual classe uma instância pertence a partir dos dados que foram dados como entrada.
Na apresentação do dataset da Iris vimos que cada instância é classificada com um tipo (no caso, o tipo da espécie a qual a planta pertence). Sendo assim, vamos tratar esse problema como um problema de classificação. Existem outras tarefas dentro da aprendizagem de máquina, como
Step11: Agora que já temos nosso dataset separado, vamos criar o classificador e treina-lo com os dados de treinamento.
Step12: O classificador foi treinado, agora vamos utiliza-lo para classificar as instâncias da base de teste.
Step13: Como estamos trabalhando com o aprendizado supervisionado, podemos comparar com o target que já conhecemos da base de teste.
Step14: Observe que, neste caso, nosso classificador teve uma acurácia de 100% acertando todas as instâncias informadas. Claro que esse é só um exemplo e normalmente trabalhamos com valores de acurácias menores que 100%. No entanto, vale ressaltar que para algumas tarefas, como reconhecimento de imagens, as taxas de acurácias estão bem próximas de 100%.
Visualizando nosso modelo
A vantagem em se trablhar com a árvore de decisão é que podemos visualizar exatamente o que modelo faz. De forma geral, uma árvore de decisão é uma árvore que permite serparar o conjunto de dados. Cada nó da árvore é "uma pergunta" que direciona aquela instância ao longo da árvore. Nos nós folha da árvore se encontram as classes. Esse tipo de modelo será mais detalhado mais a frente no nosso curso.
Para isso, vamos utilizar um código que visualiza a árvore gerada.
Step15: Observe que nos nós internos pergunta sim ou não para alguma característica. Por exemplo, no nó raiz a pergunta é "pedal width é menor ou igual a 0.8". Isso significa que se a instância que estou querendo classificar possui pedal width menor que 0.8 ela será classificada como setosa. Se isso não for true ela será redirecionada para outro nó que irá analisar outra característica. Esse processo continua até que consiga atingir um nó folha. Como execício faça a classificação, acompahando na tabela, para as instâncias de testes. | Python Code:
# Turn the textual information into numbers: (0) Irregular, (1) Smooth.
# The labels are also turned into numbers: (0) Apple and (1) Orange
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]
Explanation: Tutorial 01 - Hello World in Machine Learning
To kick off our study of machine learning, let's start with a simple learning example. The goal here is to understand what Machine Learning is and how we can use it. Details of the methods applied will not be presented here; they will be explained throughout the course.
The course material is based on Udacity's Intro to Machine Learning course and also on the content of a few books:
[1]: Inteligência Artificial. Uma Abordagem de Aprendizado de Máquina (FACELI et al., 2011)
[2]: Machine Learning: An Algorithmic Perspective, Second Edition (MARSLAND et al., 2014)
[3]: Redes Neurais Artificiais Para Engenharia e Ciências Aplicadas. Fundamentos Teóricos e Aspectos Práticos (da SILVA I., 2016)
[4]: An Introduction to Statistical Learning with Applications in R (JAMES, G. et al., 2015)
In terms of programming language, we will use Python with the Scikit-Learn and TensorFlow libraries. Helper libraries such as Pandas, NumPy, SciPy and Matplotlib, among others, will also be needed.
The material of this first lesson is based on two videos:
Hello World - Machine Learning Recipes #1 (by Josh Gordon - Google)
Visualizing a Decision Tree - Machine Learning Recipes #2 (by Josh Gordon - Google)
Let's Get Started :)
The first step is to understand what Machine Learning is. One definition given in [2] is the following:
Machine Learning, then, is about making computers modify or adapt their actions (whether these actions are making predictions, or controlling a robot) so that these actions get more accurate, where accuracy is measured by how well the chosen actions reflect the correct ones.
We can view machine learning as a field of Artificial Intelligence that aims to give computers the ability to modify and adapt their actions according to the problem and, along the way, improve their performance.
This is the area behind systems we use every day, such as:
Automatic translation systems
Movie recommendation systems
Personal assistants such as Siri
And many other applications that will be covered in detail throughout the course.
All these systems are possible thanks to extensive work on a series of algorithms that make up machine learning. There are several ways to classify this set of algorithms. A simple one is to divide them into 4 groups. Quoting [2], we have:
Supervised Learning: A training set of examples with the correct responses (targets) is provided and, based on this training set, the algorithm generalises to respond correctly to all possible inputs. This is also called learning from exemplars.
Unsupervised Learning: Correct responses are not provided, but instead the algorithm tries to identify similarities between the inputs so that inputs that have something in common are categorised together. The statistical approach to unsupervised learning is known as density estimation.
Reinforcement Learning: This is somewhere between supervised and unsupervised learning. The algorithm gets told when the answer is wrong, but does not get told how to correct it. It has to explore and try out different possibilities until it works out how to get the answer right. Reinforcement learning is sometimes called learning with a critic because of this monitor that scores the answer, but does not suggest improvements.
Evolutionary Learning: Biological evolution can be seen as a learning process: biological organisms adapt to improve their survival rates and chance of having offspring in their environment. We'll look at how we can model this in a computer, using an idea of fitness, which corresponds to a score for how good the current solution is.
In this course we will explore some of the main algorithms from each group.
Hello World
To begin, let's get a feel for the process the algorithms carry out, using a simple classification task. Classification is one of the supervised learning techniques: given a dataset, you must assign each instance of this set to a class. This will be the subject of the next tutorial, where it will be covered in more detail.
To keep things simple, imagine the following task: I want to build a program that classifies oranges and apples. To understand the problem, watch: https://www.youtube.com/watch?v=cKxRvEZd3Mw
It is easy to see that we cannot simply program every variation of characteristics that apples and oranges can have. However, we can learn patterns that characterize an apple and an orange. If a new fruit is passed to the program, the presence or absence of these patterns will let us classify it as an apple, an orange, or another fruit.
We will work with an example database containing characteristics collected from oranges and apples. To keep it simple, we will use two characteristics: weight and texture. In machine learning, the characteristics that make up our dataset are called features.
Weight | Texture | Class (label)
------------ | ------------- | -------------
150g | Irregular | Orange
170g | Irregular | Orange
140g | Smooth | Apple
130g | Smooth | Apple
Each row of our database is called an instance (example). Each example is classified according to a label or class. In this case, we will work with two classes, the two types of fruit.
The whole table is the training data. Think of this data as what our program will use to learn. In general, and very roughly, the more data we have, the better our program will learn.
Let's simulate this problem in code.
End of explanation
from sklearn import tree
clf = tree.DecisionTreeClassifier()
Explanation: Now let's create a model based on this dataset. We will use the decision tree algorithm to do this.
End of explanation
clf = clf.fit(features, labels)
Explanation: clf is the decision-tree-based classifier. We need to train it with the training dataset.
End of explanation
# Weight 160 and Irregular texture. Note that this kind of fruit is not present in the training data.
print(clf.predict([[160, 0]]))
Explanation: Note that the classifier receives the features and the labels as parameters. This is a supervised classifier, so it needs to know the "ground truth" of the instances being passed in.
Once the model is built, we can use it to classify an unknown instance.
End of explanation
from sklearn.datasets import load_iris
dataset_iris = load_iris()
Explanation: It classified this fruit as an Orange.
HelloWorld++
Let's extend this HelloWorld a bit further. Of course, the previous example was only meant to convey the idea of how such a system works. However, our program is not learning much, since the number of examples given to it is very small. Let's work with a slightly larger example.
For this example, we will use the Iris Dataset, a classic dataset in machine learning. Its purpose is mostly didactic: the task is to classify 3 species of a flower (Iris). Classification is based on 4 characteristics of the plant: sepal length, sepal width, petal length and petal width.
<img src="http://5047-presscdn.pagely.netdna-cdn.com/wp-content/uploads/2015/04/iris_petal_sepal.png" />
The flowers are classified into 3 types: Iris Setosa, Iris Versicolor and Iris Virginica.
Let's get to the code ;)
The first step is to load the dataset. The files for this dataset are available in the UCI Machine Learning Repository. However, since it is such a widely used dataset, Scikit-Learn lets us import it directly from the library.
End of explanation
print(dataset_iris.feature_names)
Explanation: Printing the features:
End of explanation
print(dataset_iris.target_names)
Explanation: Printing the labels:
End of explanation
print(dataset_iris.data)
# In this list, 0 = setosa, 1 = versicolor and 2 = virginica
print(dataset_iris.target)
Explanation: Printing the data:
End of explanation
# Check the types of the features and the classes
print(type(dataset_iris.data))
print(type(dataset_iris.target))
# Check the size of the features (first dimension = number of instances, second dimension = number of attributes)
print(dataset_iris.data.shape)
# Check the size of the labels
print(dataset_iris.target.shape)
Explanation: Before we continue, it is worth noting that Scikit-Learn imposes some requirements for working with the data. This tutorial does not aim to study the library in detail, but it is important to be aware of these requirements in order to understand some examples shown later. They are:
The features and the labels must be stored in separate objects
Both must be numeric
Both must be represented as NumPy arrays
Both must have specific sizes
Let's check this information on the Iris dataset.
End of explanation
X = dataset_iris.data
Y = dataset_iris.target
Explanation: When we import the dataset directly from Scikit-Learn, the features and labels already come in separate objects. Just to simplify the names, I will rename them.
End of explanation
import numpy as np
# Choosing the indices that will be removed from the training set to form the test set
test_idx = [0, 50, 100] # instances 0, 50 and 100 of the dataset
# Building the training set
train_target = np.delete(dataset_iris.target, test_idx)
train_data = np.delete(dataset_iris.data, test_idx, axis=0)
# Building the test set
test_target = dataset_iris.target[test_idx]
test_data = dataset_iris.data[test_idx]
print("Original data size: ", dataset_iris.data.shape)  # np.delete does not modify the original data
print("Training set size: ", train_data.shape)
print("Test set size: ", test_data.shape)
Explanation: Building and testing a training model
Once we have our dataset, the next step is to build a machine learning model capable of using it. However, before building the model we need to know which model to develop, and for that we must define our goal for the training task.
There are several kinds of tasks within machine learning. As mentioned earlier, we will work with the classification task. Classification consists of creating a model from data that is labeled in some way. The generated model can determine which class an instance belongs to based on the input data.
In the presentation of the Iris dataset we saw that each instance is labeled with a type (in this case, the species the plant belongs to). Therefore, we will treat this problem as a classification problem. There are other tasks within machine learning, such as clustering and grouping, among others. More details about each of them will be presented in the machine learning lecture.
The next step is to build the model. To do that, we will follow 4 steps:
Step 1: Import the classifier you want to use
Step 2: Instantiate the model
Step 3: Train the model
Step 4: Make predictions for new values
In this presentation, we will keep using the Decision Tree model. The reason to use it at this stage is that it is easy to visualize what the model does with the data.
For our example, we will train the model with one set of data and then test it with data that was not used for training. To do this, we remove some instances from the training set and use them later for testing. We call this splitting the data into a training set and a test set. It is easy to see that it makes no sense to test our model on a pattern it already knows; hence the need for this split.
End of explanation
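The manual index-deletion split works fine; scikit-learn also ships a helper that does the same job with a reproducible seed (a quick self-contained sketch):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Same train/test idea as the manual np.delete split, with a
# reproducible seed.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (120, 4) (30, 4)
```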
clf = tree.DecisionTreeClassifier()
clf.fit(train_data, train_target)
Explanation: Now that we have split our dataset, let's create the classifier and train it with the training data.
End of explanation
print(clf.predict(test_data))
Explanation: The classifier has been trained; now let's use it to classify the instances of the test set.
End of explanation
print(test_target)
Explanation: Since we are working with supervised learning, we can compare against the targets we already know for the test set.
End of explanation
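Rather than eyeballing the two printed arrays, the agreement can be computed directly with accuracy_score; this sketch is self-contained (it re-creates the same 3-instance split, since the notebook variables may not be in scope here):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Self-contained: re-create the same 3-instance test split and score it.
iris = load_iris()
test_idx = [0, 50, 100]
clf = DecisionTreeClassifier().fit(
    np.delete(iris.data, test_idx, axis=0),
    np.delete(iris.target, test_idx))
acc = accuracy_score(iris.target[test_idx], clf.predict(iris.data[test_idx]))
print("Accuracy:", acc)
```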
from IPython.display import Image
import pydotplus
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=dataset_iris.feature_names,
class_names=dataset_iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png(), width=800)
Explanation: Note that, in this case, our classifier achieved 100% accuracy, getting every given instance right. Of course, this is just an example, and we normally work with accuracies below 100%. Still, it is worth noting that for some tasks, such as image recognition, accuracy rates are very close to 100%.
Visualizing our model
The advantage of working with a decision tree is that we can visualize exactly what the model does. In general, a decision tree is a tree that lets us separate the dataset. Each node of the tree is "a question" that routes an instance along the tree, and the classes sit at the leaf nodes. This kind of model will be covered in more detail later in the course.
To do this, let's use some code that visualizes the generated tree.
End of explanation
print(test_data)
print(test_target)
Explanation: Note that each internal node asks a yes/no question about some feature. For example, at the root node the question is "petal width less than or equal to 0.8". This means that if the instance I want to classify has a petal width below 0.8, it will be classified as setosa. If that is not true, it is routed to another node that examines another feature. This process continues until a leaf node is reached. As an exercise, follow the classification along the table for the test instances.
End of explanation
9,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps
Step5: Configure project and resource names
Step6: REGION - Used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
MODEL_ARTIFACT_DIR - Folder directory path to your model artifacts within a Cloud Storage bucket, for example
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Write your pre-processor
Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
Create preprocess.py, which contains a class to do this scaling
Step10: Train and store model with pre-processor
Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.
At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file
Step11: Upload model artifacts and custom code to Cloud Storage
Before you can deploy your model for serving, Vertex AI needs access to the following files in Cloud Storage
Step12: Build a FastAPI server
Step13: Add pre-start script
FastAPI will execute this script before starting up the server. The PORT environment variable is set to equal AIP_HTTP_PORT in order to run FastAPI on the same port expected by Vertex AI.
Step14: Store test instances to use later
To learn more about formatting input instances in JSON, read the documentation.
Step15: Build and push container to Artifact Registry
Build your container
Optionally copy in your credentials to run the container locally.
Step16: Write the Dockerfile, using tiangolo/uvicorn-gunicorn-fastapi as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn. Visit the FastAPI docs to read more about deploying FastAPI with Docker.
Step17: Build the image and tag the Artifact Registry path that you will push to.
Step18: Run and test the container locally (optional)
Run the container locally in detached mode and provide the environment variables that the container requires. These env vars will be provided to the container by Vertex Prediction once deployed. Test the /health and /predict routes, then stop the running image.
Step19: Push the container to artifact registry
Configure Docker to access Artifact Registry. Then push your container image to your Artifact Registry repository.
Step20: Deploy to Vertex AI
Use the Python SDK to upload and deploy your model.
Upload the custom container model
Step21: Deploy the model on Vertex AI
After this step completes, the model is deployed and ready for online prediction.
Step22: Send predictions
Using Python SDK
Step23: Using REST
Step24: Using gcloud CLI
Step25: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
%%writefile requirements.txt
joblib~=1.0
numpy~=1.20
scikit-learn~=0.24
google-cloud-storage>=1.26.0,<2.0.0dev
# Required in Docker serving container
%pip install -U --user -r requirements.txt
# For local FastAPI development and running
%pip install -U --user "uvicorn[standard]>=0.12.0,<0.14.0" fastapi~=0.63
# Vertex SDK for Python
%pip install -U --user google-cloud-aiplatform
Explanation: <table align="left">
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This tutorial walks through building a custom container to serve a scikit-learn model on Vertex Predictions. You will use the FastAPI Python web server framework to create a prediction and health endpoint.
You will also cover incorporating a pre-processor from training into your online serving.
Dataset
This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that
marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.
This tutorial uses the copy of the Iris dataset included in the
scikit-learn library.
Objective
The goal is to:
- Train a model that uses a flower's measurements as input to predict what type of iris it is.
- Save the model and its serialized pre-processor
- Build a FastAPI server to handle predictions and health checks
- Build a custom container with model artifacts
- Upload and deploy custom container to Vertex Prediction
This tutorial focuses more on deploying this model with Vertex AI than on
the design of the model itself.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Learn about Vertex AI
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
Docker
Git
Google Cloud SDK (gcloud)
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment, such as NumPy, Scikit-learn, FastAPI, Uvicorn, and joblib. Use the latest major GA version of each package.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
# Get your Google Cloud project ID from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
try:
PROJECT_ID = shell_output[0]
except IndexError:
PROJECT_ID = None
print("Project ID:", PROJECT_ID)
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! or % as shell commands, and it interpolates Python variables with $ or {} into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING") and not os.getenv(
"GOOGLE_APPLICATION_CREDENTIALS"
):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
REGION = "us-central1" # @param {type:"string"}
MODEL_ARTIFACT_DIR = "custom-container-prediction-model" # @param {type:"string"}
REPOSITORY = "custom-container-prediction" # @param {type:"string"}
IMAGE = "sklearn-fastapi-server" # @param {type:"string"}
MODEL_DISPLAY_NAME = "sklearn-custom-container" # @param {type:"string"}
Explanation: Configure project and resource names
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
Explanation: REGION - Used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
MODEL_ARTIFACT_DIR - Folder directory path to your model artifacts within a Cloud Storage bucket, for example: "my-models/fraud-detection/trial-4"
REPOSITORY - Name of the Artifact Repository to create or use.
IMAGE - Name of the container image that will be pushed.
MODEL_DISPLAY_NAME - Display name of Vertex AI Model resource.
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
To update your model artifacts without re-building the container, you must upload your model
artifacts and any custom code to Cloud Storage.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
%mkdir app
%%writefile app/preprocess.py
import numpy as np
class MySimpleScaler(object):
def __init__(self):
self._means = None
self._stds = None
def preprocess(self, data):
if self._means is None: # during training only
self._means = np.mean(data, axis=0)
if self._stds is None: # during training only
self._stds = np.std(data, axis=0)
if not self._stds.all():
raise ValueError("At least one column has standard deviation of 0.")
return (data - self._means) / self._stds
Explanation: Write your pre-processor
Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
Create preprocess.py, which contains a class to do this scaling:
End of explanation
%cd app/
import pickle
import joblib
from preprocess import MySimpleScaler
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
iris = load_iris()
scaler = MySimpleScaler()
X = scaler.preprocess(iris.data)
y = iris.target
model = RandomForestClassifier()
model.fit(X, y)
joblib.dump(model, "model.joblib")
with open("preprocessor.pkl", "wb") as f:
pickle.dump(scaler, f)
Explanation: Train and store model with pre-processor
Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.
At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file:
End of explanation
!gsutil cp model.joblib preprocessor.pkl {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}/
%cd ..
Explanation: Upload model artifacts and custom code to Cloud Storage
Before you can deploy your model for serving, Vertex AI needs access to the following files in Cloud Storage:
model.joblib (model artifact)
preprocessor.pkl (model artifact)
Run the following commands to upload your files:
End of explanation
%%writefile app/main.py
from fastapi import FastAPI, Request
import joblib
import json
import numpy as np
import pickle
import os
from google.cloud import storage
from preprocess import MySimpleScaler
from sklearn.datasets import load_iris
app = FastAPI()
gcs_client = storage.Client()
with open("preprocessor.pkl", 'wb') as preprocessor_f, open("model.joblib", 'wb') as model_f:
gcs_client.download_blob_to_file(
f"{os.environ['AIP_STORAGE_URI']}/preprocessor.pkl", preprocessor_f
)
gcs_client.download_blob_to_file(
f"{os.environ['AIP_STORAGE_URI']}/model.joblib", model_f
)
with open("preprocessor.pkl", "rb") as f:
preprocessor = pickle.load(f)
_class_names = load_iris().target_names
_model = joblib.load("model.joblib")
_preprocessor = preprocessor
@app.get(os.environ['AIP_HEALTH_ROUTE'], status_code=200)
def health():
return {}
@app.post(os.environ['AIP_PREDICT_ROUTE'])
async def predict(request: Request):
body = await request.json()
instances = body["instances"]
inputs = np.asarray(instances)
preprocessed_inputs = _preprocessor.preprocess(inputs)
outputs = _model.predict(preprocessed_inputs)
return {"predictions": [_class_names[class_num] for class_num in outputs]}
Explanation: Build a FastAPI server
End of explanation
%%writefile app/prestart.sh
#!/bin/bash
export PORT=$AIP_HTTP_PORT
Explanation: Add pre-start script
FastAPI will execute this script before starting up the server. The PORT environment variable is set to equal AIP_HTTP_PORT in order to run FastAPI on same the port expected by Vertex AI.
End of explanation
%%writefile instances.json
{
"instances": [
[6.7, 3.1, 4.7, 1.5],
[4.6, 3.1, 1.5, 0.2]
]
}
Explanation: Store test instances to use later
To learn more about formatting input instances in JSON, read the documentation.
End of explanation
# NOTE: Copy in credentials to run locally, this step can be skipped for deployment
%cp $GOOGLE_APPLICATION_CREDENTIALS app/credentials.json
Explanation: Build and push container to Artifact Registry
Build your container
Optionally copy in your credentials to run the container locally.
End of explanation
%%writefile Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
Explanation: Write the Dockerfile, using tiangolo/uvicorn-gunicorn-fastapi as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn. Visit the FastAPI docs to read more about deploying FastAPI with Docker.
End of explanation
!docker build \
--tag={REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} \
.
Explanation: Build the image and tag the Artifact Registry path that you will push to.
End of explanation
!docker rm local-iris
!docker run -d -p 80:8080 \
--name=local-iris \
-e AIP_HTTP_PORT=8080 \
-e AIP_HEALTH_ROUTE=/health \
-e AIP_PREDICT_ROUTE=/predict \
-e AIP_STORAGE_URI={BUCKET_NAME}/{MODEL_ARTIFACT_DIR} \
-e GOOGLE_APPLICATION_CREDENTIALS=credentials.json \
{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
!curl localhost/health
!curl -X POST \
-d @instances.json \
-H "Content-Type: application/json; charset=utf-8" \
localhost/predict
!docker stop local-iris
Explanation: Run and test the container locally (optional)
Run the container locally in detached mode and provide the environment variables that the container requires. These env vars will be provided to the container by Vertex Prediction once deployed. Test the /health and /predict routes, then stop the running image.
End of explanation
!gcloud beta artifacts repositories create {REPOSITORY} \
--repository-format=docker \
--location=$REGION
!gcloud auth configure-docker {REGION}-docker.pkg.dev
!docker push {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
Explanation: Push the container to artifact registry
Configure Docker to access Artifact Registry. Then push your container image to your Artifact Registry repository.
End of explanation
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=REGION)
model = aiplatform.Model.upload(
display_name=MODEL_DISPLAY_NAME,
artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}",
serving_container_image_uri=f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}",
)
Explanation: Deploy to Vertex AI
Use the Python SDK to upload and deploy your model.
Upload the custom container model
End of explanation
endpoint = model.deploy(machine_type="n1-standard-4")
Explanation: Deploy the model on Vertex AI
After this step completes, the model is deployed and ready for online prediction.
End of explanation
endpoint.predict(instances=[[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]])
Explanation: Send predictions
Using Python SDK
End of explanation
ENDPOINT_ID = endpoint.name
! curl \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
-d @instances.json \
https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict
Explanation: Using REST
End of explanation
!gcloud beta ai endpoints predict $ENDPOINT_ID \
--region=$REGION \
--json-request=instances.json
Explanation: Using gcloud CLI
End of explanation
# Undeploy model and delete endpoint
endpoint.delete(force=True)
# Delete the model resource
model.delete()
# Delete the container image from Artifact Registry
!gcloud artifacts docker images delete \
--quiet \
--delete-tags \
{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
9,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inferential Statistics
Let's say you have collected the height of 1,000 people living in Hong Kong. The mean of their height would be descriptive statistics, but that mean does not by itself tell us the average height of the whole of Hong Kong. Here, inferential statistics will help us in determining what the average height of the whole of Hong Kong would be, which is described in depth in this chapter.
Inferential statistics is all about describing the larger picture of the analysis with a limited set of data and deriving conclusions from it.
Distributions Types
Normal Distribution
Most common distribution
"Gaussian curve", "bell curve" other names.
The numbers in the plot are the standard deviation numbers from the mean, which is zero.
A normal distribution from a binomial distribution
Step1: Poisson Distribution
Models the number of independent occurrences in a fixed interval.
Used for count-based distributions.
$$
f(k;\lambda)=\Pr(X = k)=\frac{\lambda^k e^{-\lambda}}{k!}
$$
Here, e is Euler's number, k is the number of occurrences for which the probability is going to be determined, and lambda is the mean number of occurrences in the interval.
Example
Step2: z-score
Expresses a value of a distribution in terms of standard deviations from the mean.
$$
z = \frac{X - \mu}{\sigma}
$$
Here, X is the value in the distribution, μ is the mean of the distribution, and σ is the
standard deviation of the distribution.
Example
Step3: The score of each student can be converted to a z-score using the following functions
Step4: So, a student with a score of 60 out of 100 has a z-score of 1.334. To make more sense of the z-score, we'll use the standard normal table.
This table helps in determining the probability of a score.
We would like to know what the probability of getting a score above 60 would be.
The standard normal table can help us in determining the probability of the occurrence of the score, but we do not have to perform the cumbersome task of finding the value by looking through the table and finding the probability. This task is made simple by the cdf function, which is the cumulative distribution function
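The code cell for this step is not shown in this excerpt; a minimal sketch of the cdf call with SciPy (the z-score of 1.334 comes from the previous step) could be:

```python
from scipy import stats

z = 1.334  # z-score of a 60-mark student, from the previous step
# P(score > 60) = 1 - P(Z <= 1.334)
p_above = 1 - stats.norm.cdf(z)
print(round(p_above, 2))  # prints 0.09
```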
Step5: The cdf function gives the probability of getting values up to the z-score of 1.334, and doing a minus one of it will give us the probability of getting a z-score, which is above it. In other words, 0.09 is the probability of getting marks above 60.
Let's ask another question, "how many students made it to the top 20% of the class?"
Now, to get the z-score at which the top 20% score marks, we can use the ppf function in SciPy
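A sketch of the ppf call (the percent point function is the inverse of the cdf):

```python
from scipy import stats

# z-score below which 80% of the distribution lies; scores above it
# fall in the top 20%
z_top20 = stats.norm.ppf(0.80)
print(round(z_top20, 2))  # prints 0.84
```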
Step6: The preceding output shows that 0.84 is the z-score above which the top 20% of marks lie; converting this z-score back to a mark is done as follows
Step7: We multiply the z-score with the standard deviation and then add the result with the mean of the distribution. This helps in converting the z-score to a value in the distribution. The 55.83 marks means that students who have marks more than this are in the top 20% of the distribution.
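The conversion itself can be sketched as follows; the class mean and standard deviation here are assumed values for illustration only (the notebook's actual figures are not shown in this excerpt), chosen so the cutoff lands near the 55.83 marks mentioned above:

```python
from scipy import stats

# Assumed class statistics -- illustrative only, not the notebook's data
mean, std = 48.7, 8.5

z_top20 = stats.norm.ppf(0.80)
mark_cutoff = mean + z_top20 * std  # convert the z-score back to a mark
print(round(mark_cutoff, 2))
```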
The z-score is an essential concept in statistics, which is widely used. Now you can understand that it is basically used in standardizing any distribution so that it can be compared or inferences can be derived from it.
### p-value
A p-value is the probability of obtaining a result at least as extreme as the one observed, under the assumption that the null hypothesis is true.
If the p-value is equal to or less than the significance level (α), then the null hypothesis is inconsistent and it needs to be rejected.
Let's understand this concept with an example where the null hypothesis is that it is common for students to score 68 marks in mathematics.
Let's define the significance level at 5%. If the p-value is less than 5%, then the null hypothesis is rejected and it is not common to score 68 marks in mathematics.
Let's get the z-score of 68 marks
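A sketch of this step; the mean and standard deviation of the marks are assumed values for illustration:

```python
from scipy import stats

# Assumed distribution of marks -- illustrative values only
mean, std = 60, 3

z = (68 - mean) / std
p_value = 1 - stats.norm.cdf(z)  # one-tailed p-value
print(round(z, 2), round(p_value, 4))
```

Here the p-value comes out well below 5%, so the null hypothesis would be rejected under these assumed parameters.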
Step8:
Step9: One-tailed and two-tailed tests
The example in the previous section was an instance of a one-tailed test where the null hypothesis is rejected or accepted based on one direction of the normal distribution.
In a two-tailed test, both the tails of the null hypothesis are used to test the hypothesis.
In a two-tailed test, when a significance level of 5% is used, then it is distributed equally in the both directions, that is, 2.5% of it in one direction and 2.5% in the other direction.
Let's understand this with an example. The mean score of the mathematics exam at a national level is 60 marks and the standard deviation is 3 marks.
The mean marks of a class are 53. The null hypothesis is that the mean marks of the class are similar to the national average. Let's test this hypothesis by first getting the z-score 60
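Treating the class mean as a single observation against the national distribution, as this simple example does, the two-tailed test can be sketched as:

```python
from scipy import stats

z = (53 - 60) / 3.0  # class mean vs. national mean and standard deviation
# two-tailed p-value: probability mass in both tails
p_two_tailed = 2 * stats.norm.cdf(-abs(z))
print(round(z, 2), round(p_two_tailed, 4))
```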
Step10: Type 1 and Type 2 errors
Type 1 error is a type of error that occurs when there is a rejection of the null hypothesis when it is actually true. This kind of error is also called an error of the first kind and is equivalent to false positives.
Let's understand this concept using an example. There is a new drug that is being developed and it needs to be tested on whether it is effective in combating diseases. The null hypothesis is that it is not effective in combating diseases.
The significance level is kept at 5% so that the null hypothesis can be accepted confidently 95% of the time. However, 5% of the time we will reject the null hypothesis even though it is true, which means that even though the drug is ineffective, it is assumed to be effective.
The Type 1 error is controlled by controlling the significance level, which is alpha. Alpha is the highest probability to have a Type 1 error. The lower the alpha, the lower will be the Type 1 error.
The Type 2 error is the kind of error that occurs when we do not reject a null hypothesis that is false. This error is also called the error of the second kind and is equivalent to a false negative.
This kind of error occurs in the drug scenario when the drug is assumed to be ineffective, but it is actually effective.
These errors can be controlled one at a time. If one of the errors is lowered, then the other one increases. It depends on the use case and the problem statement that the analysis is trying to address, and depending on it, the appropriate error should reduce. In the case of this drug scenario, typically, a Type 1 error should be lowered because it is better to ship a drug that is confidently effective.
Confidence Interval
A confidence interval is a type of interval estimate for a population parameter. It helps in determining the interval within which the population mean can be expected to lie.
Let's try to understand this concept by using an example. Let's take the height of every man in Kenya and determine with 95% confidence interval the average of height of Kenyan men at a national level.
Let's take 50 men and their height in centimeters
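The notebook lists 50 measured heights at this point; since the original list is not shown in this excerpt, the sketch below simulates comparable data around 183 cm:

```python
import numpy as np

rng = np.random.RandomState(0)
# 50 simulated heights in centimeters -- illustrative, not the original data
height_data = np.round(rng.normal(183.4, 10, 50), 1)
print(round(height_data.mean(), 1))
```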
Step11: So, the average height of a man from the sample is 183.4 cm.
To determine the confidence interval, we'll now define the standard error of the mean.
The standard error of the mean is the expected deviation of the sample mean from the population mean. It is defined using the following formula, where σ is the sample's standard deviation and n is the sample size:
$$
SE_{\bar{x}} = \frac{\sigma}{\sqrt{n}}
$$
Step12: So, there is a standard error of the mean of 1.38 cm. The lower and upper limit of the confidence interval can be determined by using the following formula, where $\bar{x}$ is the sample mean and 1.96 is the z-score for a 95% confidence level:
$$
\bar{x} \pm 1.96 \times SE_{\bar{x}}
$$
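A sketch of computing the standard error of the mean and the 95% confidence limits with SciPy (the heights here are simulated, as the original list is not shown):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
heights = rng.normal(183.4, 9.8, 50)  # simulated sample, for illustration
sem = stats.sem(heights)              # standard error of the mean
# 95% confidence interval for the population mean
lower = heights.mean() - 1.96 * sem
upper = heights.mean() + 1.96 * sem
print(round(lower, 1), round(upper, 1))
```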
Step13: You can observe that the mean ranges from 180 to 187 cm when we simulated the average height of 50 sample men, which was taken 30 times.
Let's see what happens when we sample 1000 men and repeat the process 30 times
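The repeated-sampling experiment can be sketched like this (simulated data, for illustration):

```python
import numpy as np

rng = np.random.RandomState(1)
# 30 repetitions of sampling 1,000 men and recording each sample mean
sample_means = [rng.normal(183.0, 9.8, 1000).mean() for _ in range(30)]
print(round(min(sample_means), 1), round(max(sample_means), 1))
```

With 1,000 men per sample, the sample means cluster tightly around the population mean, illustrating the shrinking standard error.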
Step14: As you can see, the height varies from 182.4 cm and to 183.5 cm. What does this mean?
It means that as the sample size increases, the standard error of the mean decreases, which also means that the confidence interval becomes narrower, and we can tell with certainty the interval that the population mean would lie on.
Correlation
In statistics, correlation defines the similarity between two random variables. The most commonly used correlation is the Pearson correlation, and it is defined by the following:
$$
r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\sqrt{\sum_{i}(y_i - \bar{y})^2}}
$$
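A sketch with the pearsonr function; the horsepower and mileage values below are taken from the well-known mtcars data as a stand-in for the notebook's cars dataset:

```python
from scipy import stats

hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123]
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2]
r, p_value = stats.pearsonr(hp, mpg)
print(round(r, 2), round(p_value, 4))
```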
Step15: The first value of the output gives the correlation between the horsepower and the mileage
The second value gives the p-value.
So, the first value tells us that it is highly negatively correlated and the p-value tells us that there is significant correlation between them
Step16: Let's look into another correlation called the Spearman correlation. The Spearman correlation applies to the rank order of the values and so it provides a monotonic relation between the two distributions. It is useful for ordinal data (data that has an order, such as movie ratings or grades in class) and is not affected by outliers.
Let's get the Spearman correlation between the miles per gallon and horsepower. This can be achieved using the spearmanr() function in the SciPy package
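A sketch with the spearmanr function on the same stand-in data:

```python
from scipy import stats

hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123]
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2]
rho, p_value = stats.spearmanr(hp, mpg)
print(round(rho, 2), round(p_value, 4))
```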
Step17: We can see that the Spearman correlation is -0.89 and the p-value is significant.
Let's do an experiment in which we introduce a few outlier values in the data and see how the Pearson and Spearman correlation gets affected
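One way to sketch the experiment: add a leverage point with extreme horsepower but middling mileage, which distorts the value-based Pearson coefficient far more than the rank-based Spearman coefficient (stand-in data, not the notebook's):

```python
from scipy import stats

hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123]
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2]
# introduce an outlier: extreme horsepower but an unremarkable mileage
hp_out = hp + [700]
mpg_out = mpg + [20.0]
r_pearson = stats.pearsonr(hp_out, mpg_out)[0]
r_spearman = stats.spearmanr(hp_out, mpg_out)[0]
print(round(r_pearson, 2), round(r_spearman, 2))
```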
Step18: From the plot, you can clearly make out the outlier values. Let's see how the correlations get affected for both the Pearson and Spearman correlation
Step19: We can clearly see that the Pearson correlation has been drastically affected by the outliers, dropping in magnitude from a correlation of 0.89 to 0.47.
The Spearman correlation did not get affected much as it is based on the order rather than the actual value in the data.
Z-test vs T-test
We have already done a few Z-tests before where we validated our null hypothesis.
A T-distribution is similar to a Z-distribution: it is centered at zero and has a basic bell shape, but it is shorter and flatter around the center than the Z-distribution.
The T-distribution's standard deviation is usually proportionally larger than the Z-distribution's, which is why you see the fatter tails on each side.
The T-distribution is usually used to analyze a population when the sample size is small.
The Z-test is used to compare the population mean against a sample or compare the population mean of two distributions with a sample size greater than 30. An example of a Z-test would be comparing the heights of men from different ethnicity groups.
The T-test is used to compare the population mean against a sample, or compare the population mean of two distributions with a sample size less than 30, and when you don't know the population's standard deviation.
Let's do a T-test on two classes that are given a mathematics test and have 10 students in each class
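A sketch with the ttest_ind function; the scores below are made-up, for illustration:

```python
from scipy import stats

class1 = [15, 18, 21, 19, 17, 20, 22, 16, 18, 19]  # illustrative scores
class2 = [25, 28, 26, 27, 30, 29, 24, 26, 28, 27]
t_stat, p_value = stats.ttest_ind(class1, class2)
print(round(t_stat, 2), round(p_value, 6))
```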
Step20: The first value in the output is the calculated t-statistic, whereas the second value is the p-value, and the p-value shows that the two distributions are not identical.
The F distribution
The F distribution is also known as Snedecor's F distribution or the Fisher–Snedecor distribution.
An F statistic is given by the following formula, where $s_1^2$ and $s_2^2$ are the variances of the first and second samples:
$$
f = \frac{s_1^2}{s_2^2}
$$
Step21: The null hypothesis in the chi-square test is that the observed value is similar to the
expected value.
The chi-square can be performed using the chisquare function in the SciPy package
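A sketch with the chisquare function; the observed and expected counts are illustrative:

```python
from scipy import stats

expected = [20, 20, 20, 20, 20]
observed = [18, 21, 22, 19, 20]  # totals match, as chisquare requires
chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(chi2, round(p_value, 4))
```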
Step22: The first value is the chi-square value and the second value is the p-value, which is very high. This means that the null hypothesis is valid and the observed value is similar to the expected value.
The chi-square test of independence is a statistical test used to determine whether two categorical variables are independent of each other or not.
Let's take the following example to see whether there is a preference for a book based on the gender of people reading it.
The Chi-Square test of independence can be performed using the chi2_contingency function in the SciPy package
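A sketch with the chi2_contingency function; the gender-by-genre counts are illustrative:

```python
import numpy as np
from scipy import stats

# rows: gender, columns: book genre (illustrative counts)
table = np.array([[250, 200, 50],
                  [100, 250, 300]])
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(round(chi2, 1), p_value, dof)
```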
Step23: The first value is the chi-square value,
The second value is the p-value, which is very small, and means that there is an
association between the gender of people and the genre of the book they read.
The third value is the degrees of freedom.
The fourth value, which is an array, is the expected frequencies.
Anova
Analysis of Variance (ANOVA) is a statistical method used to test differences between two or more means. This test basically compares the means between groups and determines whether any of these means are significantly different from each other | Python Code:
# Calling the binom module from scipy stats package
from scipy.stats import binom
# Plotting Function
import matplotlib.pyplot as plt
%matplotlib inline
x = list(range(7))
n, p = 6, 0.5
rv = binom(n, p)
plt.vlines(x, 0, rv.pmf(x), colors='r', linestyles='-', lw=1, label='Probability')
plt.legend(loc='best', frameon=False)
plt.xlabel("No. of instances")
plt.ylabel("Probability")
plt.show()
x = list(range(1001))
n, p = 1000, 0.4
rv = binom(n, p)
plt.vlines(x, 0, rv.pmf(x), colors='g', linestyles='-', lw=1, label='Probability')
plt.legend(loc='best', frameon=True)
plt.xlabel("No. of instances")
plt.ylabel("Probability")
plt.show()
Explanation: Inferential Statistics
Let's say you have collected the height of 1,000 people living in Hong Kong. The mean of their height would be descriptive statistics, but that mean does not by itself tell us the average height of the whole of Hong Kong. Here, inferential statistics will help us in determining what the average height of the whole of Hong Kong would be, which is described in depth in this chapter.
Inferential statistics is all about describing the larger picture of the analysis with a limited set of data and deriving conclusions from it.
Distributions Types
Normal Distribution
Most common distribution
Also known as the "Gaussian curve" or the "bell curve".
The numbers in the plot are the standard deviation numbers from the mean, which is zero.
A normal distribution from a binomial distribution:
Let's take a coin and flip it. The probability of getting a head or a tail is 50%. If you take the same coin and flip it six times, the probability of getting a head three times can be computed using the following formula:
$$
P(x) = \frac{n!}{x!(n-x)!}p^{x}q^{n-x}
$$
In the preceding formula, n is the number of times the coin is flipped, p is the probability of success, q is (1 - p), which is the probability of failure, and x is the number of successes desired.
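The formula above can be checked directly against SciPy; a minimal sketch (assuming SciPy is installed) that computes the probability of three heads in six flips both by hand and with `binom.pmf`:

```python
from math import factorial

from scipy.stats import binom

n, p, x = 6, 0.5, 3  # six flips, fair coin, three successes desired

# closed-form binomial probability: n! / (x!(n-x)!) * p^x * q^(n-x)
manual = factorial(n) / (factorial(x) * factorial(n - x)) * p**x * (1 - p)**(n - x)

# the same probability from scipy
scipy_pmf = binom.pmf(x, n, p)
```

Both values come out to 0.3125, that is, about a 31% chance of exactly three heads.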
End of explanation
from scipy.stats import bernoulli
bernoulli.rvs(0.7, size=100)
Explanation: Poisson Distribution
Independent interval occurrences in an interval.
Used for count-based distributions.
$$
f(k;\lambda)=\Pr(X = k)=\frac{\lambda^k e^{-\lambda}}{k!}
$$
Here, e is the Euler's number, k is the number of occurrences for which the probability is going to be determined, and lambda is the mean number of occurrences.
Example:
Let's understand this with an example. The number of cars that pass through a bridge in an hour is 20. What would be the probability of 23 cars passing through the bridge in an hour?
```Python
from scipy.stats import poisson
rv = poisson(20)
rv.pmf(23)
Result: 0.066881473662401172
```
With the Poisson function, we define the mean value, which is 20 cars. The rv.pmf function gives the probability, which is around 6%, that 23 cars will pass the bridge.
Bernoulli Distribution
We can perform an experiment with two possible outcomes: success or failure.
Success has a probability of p, and failure has a probability of 1 - p. A random variable that takes a 1 value in case of a success and 0 in case of failure is called a Bernoulli distribution. The probability distribution function can be written as:
$$
P(n)=\begin{cases}1-p & \text{for } n = 0\\ p & \text{for } n = 1\end{cases}
$$
It can also be written like this:
$$
P(n)=p^n(1-p)^{1-n}
$$
The distribution function can be written like this:
$$
D(n) = \begin{cases}1-p & \text{for } n=0\\ 1 & \text{for } n=1\end{cases}
$$
Example: Voting in an election is a good example of the Bernoulli distribution. A Bernoulli distribution can be generated using the bernoulli.rvs() function of the SciPy package.
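As a quick check of the probability function above, `bernoulli.pmf` returns p for n = 1 and 1 - p for n = 0; a small sketch with a hypothetical p of 0.7:

```python
from scipy.stats import bernoulli

p = 0.7
p_success = bernoulli.pmf(1, p)  # P(n = 1) = p
p_failure = bernoulli.pmf(0, p)  # P(n = 0) = 1 - p
```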
End of explanation
import numpy as np
class_score = np.random.normal(50, 10, 60).round()
plt.hist(class_score, 30, normed=True) # Number of breaks is 30
plt.show()
Explanation: z-score
Expresses the value of a distribution in std with respect to mean.
$$
z = \frac{X - \mu}{\sigma}
$$
Here, X is the value in the distribution, μ is the mean of the distribution, and σ is the
standard deviation of the distribution.
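The formula translates directly into code; a minimal sketch with hypothetical numbers (a score of 60 in a distribution with mean 50 and standard deviation 10):

```python
X, mu, sigma = 60.0, 50.0, 10.0  # hypothetical score, mean, and standard deviation
z = (X - mu) / sigma  # 1.0, i.e., one standard deviation above the mean
```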
Example: A classroom has 60 students in it and they have just got their mathematics examination score. We simulate the score of these 60 students with a normal distribution using the following command:
End of explanation
from scipy import stats
stats.zscore(class_score)
Explanation: The score of each student can be converted to a z-score using the following functions:
End of explanation
prob = 1 - stats.norm.cdf(1.334)
prob
Explanation: So, a student with a score of 60 out of 100 has a z-score of 1.334. To make more sense of the z-score, we'll use the standard normal table.
This table helps in determining the probability of a score.
We would like to know what the probability of getting a score above 60 would be.
The standard normal table can help us in determining the probability of the occurrence of the score, but we do not have to perform the cumbersome task of finding the value by looking through the table and finding the probability. This task is made simple by the cdf function, which is the cumulative distribution function:
End of explanation
stats.norm.ppf(0.80)
Explanation: The cdf function gives the probability of getting values up to the z-score of 1.334, and subtracting this from one gives us the probability of getting a z-score above it. In other words, 0.09 is the probability of getting marks above 60.
Let's ask another question, "how many students made it to the top 20% of the class?"
Now, to get the z-score at which the top 20% score marks, we can use the ppf function in SciPy:
End of explanation
(0.84 * class_score.std()) + class_score.mean()
Explanation: The z-score for the preceding output that determines whether the top 20% marks are at 0.84 is as follows:
End of explanation
zscore = ( 68 - class_score.mean() ) / class_score.std()
zscore
Explanation: We multiply the z-score with the standard deviation and then add the result to the mean of the distribution. This converts the z-score to a value in the distribution. A score of 55.83 means that students who scored more than this are in the top 20% of the distribution.
The z-score is an essential concept in statistics, which is widely used. Now you can understand that it is basically used in standardizing any distribution so that it can be compared or inferences can be derived from it.
p-value
A p-value is the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true.
If the p-value is equal to or less than the significance level (α), then the null hypothesis is rejected.
Let's understand this concept with an example where the null hypothesis is that it is common for students to score 68 marks in mathematics.
Let's define the significance level at 5%. If the p-value is less than 5%, then the null hypothesis is rejected and it is not common to score 68 marks in mathematics.
Let's get the z-score of 68 marks:
End of explanation
prob = 1 - stats.norm.cdf(zscore)
prob
Explanation:
End of explanation
zscore = (53 - 60)/3.0  # class mean 53 vs. national mean 60, std 3
zscore
prob = stats.norm.cdf(zscore)
prob
Explanation: One-tailed and two-tailed tests
The example in the previous section was an instance of a one-tailed test where the null hypothesis is rejected or accepted based on one direction of the normal distribution.
In a two-tailed test, both tails of the distribution are used to test the hypothesis.
In a two-tailed test, when a significance level of 5% is used, then it is distributed equally in the both directions, that is, 2.5% of it in one direction and 2.5% in the other direction.
Let's understand this with an example. The mean score of the mathematics exam at a national level is 60 marks and the standard deviation is 3 marks.
The mean marks of a class are 53. The null hypothesis is that the mean marks of the class are similar to the national average. Let's test this hypothesis by first getting the z-score of the class mean of 53:
End of explanation
height_data = np.array([ 186.0, 180.0, 195.0, 189.0, 191.0,
177.0, 161.0, 177.0, 192.0, 182.0,
185.0, 192.0, 173.0, 172.0, 191.0,
184.0, 193.0, 182.0, 190.0, 185.0,
181.0,188.0, 179.0, 188.0, 170.0, 179.0,
180.0, 189.0, 188.0, 185.0, 170.0,
197.0, 187.0,182.0, 173.0, 179.0,184.0,
177.0, 190.0, 174.0, 203.0, 206.0, 173.0,
169.0, 178.0,201.0, 198.0, 166.0,171.0, 180.0])
plt.hist(height_data, 30, normed=True, color='r')
plt.show()
# The mean of the distribution
height_data.mean()
Explanation: Type 1 and Type 2 errors
Type 1 error is a type of error that occurs when there is a rejection of the null hypothesis when it is actually true. This kind of error is also called an error of the first kind and is equivalent to false positives.
Let's understand this concept using an example. There is a new drug that is being developed and it needs to be tested on whether it is effective in combating diseases. The null hypothesis is that it is not effective in combating diseases.
The significance level is kept at 5% so that the null hypothesis can be accepted confidently 95% of the time. However, 5% of the time, we'll reject the null hypothesis although it should have been accepted, which means that even though the drug is ineffective, it is assumed to be effective.
The Type 1 error is controlled by controlling the significance level, which is alpha. Alpha is the highest probability to have a Type 1 error. The lower the alpha, the lower will be the Type 1 error.
The Type 2 error is the kind of error that occurs when we do not reject a null hypothesis that is false. This error is also called the error of the second kind and is equivalent to a false negative.
This kind of error occurs in a drug scenario when the drug is assumed to be ineffective but is actually it is effective.
These errors can be controlled one at a time. If one of the errors is lowered, then the other one increases. It depends on the use case and the problem statement that the analysis is trying to address, and depending on it, the appropriate error should reduce. In the case of this drug scenario, typically, a Type 1 error should be lowered because it is better to ship a drug that is confidently effective.
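The claim that alpha caps the Type 1 error rate can be illustrated with a small simulation; a sketch (assuming NumPy and SciPy) in which both samples are drawn from the same population, so the null hypothesis is true and every rejection at the 5% level is a false positive:

```python
import numpy as np
from scipy import stats

np.random.seed(1)
n_trials = 2000
false_positives = 0
for _ in range(n_trials):
    # both samples come from the same N(100, 15) population, so H0 is true
    a = np.random.normal(100, 15, 30)
    b = np.random.normal(100, 15, 30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

type1_rate = false_positives / n_trials  # hovers around alpha = 0.05
```

Lowering the 0.05 threshold lowers this rate, at the cost of more Type 2 errors.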
Confidence Interval
A confidence interval is an interval estimate of a population parameter. The confidence interval helps in determining the interval in which the population mean lies.
Let's try to understand this concept by using an example. Let's take the height of every man in Kenya and determine with 95% confidence interval the average of height of Kenyan men at a national level.
Let's take 50 men and their height in centimeters:
End of explanation
stats.sem(height_data)
Explanation: So, the average height of a man from the sample is 183.4 cm.
To determine the confidence interval, we'll now define the standard error of the mean.
The standard error of the mean is the standard deviation of the sampling distribution of the sample mean. It is defined using the following formula:
$$
SE_{\overline{x}} = \frac{s}{\sqrt{n}}
$$
Here, s is the standard deviation of the sample, and n is the number of elements of the sample.
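The same value can also be computed by hand from the formula; a small sketch with a hypothetical sample, checked against `stats.sem`:

```python
import numpy as np
from scipy import stats

sample = np.array([183.0, 171.0, 190.0, 178.0, 185.0, 176.0, 192.0, 180.0])

# SE = s / sqrt(n), using the sample standard deviation (ddof=1)
manual_se = sample.std(ddof=1) / np.sqrt(len(sample))
scipy_se = stats.sem(sample)  # same value
```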
This can be calculated using the sem() function of the SciPy package:
End of explanation
average_height = []
for i in range(30):
    # Create a sample of 50 with mean 183 and standard deviation 10
    sample50 = np.random.normal(183, 10, 50).round()
    # Add the mean on sample of 50 into average_height list
    average_height.append(sample50.mean())
# Plot it with 10 bars and normalization
plt.hist(average_height, 10, normed=True)
plt.show()
Explanation: So, there is a standard error of the mean of 1.38 cm. The lower and upper limit of the confidence interval can be determined by using the following formula:
Upper/Lower limit = mean(height) ± z × SE_mean(x)
For the upper limit:
183.24 + (1.96 * 1.38) = 185.94
For the lower limit:
183.24 - (1.96 * 1.38) = 180.53
A z-score of 1.96 covers the central 95% of the area in the normal distribution.
We can confidently say that the population mean lies between 180.53 cm and 185.94 cm of height.
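SciPy can produce the same interval directly from the mean and the standard error; a sketch using the numbers above (mean 183.24 cm, standard error 1.38 cm):

```python
from scipy import stats

mean_height = 183.24
se_mean = 1.38

# 95% confidence interval for the population mean
lower, upper = stats.norm.interval(0.95, loc=mean_height, scale=se_mean)
# lower is about 180.54 and upper about 185.94, matching the hand calculation
```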
New Example: Let's assume we take a sample of 50 people, record their height, and then repeat this process 30 times. We can then plot the averages of each sample and observe the distribution.
End of explanation
average_height = []
for i in range(30):
    # Create a sample of 1000 with mean 183 and standard deviation 10
    sample1000 = np.random.normal(183, 10, 1000).round()
    average_height.append(sample1000.mean())
plt.hist(average_height, 10, normed=True)
plt.show()
Explanation: You can observe that the mean ranges from 180 to 187 cm when we simulated the average height of 50 sample men, which was taken 30 times.
Let's see what happens when we sample 1000 men and repeat the process 30 times:
End of explanation
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8,
16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4, 33.9, 21.5, 15.5,
15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,19.7, 15.0, 21.4]
hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 180,
205, 215, 230, 66, 52, 65, 97, 150, 150, 245, 175, 66, 91, 113, 264,
175, 335, 109]
stats.pearsonr(mpg,hp)
Explanation: As you can see, the height varies from 182.4 cm to 183.5 cm. What does this mean?
It means that as the sample size increases, the standard error of the mean decreases, which also means that the confidence interval becomes narrower, and we can state with more certainty the interval in which the population mean lies.
Correlation
In statistics, correlation defines the similarity between two random variables. The most commonly used correlation is the Pearson correlation and it is defined by the following:
$$
\rho_{X,Y} = \frac{cov(X,Y)}{\sigma_{x}\sigma_{y}} = \frac{E[(X - \mu_{X})(Y - \mu_{Y})]}{\sigma_{x}\sigma_{y}}
$$
The preceding formula defines the Pearson correlation as the covariance between X and Y divided by the product of their standard deviations, or, equivalently, as the expected value of the product of the deviations of the two variables from their means, divided by the product of the standard deviations. Let's understand this with an example. Let's take the mileage and horsepower of various cars and see if there is a relation between the two. This can be achieved using the pearsonr function in the SciPy package:
End of explanation
plt.scatter(mpg, hp, color='r')
plt.show()
Explanation: The first value of the output gives the correlation between the horsepower and the mileage
The second value gives the p-value.
So, the first value tells us that it is highly negatively correlated and the p-value tells us that there is significant correlation between them:
End of explanation
stats.spearmanr(mpg, hp)
Explanation: Let's look into another correlation called the Spearman correlation. The Spearman correlation applies to the rank order of the values and so it provides a monotonic relation between the two distributions. It is useful for ordinal data (data that has an order, such as movie ratings or grades in class) and is not affected by outliers.
Let's get the Spearman correlation between the miles per gallon and horsepower. This can be achieved using the spearmanr() function in the SciPy package:
End of explanation
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8,
19.2, 17.8, 16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4,
33.9, 21.5, 15.5, 15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,
19.7, 15.0, 21.4, 120, 3]
hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180,
180, 180, 205, 215, 230, 66, 52, 65, 97, 150, 150, 245,
175, 66, 91, 113, 264, 175, 335, 109, 30, 600]
plt.scatter(mpg, hp)
plt.show()
Explanation: We can see that the Spearman correlation is -0.89 and the p-value is significant.
Let's do an experiment in which we introduce a few outlier values in the data and see how the Pearson and Spearman correlation gets affected:
End of explanation
stats.pearsonr(mpg, hp)
stats.spearmanr(mpg, hp)
Explanation: From the plot, you can clearly make out the outlier values. Let's see how the correlations get affected for both the Pearson and Spearman correlations:
End of explanation
class1_score = np.array([45.0, 40.0, 49.0, 52.0, 54.0, 64.0, 36.0, 41.0, 42.0, 34.0])
class2_score = np.array([75.0, 85.0, 53.0, 70.0, 72.0, 93.0, 61.0, 65.0, 65.0, 72.0])
stats.ttest_ind(class1_score,class2_score)
Explanation: We can clearly see that the Pearson correlation has been drastically affected due to the outliers, dropping from a correlation of 0.89 to 0.47.
The Spearman correlation did not get affected much as it is based on the order rather than the actual value in the data.
Z-test vs T-test
We have already done a few Z-tests before where we validated our null hypothesis.
A T-distribution is similar to a Z-distribution: it is centered at zero and has a basic bell shape, but it is shorter and flatter around the center than the Z-distribution.
The T-distribution's standard deviation is usually proportionally larger than the Z-distribution's, which is why you see fatter tails on each side.
The t distribution is usually used to analyze the population when the sample is small.
The Z-test is used to compare the population mean against a sample or compare the population mean of two distributions with a sample size greater than 30. An example of a Z-test would be comparing the heights of men from different ethnicity groups.
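SciPy has no dedicated one-sample z-test function, but one follows in a couple of lines from the normal survival function; a sketch with hypothetical numbers (sample mean 183, population mean 180, known population standard deviation 10, n = 100):

```python
import numpy as np
from scipy import stats

x_bar, mu, sigma, n = 183.0, 180.0, 10.0, 100

z = (x_bar - mu) / (sigma / np.sqrt(n))  # 3.0
p_value = 2 * stats.norm.sf(abs(z))  # two-tailed p-value, about 0.0027
```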
The T-test is used to compare the population mean against a sample, or compare the population mean of two distributions with a sample size less than 30, and when you don't know the population's standard deviation.
Let's do a T-test on two classes that are given a mathematics test and have 10 students in each class:
To perform the T-test, we can use the ttest_ind() function in the SciPy package:
End of explanation
expected = np.array([6,6,6,6,6,6])
observed = np.array([7, 5, 3, 9, 6, 6])
Explanation: The first value in the output is the calculated t-statistic, whereas the second value is the p-value, which shows that the two distributions are not identical.
The F distribution
The F distribution is also known as Snedecor's F distribution or the Fisher–Snedecor distribution.
An f statistic is given by the following formula:
$$
f = \frac{s_1^2/\sigma_1^2}{s_2^2/\sigma_2^2}
$$
Here, $s_1$ is the standard deviation of sample 1 of size $n_1$, $s_2$ is the standard deviation of sample 2 of size $n_2$, $\sigma_1$ is the population standard deviation of sample 1, and $\sigma_2$ is the population standard deviation of sample 2.
The distribution of all the possible values of f statistics is called F distribution. The d1 and d2 represent the degrees of freedom in the following chart:
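When the two population variances are assumed equal, the statistic reduces to the ratio of the sample variances, and SciPy's `f` distribution turns it into a p-value; a minimal sketch with hypothetical samples:

```python
import numpy as np
from scipy import stats

a = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0])
b = np.array([22.0, 29.0, 25.0, 31.0, 27.0, 24.0])

# ratio of sample variances (assuming equal population variances)
f_stat = a.var(ddof=1) / b.var(ddof=1)
dfn, dfd = len(a) - 1, len(b) - 1
p_value = stats.f.sf(f_stat, dfn, dfd)  # upper-tail probability under the F distribution
```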
The chi-square distribution
The chi-square statistics are defined by the following formula:
$$
X^2 = [(n-1)*s^2]/\sigma^2
$$
Here, n is the size of the sample, s is the standard deviation of the sample, and σ is the standard deviation of the population.
If we repeatedly take samples and define the chi-square statistics, then we can form a chi-square distribution, which is defined by the following probability density function:
$$
Y = Y_0 * (X^2)^{(v/2-1)} * e^{-X^2/2}
$$
Here, $Y_0$ is a constant that depends on the number of degrees of freedom, $X^2$ is the chi-square statistic, $v = n - 1$ is the number of degrees of freedom, and e is a constant equal to the base of the natural logarithm system.
$Y_0$ is defined so that the area under the chi-square curve is equal to one.
The Chi-square test can be used to test whether the observed data differs significantly from the expected data. Let's take the example of a dice. The dice is rolled 36 times and the probability that each face should turn upwards is 1/6. So, the expected and observed distribution is as follows:
End of explanation
stats.chisquare(observed,expected)
Explanation: The null hypothesis in the chi-square test is that the observed value is similar to the
expected value.
The chi-square can be performed using the chisquare function in the SciPy package:
End of explanation
men_women = np.array([[100, 120, 60],[350, 200, 90]])
stats.chi2_contingency(men_women)
Explanation: The first value is the chi-square value and the second value is the p-value, which is very high. This means that the null hypothesis is valid and the observed value is similar to the expected value.
The chi-square test of independence is a statistical test used to determine whether two categorical variables are independent of each other or not.
Let's take the following example to see whether there is a preference for a book based on the gender of people reading it.
The Chi-Square test of independence can be performed using the chi2_contingency function in the SciPy package:
End of explanation
country1 = np.array([ 176., 201., 172., 179., 180., 188., 187., 184., 171.,
181., 192., 187., 178., 178., 180., 199., 185., 176.,
207., 177., 160., 174., 176., 192., 189., 187., 183.,
180., 181., 200., 190., 187., 175., 179., 181., 183.,
171., 181., 190., 186., 185., 188., 201., 192., 188.,
181., 172., 191., 201., 170., 170., 192., 185., 167.,
178., 179., 167., 183., 200., 185.])
country2 = np.array([177., 165., 185., 187., 175., 172.,179., 192.,169.,
167., 162., 165., 188., 194., 187., 175., 163., 178.,
197., 172., 175., 185., 176., 171., 172., 186., 168.,
178., 191., 192., 175., 189., 178., 181., 170., 182.,
166., 189., 196., 192., 189., 171., 185., 198., 181.,
167., 184., 179., 178., 193., 179., 177., 181., 174.,
171., 184., 156., 180., 181., 187.])
country3 = np.array([ 191.,173., 175., 200., 190.,191.,185.,190.,184.,190.,
191., 184., 167., 194., 195., 174., 171., 191.,
174., 177., 182., 184., 176., 180., 181., 186., 179.,
176., 186., 176., 184., 194., 179., 171., 174., 174.,
182., 198., 180., 178., 200., 200., 174., 202., 176.,
180., 163., 159., 194., 192., 163., 194., 183., 190.,
186., 178., 182., 174., 178., 182.])
stats.f_oneway(country1,country2,country3)
Explanation: The first value is the chi-square value,
The second value is the p-value, which is very small, and means that there is an
association between the gender of people and the genre of the book they read.
The third value is the degrees of freedom.
The fourth value, which is an array, is the expected frequencies.
Anova
Analysis of Variance (ANOVA) is a statistical method used to test differences between two or more means. This test basically compares the means between groups and determines whether any of these means are significantly different from each other:
$$
H_0 : \mu_1 = \mu_2 = \mu_3 = ... = \mu_k
$$
ANOVA is a test that can tell you which group is significantly different from each other. Let's take the height of men who are from three different countries and see if their heights are significantly different from others:
End of explanation |
9,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST digit recognition using SVC and PCA with RBF in scikit-learn
> Using optimal parameters, fit to BOTH original and deskewed data
Step1: Where's the data?
Step2: How much of the data will we use?
Step3: Read the training images and labels, both original and deskewed
Step4: Read the DESKEWED test images and labels
Step5: Use the smaller, fewer images for testing
Print a sample
Step6: PCA dimensionality reduction
Step7: SVC Parameter Settings
Step8: Fit the training data
Step10: Predict the test set and analyze the result
Step12: Learning Curves
see http | Python Code:
from __future__ import division
import os, time, math
import cPickle as pickle
import matplotlib.pyplot as plt
import numpy as np
import scipy
import csv
from operator import itemgetter
from tabulate import tabulate
from print_imgs import print_imgs # my own function to print a grid of square images
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import RandomizedSearchCV
from sklearn.metrics import classification_report, confusion_matrix
np.random.seed(seed=1009)
%matplotlib inline
#%qtconsole
Explanation: MNIST digit recognition using SVC and PCA with RBF in scikit-learn
> Using optimal parameters, fit to BOTH original and deskewed data
End of explanation
file_path = '../data/'
train_img_deskewed_filename = 'train-images_deskewed.csv'
train_img_original_filename = 'train-images.csv'
test_img_deskewed_filename = 't10k-images_deskewed.csv'
test_img_original_filename = 't10k-images.csv'
train_label_filename = 'train-labels.csv'
test_label_filename = 't10k-labels.csv'
Explanation: Where's the data?
End of explanation
portion = 1.0 # set to less than 1.0 for testing; set to 1.0 to use the entire dataset
Explanation: How much of the data will we use?
End of explanation
# read both trainX files
with open(file_path + train_img_original_filename,'r') as f:
    data_iter = csv.reader(f, delimiter = ',')
    data = [data for data in data_iter]
    trainXo = np.ascontiguousarray(data, dtype = np.float64)

with open(file_path + train_img_deskewed_filename,'r') as f:
    data_iter = csv.reader(f, delimiter = ',')
    data = [data for data in data_iter]
    trainXd = np.ascontiguousarray(data, dtype = np.float64)
# vertically concatenate the two files
trainX = np.vstack((trainXo, trainXd))
trainXo = None
trainXd = None
# read trainY twice and vertically concatenate
with open(file_path + train_label_filename,'r') as f:
    data_iter = csv.reader(f, delimiter = ',')
    data = [data for data in data_iter]
    trainYo = np.ascontiguousarray(data, dtype = np.int8)
    trainYd = np.ascontiguousarray(data, dtype = np.int8)
trainY = np.vstack((trainYo, trainYd)).ravel()
trainYo = None
trainYd = None
data = None
# shuffle trainX & trainY
trainX, trainY = shuffle(trainX, trainY, random_state=0)
# use less data if specified
if portion < 1.0:
    # slice bounds must be integers, so truncate the product
    trainX = trainX[:int(portion * trainX.shape[0])]
    trainY = trainY[:int(portion * trainY.shape[0])]
print("trainX shape: {0}".format(trainX.shape))
print("trainY shape: {0}\n".format(trainY.shape))
print(trainX.flags)
Explanation: Read the training images and labels, both original and deskewed
End of explanation
# read testX
with open(file_path + test_img_deskewed_filename,'r') as f:
    data_iter = csv.reader(f, delimiter = ',')
    data = [data for data in data_iter]
    testX = np.ascontiguousarray(data, dtype = np.float64)

# read testY
with open(file_path + test_label_filename,'r') as f:
    data_iter = csv.reader(f, delimiter = ',')
    data = [data for data in data_iter]
    testY = np.ascontiguousarray(data, dtype = np.int8)
# shuffle testX, testY
testX, testY = shuffle(testX, testY, random_state=0)
# use a smaller dataset if specified
if portion < 1.0:
    # slice bounds must be integers, so truncate the product
    testX = testX[:int(portion * testX.shape[0])]
    testY = testY[:int(portion * testY.shape[0])]
print("testX shape: {0}".format(testX.shape))
print("testY shape: {0}".format(testY.shape))
Explanation: Read the DESKEWED test images and labels
End of explanation
print_imgs(images = trainX,
actual_labels = trainY,
predicted_labels = trainY,
starting_index = np.random.randint(0, high=trainY.shape[0]-36, size=1)[0],
size = 6)
Explanation: Use the smaller, fewer images for testing
Print a sample
End of explanation
t0 = time.time()
pca = PCA(n_components=0.85, whiten=True)
trainX = pca.fit_transform(trainX)
testX = pca.transform(testX)
print("trainX shape: {0}".format(trainX.shape))
print("trainY shape: {0}\n".format(trainY.shape))
print("testX shape: {0}".format(testX.shape))
print("testY shape: {0}".format(testY.shape))
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
Explanation: PCA dimensionality reduction
End of explanation
# default parameters for SVC
# ==========================
default_svc_params = {}
default_svc_params['C'] = 1.0 # penalty
default_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C
# set to 'auto' for unbalanced classes
default_svc_params['gamma'] = 0.0 # Kernel coefficient for 'rbf', 'poly' and 'sigmoid'
default_svc_params['kernel'] = 'rbf' # 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable
# use of 'sigmoid' is discouraged
default_svc_params['shrinking'] = True # Whether to use the shrinking heuristic.
default_svc_params['probability'] = False # Whether to enable probability estimates.
default_svc_params['tol'] = 0.001 # Tolerance for stopping criterion.
default_svc_params['cache_size'] = 200 # size of the kernel cache (in MB).
default_svc_params['max_iter'] = -1 # limit on iterations within solver, or -1 for no limit.
default_svc_params['verbose'] = False
default_svc_params['degree'] = 3 # 'poly' only
default_svc_params['coef0'] = 0.0 # 'poly' and 'sigmoid' only
# set the parameters for the classifier
# =====================================
svc_params = dict(default_svc_params)
svc_params['C'] = 2.9470517025518097
svc_params['gamma'] = 0.015998587196060572
svc_params['cache_size'] = 2000
# create the classifier itself
# ============================
svc_clf = SVC(**svc_params)
Explanation: SVC Parameter Settings
End of explanation
t0 = time.time()
svc_clf.fit(trainX, trainY)
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
Explanation: Fit the training data
End of explanation
target_names = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
predicted_values = svc_clf.predict(testX)
y_true, y_pred = testY, predicted_values
print(classification_report(y_true, y_pred, target_names=target_names))
def plot_confusion_matrix(cm,
                          target_names,
                          title='Proportional Confusion matrix',
                          cmap=plt.cm.Paired):
    """
    given a confusion matrix (cm), make a nice plot
    see the scikit-learn documentation for the original done for the iris dataset
    """
    plt.figure(figsize=(8, 6))
    # normalize each row so cell (i, j) holds the proportion of class i predicted as class j
    plt.imshow(cm / cm.sum(axis=1)[:, np.newaxis], interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(target_names))
    plt.xticks(tick_marks, target_names, rotation=45)
    plt.yticks(tick_marks, target_names)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(testY)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
Explanation: Predict the test set and analyze the result
End of explanation
t0 = time.time()
from sklearn.learning_curve import learning_curve
from sklearn.cross_validation import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    """
    Generate a simple plot of the test and training learning curve.

    Parameters
    ----------
    estimator : object type that implements the "fit" and "predict" methods
        An object of that type which is cloned for each validation.

    title : string
        Title for the chart.

    X : array-like, shape (n_samples, n_features)
        Training vector, where n_samples is the number of samples and
        n_features is the number of features.

    y : array-like, shape (n_samples) or (n_samples, n_features), optional
        Target relative to X for classification or regression;
        None for unsupervised learning.

    ylim : tuple, shape (ymin, ymax), optional
        Defines minimum and maximum yvalues plotted.

    cv : integer, cross-validation generator, optional
        If an integer is passed, it is the number of folds (defaults to 3).
        Specific cross-validation objects can be passed, see
        sklearn.cross_validation module for the list of possible objects

    n_jobs : integer, optional
        Number of jobs to run in parallel (default 1).
    """
    plt.figure(figsize=(8, 6))
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")

    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)

    plt.grid()
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.tight_layout()
    plt.legend(loc="best")
    return plt
C_gamma = "C="+str(np.round(svc_params['C'],4))+", gamma="+str(np.round(svc_params['gamma'],6))
title = "Learning Curves (SVM, RBF, " + C_gamma + ")"
plot_learning_curve(estimator = svc_clf,
title = title,
X = trainX,
y = trainY,
ylim = (0.85, 1.01),
cv = ShuffleSplit(n = trainX.shape[0],
n_iter = 5,
test_size = 0.2,
random_state=0),
n_jobs = 8)
plt.show()
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
Explanation: Learning Curves
see http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html
The score is the model accuracy
The red line shows how well the model fits the data it was trained on:
a high score indicates low bias ... the model does fit the training data
it's not unusual for the red line to start at 1.00 and decline slightly
a low score indicates the model does not fit the training data ... more predictor variables are usually indicated, or a different model
The green line shows how well the model predicts the test data: if it's rising then it means more data to train on will produce better predictions
End of explanation
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0.
Problem:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
network_layout = []
for i in range(3):
network_layout.append(8)
model = Sequential()
inputdim = 4
activation = 'relu'
outputdim = 2
opt='rmsprop'
epochs = 50
#Adding input layer and first hidden layer
model.add(Dense(network_layout[0],
name="Input",
input_dim=inputdim,
kernel_initializer='he_normal',
activation=activation))
#Adding the rest of hidden layer
for numneurons in network_layout[1:]:
model.add(Dense(numneurons,
kernel_initializer = 'he_normal',
activation=activation))
#Adding the output layer
model.add(Dense(outputdim,
name="Output",
kernel_initializer="he_normal",
activation="relu"))
#Compiling the model
model.compile(optimizer=opt,loss='mse',metrics=['mse','mae','mape'])
model.summary()
#Save the model in "export/1"
tms_model = tf.saved_model.save(model,"export/1")
Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module5- Lab8
Step1: A Convenience Function
This convenience method will take care of plotting your test observations, comparing them to the regression line, and displaying the R2 coefficient
Step2: The Assignment
Load up the data here into a variable called X. As usual, do a .describe and a print of your dataset and compare it to the dataset loaded in a text file or in a spread sheet application
Step3: Create your linear regression model here and store it in a variable called model. Don't actually train or do anything else with it yet
Step4: Slice out your data manually (e.g. don't use train_test_split, but actually do the indexing yourself. Set X_train to be year values LESS than 1986, and y_train to be corresponding 'WhiteMale' age values. You might also want to read the note about slicing on the bottom of this document before proceeding
Step5: Train your model then pass it into drawLine with your training set and labels. You can title it 'WhiteMale'. drawLine will output to the console a 2014 extrapolation / approximation for what it believes the WhiteMale's life expectancy in the U.S. will be... given the pre-1986 data you trained it with. It'll also produce a 2030 and 2045 extrapolation
Step6: Print the actual 2014 'WhiteMale' life expectancy from your loaded dataset
Step7: Repeat the process, but instead of for WhiteMale, this time select BlackFemale. Create a slice for BlackFemales, fit your model, and then call drawLine. Lastly, print out the actual 2014 BlackFemale life expectancy
Step8: Lastly, print out a correlation matrix for your entire dataset, and display a visualization of the correlation matrix, just as we described in the visualization section of the course
Python Code:
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot') # Look Pretty
Explanation: DAT210x - Programming with Python for DS
Module5- Lab8
End of explanation
def drawLine(model, X_test, y_test, title):
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(X_test, y_test, c='g', marker='o')
ax.plot(X_test, model.predict(X_test), color='orange', linewidth=1, alpha=0.7)
print("Est 2014 " + title + " Life Expectancy: ", model.predict([[2014]])[0])
print("Est 2030 " + title + " Life Expectancy: ", model.predict([[2030]])[0])
print("Est 2045 " + title + " Life Expectancy: ", model.predict([[2045]])[0])
score = model.score(X_test, y_test)
title += " R2: " + str(score)
ax.set_title(title)
plt.show()
Explanation: A Convenience Function
This convenience method will take care of plotting your test observations, comparing them to the regression line, and displaying the R2 coefficient
End of explanation
# .. your code here ..
Explanation: The Assignment
Load up the data here into a variable called X. As usual, do a .describe and a print of your dataset and compare it to the dataset loaded in a text file or in a spread sheet application:
End of explanation
# .. your code here ..
Explanation: Create your linear regression model here and store it in a variable called model. Don't actually train or do anything else with it yet:
End of explanation
# .. your code here ..
Explanation: Slice out your data manually (e.g. don't use train_test_split, but actually do the indexing yourself. Set X_train to be year values LESS than 1986, and y_train to be corresponding 'WhiteMale' age values. You might also want to read the note about slicing on the bottom of this document before proceeding:
End of explanation
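Since the lab leaves this cell blank, here is a hedged sketch of the boolean-mask slicing it asks for. The frame below is made-up stand-in data (the real dataset's file and column names aren't shown in this chunk), so treat the column names as assumptions:

```python
import pandas as pd

# Made-up stand-in for the lab's life-expectancy table.
X = pd.DataFrame({
    'Year': [1980, 1985, 1990, 2000],
    'WhiteMale': [70.7, 71.8, 72.7, 74.8],
})

# Manual slicing with a boolean mask: keep rows where Year < 1986.
mask = X['Year'] < 1986
X_train = X.loc[mask, ['Year']]     # a list of columns keeps a 2-D shape for .fit()
y_train = X.loc[mask, 'WhiteMale']

print(X_train.shape)  # (2, 1)
```

The same mask pattern applies to the BlackFemale slice later in the lab.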
# .. your code here ..
Explanation: Train your model then pass it into drawLine with your training set and labels. You can title it 'WhiteMale'. drawLine will output to the console a 2014 extrapolation / approximation for what it believes the WhiteMale's life expectancy in the U.S. will be... given the pre-1986 data you trained it with. It'll also produce a 2030 and 2045 extrapolation:
End of explanation
# .. your code here ..
Explanation: Print the actual 2014 'WhiteMale' life expectancy from your loaded dataset
End of explanation
# .. your code here ..
Explanation: Repeat the process, but instead of for WhiteMale, this time select BlackFemale. Create a slice for BlackFemales, fit your model, and then call drawLine. Lastly, print out the actual 2014 BlackFemale life expectancy:
End of explanation
# .. your code here ..
plt.show()
Explanation: Lastly, print out a correlation matrix for your entire dataset, and display a visualization of the correlation matrix, just as we described in the visualization section of the course:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical PyTorch
Step1: The returned GloVe object includes attributes
Step3: Finding closest vectors
Going from word → vector is easy enough, but to go from vector → word takes more work. Here I'm (naively) calculating the distance for each word in the vocabulary, and sorting based on that distance
Step4: This will return a list of (word, distance) tuple pairs. Here's a helper function to print that list
Step5: Now using a known word vector we can see which other vectors are closest
Step6: Word analogies with vector arithmetic
The most interesting feature of a well-trained word vector space is that certain semantic relationships (beyond just close-ness of words) can be captured with regular vector arithmetic.
(image borrowed from a slide from Omer Levy and Yoav Goldberg)
Step7: The classic example
Step8: Now let's explore the word space and see what stereotypes we can uncover
Python Code:
import torch
import torchtext.vocab as vocab
glove = vocab.GloVe(name='6B', dim=100)
print('Loaded {} words'.format(len(glove.itos)))
Explanation: Practical PyTorch: Exploring Word Vectors with GloVe
When working with words, dealing with the huge but sparse domain of language can be challenging. Even for a small corpus, your neural network (or any type of model) needs to support many thousands of discrete inputs and outputs.
Besides the raw number of words, the standard technique of representing words as one-hot vectors (e.g. "the" = [0 0 0 1 0 0 0 0 ...]) does not capture any information about relationships between words.
Word vectors address this problem by representing words in a multi-dimensional vector space. This can bring the dimensionality of the problem from hundreds-of-thousands to just hundreds. Plus, the vector space is able to capture semantic relationships between words in terms of distance and vector arithmetic.
There are a few techniques for creating word vectors. The word2vec algorithm predicts words in a context (e.g. what is the most likely word to appear in "the cat ? the mouse"), while GloVe vectors are based on global counts across the corpus — see How is GloVe different from word2vec? on Quora for some better explanations.
In my opinion the best feature of GloVe is that multiple sets of pre-trained vectors are easily available for download, so that's what we'll use here.
Recommended reading
https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/
https://blog.acolyer.org/2016/04/22/glove-global-vectors-for-word-representation/
https://levyomer.wordpress.com/2014/04/25/linguistic-regularities-in-sparse-and-explicit-word-representations/
Installing torchtext
The torchtext package is not currently on the PIP or Conda package managers, but it's easy to install manually:
git clone https://github.com/pytorch/text pytorch-text
cd pytorch-text
python setup.py install
Loading word vectors
Torchtext includes functions to download GloVe (and other) embeddings
End of explanation
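As a toy illustration of the one-hot point above (not part of the original notebook; the 3-d "embeddings" are invented numbers), one-hot vectors make every pair of distinct words equally dissimilar, while dense vectors can encode that cat is closer to dog than to car:

```python
import numpy as np

vocab = ['cat', 'dog', 'car']

# One-hot: every distinct pair has the same (zero) dot product.
one_hot = np.eye(len(vocab))
print(float(one_hot[0] @ one_hot[1]))  # cat . dog -> 0.0
print(float(one_hot[0] @ one_hot[2]))  # cat . car -> 0.0

# Dense vectors: similarity varies by pair.
dense = {
    'cat': np.array([0.9, 0.1, 0.0]),
    'dog': np.array([0.8, 0.2, 0.1]),
    'car': np.array([0.0, 0.1, 0.9]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(dense['cat'], dense['dog']) > cos(dense['cat'], dense['car']))  # True
```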
def get_word(word):
return glove.vectors[glove.stoi[word]]
Explanation: The returned GloVe object includes attributes:
- stoi string-to-index returns a dictionary of words to indexes
- itos index-to-string returns an array of words by index
- vectors returns the actual vectors. To get a word vector get the index to get the vector:
End of explanation
def closest(vec, n=10):
    """Find the closest words for a given vector"""
all_dists = [(w, torch.dist(vec, get_word(w))) for w in glove.itos]
return sorted(all_dists, key=lambda t: t[1])[:n]
Explanation: Finding closest vectors
Going from word → vector is easy enough, but to go from vector → word takes more work. Here I'm (naively) calculating the distance for each word in the vocabulary, and sorting based on that distance:
Anyone with a suggestion for optimizing this, please let me know!
End of explanation
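On the optimization request above: one option is a single batched distance computation instead of the per-word Python loop. With the tensors loaded earlier, the same idea would be `torch.norm(glove.vectors - vec, dim=1)` followed by a sort; the sketch below uses numpy and a made-up three-word vocabulary so it runs without downloading GloVe:

```python
import numpy as np

def closest_vectorized(vec, vectors, itos, n=10):
    # One batched L2-distance computation over the whole embedding matrix.
    dists = np.linalg.norm(vectors - vec, axis=1)
    best = np.argsort(dists)[:n]
    return [(itos[i], float(dists[i])) for i in best]

# Tiny stand-in vocabulary so the sketch is self-contained.
itos = ['a', 'b', 'c']
vectors = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
result = closest_vectorized(np.array([1.0, 0.0]), vectors, itos, n=2)
print(result)  # [('b', 0.0), ('a', 1.0)]
```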
def print_tuples(tuples):
for tuple in tuples:
print('(%.4f) %s' % (tuple[1], tuple[0]))
Explanation: This will return a list of (word, distance) tuple pairs. Here's a helper function to print that list:
End of explanation
print_tuples(closest(get_word('google')))
Explanation: Now using a known word vector we can see which other vectors are closest:
End of explanation
# In the form w1 : w2 :: w3 : ?
def analogy(w1, w2, w3, n=5, filter_given=True):
print('\n[%s : %s :: %s : ?]' % (w1, w2, w3))
# w2 - w1 + w3 = w4
closest_words = closest(get_word(w2) - get_word(w1) + get_word(w3))
# Optionally filter out given words
if filter_given:
closest_words = [t for t in closest_words if t[0] not in [w1, w2, w3]]
print_tuples(closest_words[:n])
Explanation: Word analogies with vector arithmetic
The most interesting feature of a well-trained word vector space is that certain semantic relationships (beyond just close-ness of words) can be captured with regular vector arithmetic.
(image borrowed from a slide from Omer Levy and Yoav Goldberg)
End of explanation
analogy('king', 'man', 'queen')
Explanation: The classic example:
End of explanation
analogy('man', 'actor', 'woman')
analogy('cat', 'kitten', 'dog')
analogy('dog', 'puppy', 'cat')
analogy('russia', 'moscow', 'france')
analogy('obama', 'president', 'trump')
analogy('rich', 'mansion', 'poor')
analogy('elvis', 'rock', 'eminem')
analogy('paper', 'newspaper', 'screen')
analogy('monet', 'paint', 'michelangelo')
analogy('beer', 'barley', 'wine')
analogy('earth', 'moon', 'sun') # Interesting failure mode
analogy('house', 'roof', 'castle')
analogy('building', 'architect', 'software')
analogy('boston', 'bruins', 'phoenix')
analogy('good', 'heaven', 'bad')
analogy('jordan', 'basketball', 'woods')
Explanation: Now let's explore the word space and see what stereotypes we can uncover:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step2: Inspecting Quantization Errors with Quantization Debugger
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: We can see that the original model has a much higher top-5 accuracy for our
small dataset, while the quantized model has a significant accuracy loss.
Step 1. Debugger preparation
Easiest way to use the quantization debugger is to provide
tf.lite.TFLiteConverter that you have been using to quantize the model.
Step4: Step 2. Running the debugger and getting the results
When you call QuantizationDebugger.run(), the debugger will log differences
between float tensors and quantized tensors for the same op location, and
process them with given metrics.
Step5: The processed metrics can be accessed with
QuantizationDebugger.layer_statistics, or can be dumped to a text file in CSV
format with QuantizationDebugger.layer_statistics_dump().
Step6: For each row in the dump, the op name and index comes first, followed by
quantization parameters and error metrics (including
user-defined error metrics, if any). The resulting CSV file
can be used to pick problematic layers with large quantization error metrics.
With pandas or other data processing libraries, we can inspect detailed
per-layer error metrics.
Step7: Step 3. Data analysis
There are various ways to analyze the resulting data. First, let's add some useful
metrics derived from the debugger's outputs. (scale means the quantization
scale factor for each tensor.)
Range (255.0 * scale)
RMSE / scale (sqrt(mean_squared_error) / scale)
The RMSE / scale is close to 1 / sqrt(12) (~ 0.289) when quantized
distribution is similar to the original float distribution, indicating a good
quantized model. The larger the value, the more likely it is that the layer is not
being quantized well.
Step8: There are many layers with wide ranges, and some layers that have high
RMSE/scale values. Let's get the layers with high error metrics.
Step9: With these layers, you can try selective quantization to see if not quantizing
those layers improves model quality.
Step10: In addition to these, skipping quantization for the first few layers also helps
improving quantized model's quality.
Step11: Selective Quantization
Selective quantization skips quantization for some nodes, so that the
calculation can happen in the original floating-point domain. When correct
layers are skipped, we can expect some model quality recovery at the cost of
increased latency and model size.
However, if you're planning to run quantized models on integer-only accelerators
(e.g. Hexagon DSP, EdgeTPU), selective quantization would cause fragmentation of
the model and would result in slower inference latency mainly caused by data
transfer cost between CPU and those accelerators. To prevent this, you can
consider running
quantization aware training
to keep all the layers in integer while preserving the model accuracy.
Quantization debugger's option accepts denylisted_nodes and denylisted_ops
options for skipping quantization for specific layers, or all instances of
specific ops. Using suspected_layers we prepared from the previous step, we
can use quantization debugger to get a selectively quantized model.
Step12: The accuracy is still lower compared to the original float model, but we have
notable improvement from the whole quantized model by skipping quantization for
~10 layers out of 111 layers.
You can also try not quantizing all ops in the same class. For example, to
skip quantization for all mean ops, you can pass MEAN to denylisted_ops.
Step13: With these techniques, we are able to improve the quantized MobileNet V3 model
accuracy. Next we'll explore advanced techniques to improve the model accuracy
even more.
Advanced usages
With the following features, you can further customize your debugging pipeline.
Custom metrics
By default, the quantization debugger emits five metrics for each float-quant
difference
Step14: The result of model_debug_metrics can be separately seen from
debugger.model_statistics.
Step15: Using (internal) mlir_quantize API to access in-depth features
Note
Step16: Whole model verify mode
The default behavior for the debug model generation is per-layer verify. In this
mode, the input for float and quantize op pair is from the same source (previous
quantized op). Another mode is whole-model verify, where the float and quantize
models are separated. This mode would be useful to observe how the error is
being propagated down the model. To enable it, pass enable_whole_model_verify=True to
convert.mlir_quantize while generating the debug model manually.
Step17: Selective quantization from an already calibrated model
You can directly call convert.mlir_quantize to get the selective quantized
model from already calibrated model. This would be particularly useful when you
want to calibrate the model once, and experiment with various denylist
combinations.
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
# Quantization debugger is available from TensorFlow 2.7.0
!pip uninstall -y tensorflow
!pip install tf-nightly
!pip install tensorflow_datasets --upgrade # imagenet_v2 needs latest checksum
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
#@title Boilerplates and helpers
MODEL_URI = 'https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5'
def process_image(data):
data['image'] = tf.image.resize(data['image'], (224, 224)) / 255.0
return data
# Representative dataset
def representative_dataset(dataset):
def _data_gen():
for data in dataset.batch(1):
yield [data['image']]
return _data_gen
def eval_tflite(tflite_model, dataset):
  """Evaluates tensorflow lite classification model with the given dataset."""
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_idx = interpreter.get_input_details()[0]['index']
output_idx = interpreter.get_output_details()[0]['index']
results = []
for data in representative_dataset(dataset)():
interpreter.set_tensor(input_idx, data[0])
interpreter.invoke()
results.append(interpreter.get_tensor(output_idx).flatten())
results = np.array(results)
gt_labels = np.array(list(dataset.map(lambda data: data['label'] + 1)))
accuracy = (
np.sum(np.argsort(results, axis=1)[:, -5:] == gt_labels.reshape(-1, 1)) /
gt_labels.size)
print(f'Top-5 accuracy (quantized): {accuracy * 100:.2f}%')
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(224, 224, 3), batch_size=1),
hub.KerasLayer(MODEL_URI)
])
model.compile(
loss='sparse_categorical_crossentropy',
metrics='sparse_top_k_categorical_accuracy')
model.build([1, 224, 224, 3])
# Prepare dataset with 100 examples
ds = tfds.load('imagenet_v2', split='test[:1%]')
ds = ds.map(process_image)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = representative_dataset(ds)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()
test_ds = ds.map(lambda data: (data['image'], data['label'] + 1)).batch(16)
loss, acc = model.evaluate(test_ds)
print(f'Top-5 accuracy (float): {acc * 100:.2f}%')
eval_tflite(quantized_model, ds)
Explanation: Inspecting Quantization Errors with Quantization Debugger
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/quantization_debugger"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Although full-integer quantization provides improved model size and latency, the
quantized model won't always work as expected. It's usually expected for the
model quality (e.g. accuracy, mAP, WER) to be slightly lower than the original
float model. However, there are cases where the model quality can go below your
expectation or generate completely wrong results.
When this problem happens, it's tricky and painful to spot the root cause of the
quantization error, and it's even more difficult to fix the quantization error.
To assist this model inspection process, quantization debugger can be used
to identify problematic layers, and selective quantization can leave those
problematic layers in float so that the model accuracy can be recovered at the
cost of reduced benefit from quantization.
Note: This API is experimental, and there might be breaking changes in the API
in the course of improvements.
Quantization Debugger
Quantization debugger makes it possible to do quantization quality metric
analysis in the existing model. Quantization debugger can automate processes for
running the model with a debug dataset, and collecting quantization quality metrics
for each tensor.
Note: Quantization debugger and selective quantization currently only work for
full-integer quantization with int8 activations.
Prerequisites
If you already have a pipeline to quantize a model, you have all necessary
pieces to run quantization debugger!
Model to quantize
Representative dataset
In addition to model and data, you will need to use a data processing framework
(e.g. pandas, Google Sheets) to analyze the exported results.
Setup
This section prepares libraries, MobileNet v3 model, and test dataset of 100
images.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset(ds)
# my_debug_dataset should have the same format as my_representative_dataset
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter, debug_dataset=representative_dataset(ds))
Explanation: We can see that the original model has a much higher top-5 accuracy for our
small dataset, while the quantized model has a significant accuracy loss.
Step 1. Debugger preparation
Easiest way to use the quantization debugger is to provide
tf.lite.TFLiteConverter that you have been using to quantize the model.
End of explanation
debugger.run()
Explanation: Step 2. Running the debugger and getting the results
When you call QuantizationDebugger.run(), the debugger will log differences
between float tensors and quantized tensors for the same op location, and
process them with given metrics.
End of explanation
RESULTS_FILE = '/tmp/debugger_results.csv'
with open(RESULTS_FILE, 'w') as f:
debugger.layer_statistics_dump(f)
!head /tmp/debugger_results.csv
Explanation: The processed metrics can be accessed with
QuantizationDebugger.layer_statistics, or can be dumped to a text file in CSV
format with QuantizationDebugger.layer_statistics_dump().
End of explanation
layer_stats = pd.read_csv(RESULTS_FILE)
layer_stats.head()
Explanation: For each row in the dump, the op name and index comes first, followed by
quantization parameters and error metrics (including
user-defined error metrics, if any). The resulting CSV file
can be used to pick problematic layers with large quantization error metrics.
With pandas or other data processing libraries, we can inspect detailed
per-layer error metrics.
End of explanation
layer_stats['range'] = 255.0 * layer_stats['scale']
layer_stats['rmse/scale'] = layer_stats.apply(
lambda row: np.sqrt(row['mean_squared_error']) / row['scale'], axis=1)
layer_stats[['op_name', 'range', 'rmse/scale']].head()
plt.figure(figsize=(15, 5))
ax1 = plt.subplot(121)
ax1.bar(np.arange(len(layer_stats)), layer_stats['range'])
ax1.set_ylabel('range')
ax2 = plt.subplot(122)
ax2.bar(np.arange(len(layer_stats)), layer_stats['rmse/scale'])
ax2.set_ylabel('rmse/scale')
plt.show()
Explanation: Step 3. Data analysis
There are various ways to analyze the resulting data. First, let's add some useful
metrics derived from the debugger's outputs. (scale means the quantization
scale factor for each tensor.)
Range (255.0 * scale)
RMSE / scale (sqrt(mean_squared_error) / scale)
The RMSE / scale is close to 1 / sqrt(12) (~ 0.289) when quantized
distribution is similar to the original float distribution, indicating a good
quantized model. The larger the value, the more likely it is that the layer is not
being quantized well.
End of explanation
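The 1/sqrt(12) reference value above is just the RMSE of uniform rounding noise: rounding to a grid of step `scale` leaves an error roughly uniform on [-scale/2, scale/2], whose standard deviation is scale/sqrt(12) ≈ 0.289·scale. A quick numpy check on made-up data, independent of the debugger:

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 0.05
x = rng.normal(size=100_000)      # stand-in float activations
q = np.round(x / scale) * scale   # uniform quantization, no clipping
rmse = np.sqrt(np.mean((x - q) ** 2))
print(rmse / scale)               # ~0.289, i.e. about 1/sqrt(12)
```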
layer_stats[layer_stats['rmse/scale'] > 0.7][[
'op_name', 'range', 'rmse/scale', 'tensor_name'
]]
Explanation: There are many layers with wide ranges, and some layers that have high
RMSE/scale values. Let's get the layers with high error metrics.
End of explanation
suspected_layers = list(
layer_stats[layer_stats['rmse/scale'] > 0.7]['tensor_name'])
Explanation: With these layers, you can try selective quantization to see if not quantizing
those layers improves model quality.
End of explanation
suspected_layers.extend(list(layer_stats[:5]['tensor_name']))
Explanation: In addition to these, skipping quantization for the first few layers also helps
improving quantized model's quality.
End of explanation
debug_options = tf.lite.experimental.QuantizationDebugOptions(
denylisted_nodes=suspected_layers)
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter,
debug_dataset=representative_dataset(ds),
debug_options=debug_options)
selective_quantized_model = debugger.get_nondebug_quantized_model()
eval_tflite(selective_quantized_model, ds)
Explanation: Selective Quantization
Selective quantization skips quantization for some nodes, so that the
calculation can happen in the original floating-point domain. When correct
layers are skipped, we can expect some model quality recovery at the cost of
increased latency and model size.
However, if you're planning to run quantized models on integer-only accelerators
(e.g. Hexagon DSP, EdgeTPU), selective quantization would cause fragmentation of
the model and would result in slower inference latency mainly caused by data
transfer cost between CPU and those accelerators. To prevent this, you can
consider running
quantization aware training
to keep all the layers in integer while preserving the model accuracy.
Quantization debugger's option accepts denylisted_nodes and denylisted_ops
options for skipping quantization for specific layers, or all instances of
specific ops. Using suspected_layers we prepared from the previous step, we
can use quantization debugger to get a selectively quantized model.
End of explanation
debug_options = tf.lite.experimental.QuantizationDebugOptions(
denylisted_ops=['MEAN'])
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter,
debug_dataset=representative_dataset(ds),
debug_options=debug_options)
selective_quantized_model = debugger.get_nondebug_quantized_model()
eval_tflite(selective_quantized_model, ds)
Explanation: The accuracy is still lower compared to the original float model, but we have
notable improvement from the whole quantized model by skipping quantization for
~10 layers out of 111 layers.
You can also try not quantizing all ops in the same class. For example, to
skip quantization for all mean ops, you can pass MEAN to denylisted_ops.
End of explanation
debug_options = tf.lite.experimental.QuantizationDebugOptions(
layer_debug_metrics={
'mean_abs_error': (lambda diff: np.mean(np.abs(diff)))
},
layer_direct_compare_metrics={
'correlation':
lambda f, q, s, zp: (np.corrcoef(f.flatten(),
(q.flatten() - zp) / s)[0, 1])
},
model_debug_metrics={
'argmax_accuracy': (lambda f, q: np.mean(np.argmax(f) == np.argmax(q)))
})
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter,
debug_dataset=representative_dataset(ds),
debug_options=debug_options)
debugger.run()
CUSTOM_RESULTS_FILE = '/tmp/debugger_results.csv'
with open(CUSTOM_RESULTS_FILE, 'w') as f:
debugger.layer_statistics_dump(f)
custom_layer_stats = pd.read_csv(CUSTOM_RESULTS_FILE)
custom_layer_stats[['op_name', 'mean_abs_error', 'correlation']].tail()
Explanation: With these techniques, we are able to improve the quantized MobileNet V3 model
accuracy. Next we'll explore advanced techniques to improve the model accuracy
even more.
Advanced usages
With the following features, you can further customize your debugging pipeline.
Custom metrics
By default, the quantization debugger emits five metrics for each float-quant
difference: tensor size, standard deviation, mean error, max absolute error, and
mean squared error. You can add more custom metrics by passing them to options.
For each metrics, the result should be a single float value and the resulting
metric will be an average of metrics from all examples.
layer_debug_metrics: calculate metric based on diff for each op outputs
from float and quantized op outputs.
layer_direct_compare_metrics: rather than getting diff only, this will
calculate metric based on raw float and quantized tensors, and its
quantization parameters (scale, zero point)
model_debug_metrics: only used when float_model_(path|content) is
passed to the debugger. In addition to the op-level metrics, final layer
output is compared to the reference output from the original float model.
End of explanation
debugger.model_statistics
Explanation: The result of model_debug_metrics can be separately seen from
debugger.model_statistics.
End of explanation
from tensorflow.lite.python import convert
Explanation: Using (internal) mlir_quantize API to access in-depth features
Note: Some features in the folowing section,
TFLiteConverter._experimental_calibrate_only and converter.mlir_quantize are
experimental internal APIs, and subject to change in a non-backward compatible
way.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = representative_dataset(ds)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter._experimental_calibrate_only = True
calibrated_model = converter.convert()
# Note that enable_numeric_verify and enable_whole_model_verify are set.
quantized_model = convert.mlir_quantize(
calibrated_model,
enable_numeric_verify=True,
enable_whole_model_verify=True)
debugger = tf.lite.experimental.QuantizationDebugger(
quant_debug_model_content=quantized_model,
debug_dataset=representative_dataset(ds))
Explanation: Whole model verify mode
The default behavior for the debug model generation is per-layer verify. In this
mode, the input for float and quantize op pair is from the same source (previous
quantized op). Another mode is whole-model verify, where the float and quantize
models are separated. This mode would be useful to observe how the error is
being propagated down the model. To enable it, pass enable_whole_model_verify=True to
convert.mlir_quantize while generating the debug model manually.
End of explanation
selective_quantized_model = convert.mlir_quantize(
calibrated_model, denylisted_nodes=suspected_layers)
eval_tflite(selective_quantized_model, ds)
Explanation: Selective quantization from an already calibrated model
You can directly call convert.mlir_quantize to get the selective quantized
model from already calibrated model. This would be particularly useful when you
want to calibrate the model once, and experiment with various denylist
combinations.
End of explanation |
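As an aside, assembling suspected_layers is plain Python once per-layer metrics are in hand. The sketch below uses hypothetical layer names and RMSE values (the real numbers would come from the debugger's layer statistics, not from this made-up dict):

```python
# Hypothetical per-layer RMSE-style metrics; names and values are made up
# for illustration, not taken from a real debugger run.
layer_rmse = {"conv_0": 0.01, "conv_1": 0.35, "dense_0": 0.02, "dense_1": 0.28}
# Keep layers whose quantization error exceeds a chosen threshold in float.
threshold = 0.1
suspected_layers = [name for name, rmse in layer_rmse.items() if rmse > threshold]
```

These names would then be passed as denylisted_nodes, as in the cell above.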
9,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Nets
Training a generative adversarial network to sample from a Gaussian distribution. This is a toy problem, takes < 3 minutes to run on a modest 1.2GHz CPU.
Step1: Target distribution $p_{data}$
Step2: Pre-train Decision Surface
If the decider is reasonably accurate to start with, we get much faster convergence.
Step3: Build Net
Now to build the actual generative adversarial network | Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
%matplotlib inline
Explanation: Generative Adversarial Nets
Training a generative adversarial network to sample from a Gaussian distribution. This is a toy problem, takes < 3 minutes to run on a modest 1.2GHz CPU.
End of explanation
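As a sanity check on the two objectives built below (obj_d and obj_g), here is the same value function from Goodfellow et al. (2014) evaluated on made-up discriminator outputs in plain NumPy; the probabilities are toy numbers, not values from the trained model:

```python
import numpy as np

def discriminator_objective(d_real, d_fake):
    # E[log D(x)] + E[log(1 - D(G(z)))], which the discriminator maximizes
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def generator_objective(d_fake):
    # E[log D(G(z))], the non-saturating objective the generator maximizes
    return np.mean(np.log(d_fake))

# A confident discriminator: high D on real samples, low D on fakes.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.2, 0.05])
obj_d = discriminator_objective(d_real, d_fake)
obj_g = generator_objective(d_fake)
```

When the discriminator is winning, obj_d sits near zero while obj_g is strongly negative, which is why the generator update chases log D(G(z)).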
mu,sigma=-1,1
xs=np.linspace(-5,5,1000)
plt.plot(xs, norm.pdf(xs,loc=mu,scale=sigma))
#plt.savefig('fig0.png')
TRAIN_ITERS=10000
M=200 # minibatch size
# MLP - used for D_pre, D1, D2, G networks
def mlp(input, output_dim):
# construct learnable parameters within local scope
w1=tf.get_variable("w0", [input.get_shape()[1], 6], initializer=tf.random_normal_initializer())
b1=tf.get_variable("b0", [6], initializer=tf.constant_initializer(0.0))
w2=tf.get_variable("w1", [6, 5], initializer=tf.random_normal_initializer())
b2=tf.get_variable("b1", [5], initializer=tf.constant_initializer(0.0))
w3=tf.get_variable("w2", [5,output_dim], initializer=tf.random_normal_initializer())
b3=tf.get_variable("b2", [output_dim], initializer=tf.constant_initializer(0.0))
# nn operators
fc1=tf.nn.tanh(tf.matmul(input,w1)+b1)
fc2=tf.nn.tanh(tf.matmul(fc1,w2)+b2)
fc3=tf.nn.tanh(tf.matmul(fc2,w3)+b3)
return fc3, [w1,b1,w2,b2,w3,b3]
# re-used for optimizing all networks
def momentum_optimizer(loss,var_list):
batch = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
0.001, # Base learning rate.
batch, # Current index into the dataset.
TRAIN_ITERS // 4, # Decay step - this decays 4 times throughout training process.
0.95, # Decay rate.
staircase=True)
#optimizer=tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=batch,var_list=var_list)
optimizer=tf.train.MomentumOptimizer(learning_rate,0.6).minimize(loss,global_step=batch,var_list=var_list)
return optimizer
Explanation: Target distribution $p_{data}$
End of explanation
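A side note on the learning-rate schedule used in momentum_optimizer above: with staircase=True, tf.train.exponential_decay multiplies the base rate by decay_rate once every decay_steps steps. A rough plain-Python sketch of that schedule, using the notebook's constants:

```python
TRAIN_ITERS = 10000

def staircase_decay(base_lr, step, decay_steps, decay_rate):
    # lr = base_lr * decay_rate ** floor(step / decay_steps)
    return base_lr * decay_rate ** (step // decay_steps)

# Sample the schedule once per quarter of training, matching the
# "decays 4 times throughout training" comment in the notebook.
lrs = [staircase_decay(0.001, s, TRAIN_ITERS // 4, 0.95)
       for s in range(0, TRAIN_ITERS, TRAIN_ITERS // 4)]
```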
with tf.variable_scope("D_pre"):
input_node=tf.placeholder(tf.float32, shape=(M,1))
train_labels=tf.placeholder(tf.float32,shape=(M,1))
D,theta=mlp(input_node,1)
loss=tf.reduce_mean(tf.square(D-train_labels))
optimizer=momentum_optimizer(loss,None)
sess=tf.InteractiveSession()
tf.global_variables_initializer().run()
# plot decision surface
def plot_d0(D,input_node):
f,ax=plt.subplots(1)
# p_data
xs=np.linspace(-5,5,1000)
ax.plot(xs, norm.pdf(xs,loc=mu,scale=sigma), label='p_data')
# decision boundary
r=1000 # resolution (number of points)
xs=np.linspace(-5,5,r)
ds=np.zeros((r,1)) # decision surface
# process multiple points in parallel in a minibatch
for i in range(r // M):  # integer division so this runs under Python 3
x=np.reshape(xs[M*i:M*(i+1)],(int(M),1))
ds[M*i:M*(i+1)]=sess.run(D,{input_node: x})
ax.plot(xs, ds, label='decision boundary')
ax.set_ylim(0,1.1)
plt.legend()
plot_d0(D,input_node)
plt.title('Initial Decision Boundary')
#plt.savefig('fig1.png')
lh=np.zeros(1000)
for i in range(1000):
#d=np.random.normal(mu,sigma,M)
d=(np.random.random(M)-0.5) * 10.0 # instead of sampling only from gaussian, want the domain to be covered as uniformly as possible
labels=norm.pdf(d,loc=mu,scale=sigma)
lh[i],_=sess.run([loss,optimizer], {input_node: np.reshape(d,(M,1)), train_labels: np.reshape(labels,(M,1))})
# training loss
plt.plot(lh)
plt.title('Training Loss')
plot_d0(D,input_node)
#plt.savefig('fig2.png')
# copy the learned weights over into a tmp array
weightsD=sess.run(theta)
# close the pre-training session
sess.close()
Explanation: Pre-train Decision Surface
If the decider is reasonably accurate to start with, we get much faster convergence.
End of explanation
with tf.variable_scope("G"):
z_node=tf.placeholder(tf.float32, shape=(M,1)) # M uniform01 floats
G,theta_g=mlp(z_node,1) # generate normal transformation of Z
G=tf.multiply(5.0,G) # scale up by 5 to match range
with tf.variable_scope("D") as scope:
# D(x)
x_node=tf.placeholder(tf.float32, shape=(M,1)) # input M normally distributed floats
fc,theta_d=mlp(x_node,1) # output likelihood of being normally distributed
D1=tf.maximum(tf.minimum(fc,.99), 0.01) # clamp as a probability
# make a copy of D that uses the same variables, but takes in G as input
scope.reuse_variables()
fc,theta_d=mlp(G,1)
D2=tf.maximum(tf.minimum(fc,.99), 0.01)
obj_d=tf.reduce_mean(tf.log(D1)+tf.log(1-D2))
obj_g=tf.reduce_mean(tf.log(D2))
# set up optimizer for G,D
opt_d=momentum_optimizer(1-obj_d, theta_d)
opt_g=momentum_optimizer(1-obj_g, theta_g) # maximize log(D(G(z)))
sess=tf.InteractiveSession()
tf.global_variables_initializer().run()
# copy weights from pre-training over to new D network
for i,v in enumerate(theta_d):
sess.run(v.assign(weightsD[i]))
def plot_fig():
# plots pg, pdata, decision boundary
f,ax=plt.subplots(1)
# p_data
xs=np.linspace(-5,5,1000)
ax.plot(xs, norm.pdf(xs,loc=mu,scale=sigma), label='p_data')
# decision boundary
r=5000 # resolution (number of points)
xs=np.linspace(-5,5,r)
ds=np.zeros((r,1)) # decision surface
# process multiple points in parallel in same minibatch
for i in range(r // M):  # integer division so this runs under Python 3
x=np.reshape(xs[M*i:M*(i+1)],(M,1))
ds[M*i:M*(i+1)]=sess.run(D1,{x_node: x})
ax.plot(xs, ds, label='decision boundary')
# distribution of inverse-mapped points
zs=np.linspace(-5,5,r)
gs=np.zeros((r,1)) # generator function
for i in range(r // M):  # integer division so this runs under Python 3
z=np.reshape(zs[M*i:M*(i+1)],(M,1))
gs[M*i:M*(i+1)]=sess.run(G,{z_node: z})
histc, edges = np.histogram(gs, bins = 10)
ax.plot(np.linspace(-5,5,10), histc/float(r), label='p_g')
# ylim, legend
ax.set_ylim(0,1.1)
plt.legend()
# initial conditions
plot_fig()
plt.title('Before Training')
#plt.savefig('fig3.png')
# Algorithm 1 of Goodfellow et al 2014
k=1
histd, histg= np.zeros(TRAIN_ITERS), np.zeros(TRAIN_ITERS)
for i in range(TRAIN_ITERS):
for j in range(k):
x= np.random.normal(mu,sigma,M) # sampled m-batch from p_data
x.sort()
z= np.linspace(-5.0,5.0,M)+np.random.random(M)*0.01 # sample m-batch from noise prior
histd[i],_=sess.run([obj_d,opt_d], {x_node: np.reshape(x,(M,1)), z_node: np.reshape(z,(M,1))})
z= np.linspace(-5.0,5.0,M)+np.random.random(M)*0.01 # sample noise prior
histg[i],_=sess.run([obj_g,opt_g], {z_node: np.reshape(z,(M,1))}) # update generator
if i % (TRAIN_ITERS//10) == 0:
print(float(i)/float(TRAIN_ITERS))
plt.plot(range(TRAIN_ITERS),histd, label='obj_d')
plt.plot(range(TRAIN_ITERS), 1-histg, label='obj_g')
plt.legend()
#plt.savefig('fig4.png')
plot_fig()
#plt.savefig('fig5.png')
Explanation: Build Net
Now to build the actual generative adversarial network
End of explanation |
9,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a functional label from source estimates
Threshold source estimates and produce a functional label. The label
is typically the region of interest that contains high values.
Here we compare the average time course in the anatomical label obtained
by FreeSurfer segmentation and the average time course from the
functional label. As expected the time course in the functional
label yields higher values.
Step1: plot the time courses....
Step2: plot brain in 3D with mne.viz.Brain if available | Python Code:
# Author: Luke Bloy <luke.bloy@gmail.com>
# Alex Gramfort <alexandre.gramfort@inria.fr>
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname_inv = (
data_path / 'MEG' / 'sample' / 'sample_audvis-meg-oct-6-meg-inv.fif')
fname_evoked = data_path / 'MEG' / 'sample' / 'sample_audvis-ave.fif'
subjects_dir = data_path / 'subjects'
subject = 'sample'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Compute a label/ROI based on the peak power between 80 and 120 ms.
# The label bankssts-lh is used for the comparison.
aparc_label_name = 'bankssts-lh'
tmin, tmax = 0.080, 0.120
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src'] # get the source space
# Compute inverse solution
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori='normal')
# Make an STC in the time interval of interest and take the mean
stc_mean = stc.copy().crop(tmin, tmax).mean()
# use the stc_mean to generate a functional label
# region growing is halted at 60% of the peak value within the
# anatomical label / ROI specified by aparc_label_name
label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
stc_mean_label = stc_mean.in_label(label)
data = np.abs(stc_mean_label.data)
stc_mean_label.data[data < 0.6 * np.max(data)] = 0.
# 8.5% of original source space vertices were omitted during forward
# calculation, suppress the warning here with verbose='error'
func_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True,
subjects_dir=subjects_dir, connected=True,
verbose='error')
# take first as func_labels are ordered based on maximum values in stc
func_label = func_labels[0]
# load the anatomical ROI for comparison
anat_label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
# extract the anatomical time course for each label
stc_anat_label = stc.in_label(anat_label)
pca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0]
stc_func_label = stc.in_label(func_label)
pca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0]
# flip the pca so that the max power between tmin and tmax is positive
pca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))])
pca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))])
Explanation: Generate a functional label from source estimates
Threshold source estimates and produce a functional label. The label
is typically the region of interest that contains high values.
Here we compare the average time course in the anatomical label obtained
by FreeSurfer segmentation and the average time course from the
functional label. As expected the time course in the functional
label yields higher values.
End of explanation
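The 60%-of-peak thresholding applied to stc_mean_label.data above is plain NumPy masking; a minimal standalone sketch with made-up vertex values:

```python
import numpy as np

# Made-up source-estimate values for five vertices.
data = np.array([0.1, -0.5, 0.9, 0.3, -1.0])
mag = np.abs(data)
thresholded = data.copy()
# Zero every vertex whose magnitude is below 60% of the peak magnitude;
# only the strongest vertices survive to seed the functional label.
thresholded[mag < 0.6 * mag.max()] = 0.0
```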
plt.figure()
plt.plot(1e3 * stc_anat_label.times, pca_anat, 'k',
label='Anatomical %s' % aparc_label_name)
plt.plot(1e3 * stc_func_label.times, pca_func, 'b',
label='Functional %s' % aparc_label_name)
plt.legend()
plt.show()
Explanation: plot the time courses....
End of explanation
brain = stc_mean.plot(hemi='lh', subjects_dir=subjects_dir)
brain.show_view('lateral')
# show both labels
brain.add_label(anat_label, borders=True, color='k')
brain.add_label(func_label, borders=True, color='b')
Explanation: plot brain in 3D with mne.viz.Brain if available
End of explanation |
9,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo Methods
Step1: Integration
If we have an ugly function, say
$$
\begin{equation}
f(x) = \sin^2 \left(\frac{1}{x (2-x)}\right),
\end{equation}
$$
then it can be very difficult to integrate. To see this, just do a quick plot.
Step2: We see that as the function oscillates infinitely often, integrating this with standard methods is going to be very inaccurate.
However, we note that the function is bounded, so the integral (given by the shaded area below) must itself be bounded - less than the total area in the plot, which is $2$ in this case.
Step4: So if we scattered (using a uniform random distribution) a large number of points within this box, the fraction of them falling below the curve is approximately the integral we want to compute, divided by the area of the box
Step6: Accuracy
To check the accuracy of the method, let's apply this to calculate $\pi$.
The area of a circle of radius $2$ is $4\pi$, so the area of the quarter circle in $x, y \in [0, 2]$ is just $\pi$
Step8: Mean Value Method
Monte Carlo integration is pretty inaccurate, as seen above
Step9: Let's look at the accuracy of this method again applied to computing $\pi$.
Step12: The convergence rate is the same (only roughly, typically), but the Mean Value method is expected to be better in terms of its absolute error.
Dimensionality
Compared to standard integration methods (Gauss quadrature, Simpson's rule, etc) the convergence rate for Monte Carlo methods is very slow. However, there is one crucial advantage
Step13: This is a plot of my calculated values of the Integral
Step14: This is a plot of my calculated values of the integral in blue and the actual values in red
Step15: This is a plot of the error of my calculated values, their deviation from the actual values
Let's try domains from -2 to 2 to make sure the domain has little effect on the result. It increased the error (this makes sense, since the random points are less likely to fall inside the hypersphere), so I am trying 10 million points as opposed to 1 million.
Step17: This is a plot of my calculated values of the integral in blue and the actual values in red. This plot, however, uses the domain -2 to 2 in each dimension but is calculated for the same unit hypersphere; the values diverge at higher dimensionality. Why?
As dimensionality increases, the effective hypervolume of the domain grows by a factor of $2^{dimensionality}$. This means that the probability of a random point being inside the hypersphere is proportional to $2^{-dimensionality}$, so as dimensionality increases the points are much less likely to fall inside the hypersphere, and you need a larger N (number of random points) for the answer to converge.
Now let us repeat this across multiple dimensions.
The errors clearly vary over a range, but the convergence remains roughly as $N^{-1/2}$ independent of the dimension; using other techniques such as Gauss quadrature would see the points required scaling geometrically with the dimension.
Importance sampling
Consider the integral (which arises, for example, in the theory of Fermi gases)
$$
\begin{equation}
I = \int_0^1 \frac{x^{-1/2}}{e^x + 1} \, dx.
\end{equation}
$$
This has a finite value, but the integrand diverges as $x \to 0$. This may cause a problem for Monte Carlo integration when a single value may give a spuriously large contribution to the sum.
We can get around this by changing the points at which the integrand is sampled. Choose a weighting function $w(x)$. Then a weighted average of any function $g(x)$ can be
$$
\begin{equation}
<g>_w = \frac{\int_a^b w(x) g(x) \, dx}{\int_a^b w(x) \, dx}.
\end{equation}
$$
As our integral is
$$
\begin{equation}
I = \int_a^b f(x) \, dx
\end{equation}
$$
we can, by setting $g(x) = f(x) / w(x)$ get
$$
\begin{equation}
I = \int_a^b f(x) \, dx = \left< \frac{f(x)}{w(x)} \right>_w \int_a^b w(x) \, dx.
\end{equation}
$$
This gives
$$
\begin{equation}
I \simeq \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{w(x_i)} \int_a^b w(x) \, dx,
\end{equation}
$$
where the points $x_i$ are now chosen from a non-uniform probability distribution with pdf
$$
\begin{equation}
p(x) = \frac{w(x)}{\int_a^b w(x) \, dx}.
\end{equation}
$$
This is a generalization of the mean value method - we clearly recover the mean value method when the weighting function $w(x) \equiv 1$. A careful choice of the weighting function can mitigate problematic regions of the integrand; e.g., in the example above we could choose $w(x) = x^{-1/2}$, giving $p(x) = x^{-1/2}/2$.
So, let's try to solve the integral above. The expected solution is around 0.84. | Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
Explanation: Monte Carlo Methods: Lab 1
Take a look at Chapter 10 of Newman's Computational Physics with Python where much of this material is drawn from.
End of explanation
%matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
from __future__ import division
def f(x):
return numpy.sin(1.0/(x*(2.0-x)))**2
x = numpy.linspace(0.0, 2.0, 10000)
pyplot.plot(x, f(x))
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\sin^2([x(x-2)]^{-1})$");
Explanation: Integration
If we have an ugly function, say
$$
\begin{equation}
f(x) = \sin^2 \left(\frac{1}{x (2-x)}\right),
\end{equation}
$$
then it can be very difficult to integrate. To see this, just do a quick plot.
End of explanation
pyplot.fill_between(x, f(x))
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\sin^2([x(x-2)]^{-1})$");
Explanation: We see that as the function oscillates infinitely often, integrating this with standard methods is going to be very inaccurate.
However, we note that the function is bounded, so the integral (given by the shaded area below) must itself be bounded - less than the total area in the plot, which is $2$ in this case.
End of explanation
def mc_integrate(f, domain_x, domain_y, N = 10000):
Monte Carlo integration function: to be completed. Result, for the given f, should be around 1.46.
import numpy.random
domain_xrange = (domain_x[1]-domain_x[0])
domain_yrange = (domain_y[1]-domain_y[0])
randomXArray = numpy.random.random(N) * domain_xrange + domain_x[0]  # scale to the range, then shift to the lower bound
randomYArray = numpy.random.random(N) * domain_yrange + domain_y[0]
funcValuesArray = f(randomXArray)
#funcValuesArray = numpy.array([f(x) for x in randomXArray])
#print(f(2))
#print(randomXArray, randomYArray, funcValuesArray)
boolArray = randomYArray <= funcValuesArray
Hits = sum(boolArray)
#print(randomXArray, randomYArray, funcValuesArray, boolArray, Hits)
I=(Hits*domain_xrange*domain_yrange)/N
return I
mc_integrate(lambda x: x**3, (0,2), (0,8), 1000000)
mc_integrate(f, (0,2), (0,1), 1000)
%timeit mc_integrate(f, [0,2], [0,2], 100000)
Explanation: So if we scattered (using a uniform random distribution) a large number of points within this box, the fraction of them falling below the curve is approximately the integral we want to compute, divided by the area of the box:
$$
\begin{equation}
I = \int_a^b f(x) \, dx \quad \implies \quad I \simeq \frac{k A}{N}
\end{equation}
$$
where $N$ is the total number of points considered, $k$ is the number falling below the curve, and $A$ is the area of the box. We can choose the box, but we need $y \in [\min_{x \in [a, b]} (f(x)), \max_{x \in [a, b]} (f(x))] = [c, d]$, giving $A = (d-c)(b-a)$.
So let's apply this technique to the function above, where the box in $y$ is $[0,1]$.
End of explanation
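The same hit-or-miss estimator can be written as a compact vectorized sketch (this uses NumPy's newer Generator API; the seed is arbitrary):

```python
import numpy as np

def f(x):
    return np.sin(1.0 / (x * (2.0 - x)))**2

rng = np.random.default_rng(42)
N = 100_000
x = rng.uniform(0.0, 2.0, N)   # random points in the box [0, 2] x [0, 1]
y = rng.uniform(0.0, 1.0, N)
area = 2.0 * 1.0
# Fraction of points under the curve, scaled by the box area.
I = area * np.count_nonzero(y < f(x)) / N
```

With this many points the estimate should land near the value of roughly 1.46 quoted in the mc_integrate docstring above.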
def g(x):
return (4-x**2)**0.5
Nlist = []
PiList = []
import matplotlib.pyplot as mplot
for i in range(0,20):
#print(i, mc_integrate(g, (0,2), (0,8), N = 100*2**i))
Nlist.append(100*2**i)
PiList.append(mc_integrate(g, (0,2), (0,8), N = 100*2**i))
NlogList = []
abserrlogList = []
for i in range(len(Nlist)):
NlogList.append(numpy.log(Nlist[i]))
absError = abs(numpy.pi-PiList[i])
abserrlogList.append(numpy.log(absError))
(p1, p0) = numpy.polyfit(NlogList, abserrlogList, 1) # y = p1*x + p0
print(p1, p0)
def fittedLine(x):
return p1*x+p0
def create_plot_data(f, xmin, xmax, n):
Takes xmin, a min value of x, and xmax, a max value of x,
and creates a sequence xs with n values between xmin and xmax
inclusive. It also creates a sequence ys with the values of the
function f at the corresponding values in xs. It then returns a
tuple containing these two sequences, i.e. (xs, ys).
xsArray = numpy.linspace(start=xmin, stop=xmax, num=n)
ysArray = numpy.zeros(n)
i = 0
for element in numpy.nditer(xsArray):
# print element
ysArray[i] = f(element)
i += 1
return (list(xsArray), list(ysArray))
(xFit, yFit) = create_plot_data(fittedLine, 4, 19, 5)
#print(xyFit)
#print(NlogList, abserrlogList)
mplot.clf()
plot1 = mplot.plot(NlogList, abserrlogList, 'bo', label='')
plot2 = mplot.plot(xFit, yFit, '--r', label='Best fit Line: y={}*x+{}'.format(p1,p0))
mplot.xlabel("log(N)")
mplot.ylabel("log(|Error|)")
mplot.legend(prop={'size':12})
mplot.show()
Explanation: Accuracy
To check the accuracy of the method, let's apply this to calculate $\pi$.
The area of a circle of radius $2$ is $4\pi$, so the area of the quarter circle in $x, y \in [0, 2]$ is just $\pi$:
$$
\begin{equation}
\pi = \int_0^2 \sqrt{4 - x^2} \, dx.
\end{equation}
$$
Check the convergence of the Monte Carlo integration with $N$. (I suggest using $N = 100 \times 2^i$ for $i = 0, \dots, 19$; you should find the error scales roughly as $N^{-1/2}$)
End of explanation
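As a quick standalone check (vectorized, arbitrary seed): with N = 10^5 points the hit-or-miss estimate of pi is typically within a few thousandths, comfortably inside the N^{-1/2} error band:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.uniform(0.0, 2.0, N)          # box [0, 2] x [0, 2], area 4
y = rng.uniform(0.0, 2.0, N)
inside = np.count_nonzero(x**2 + y**2 < 4.0)   # below the quarter circle
pi_est = 4.0 * inside / N
```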
def mv_integrate(f, domain_x, N = 10000):
Mean value Monte Carlo integration: to be completed
import numpy.random
a = domain_x[0]
b = domain_x[1]
domain_xrange = (domain_x[1]-domain_x[0])
xArray = numpy.random.random(N) * domain_xrange + a  # scale to the range, then shift into [a, b]
funcArray = f(xArray)
sum_ = sum(funcArray)
I = ((b-a)/N)*sum_
return I
Explanation: Mean Value Method
Monte Carlo integration is pretty inaccurate, as seen above: it converges slowly, and has poor accuracy at all $N$. An alternative is the mean value method, where we note that by definition the average value of $f$ over the interval $[a, b]$ is precisely the integral multiplied by the width of the interval.
Hence we can just choose our $N$ random points in $x$ as above, but now just compute
$$
\begin{equation}
I \simeq \frac{b-a}{N} \sum_{i=1}^N f(x_i).
\end{equation}
$$
End of explanation
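A vectorized sketch of the mean value estimator applied to the same quarter-circle integral (arbitrary seed); the estimate is just (b - a) times the sample mean of the integrand:

```python
import numpy as np

def g(x):
    return np.sqrt(4.0 - x**2)

rng = np.random.default_rng(1)
a, b, N = 0.0, 2.0, 100_000
x = rng.uniform(a, b, N)
I = (b - a) * np.mean(g(x))   # should be close to pi
```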
def g(x):
return (4-x**2)**0.5
mv_integrate(g, (0, 2), 10000000)
Nlist2 = []
PiList2 = []
import matplotlib.pyplot as mplot
for i in range(0,20):
#print(i, mv_integrate(g, (0,2), (0,8), N = 100*2**i))
Nlist2.append(100*2**i)
PiList2.append(mv_integrate(g, (0,2), N = 100*2**i))
NlogList2 = []
abserrlogList2 = []
for i in range(len(Nlist2)):
NlogList2.append(numpy.log(Nlist2[i]))
absError2 = abs(numpy.pi-PiList2[i])
abserrlogList2.append(numpy.log(absError2))
(p1_2, p0_2) = numpy.polyfit(NlogList2, abserrlogList2, 1) # y = p1*x + p0
print(p1_2, p0_2)
def fittedLine_2(x):
return p1_2*x+p0_2
(xFit2, yFit2) = create_plot_data(fittedLine_2, 4, 19, 5)
#print(xyFit)
#print(NlogList, abserrlogList)
mplot.clf()
plot1_2 = mplot.plot(NlogList2, abserrlogList2, 'bo', label='')
plot2_2 = mplot.plot(xFit2, yFit2, '--r', label='Best fit Line: y={}*x+{}'.format(p1_2,p0_2))
mplot.xlabel("log(N)")
mplot.ylabel("log(|Error|)")
mplot.legend(prop={'size':12})
mplot.show()
Explanation: Let's look at the accuracy of this method again applied to computing $\pi$.
End of explanation
def mc_integrate_multid(f, domain, N = 10000):
Monte Carlo integration in arbitrary dimensions (read from the size of the domain): to be completed
import numpy.random
dimensionality = len(domain) #dimensionality of the hypersphere - determined by the length of the domain list
#initialisation of lists/arrays
domain_range = numpy.zeros(shape=(dimensionality, 1))
funcValuesArray = numpy.zeros(N)
for i in range(0, dimensionality):
domain_range[i] = (domain[i][1]-domain[i][0]) #calculates domain range for each dimension
for i in range(0, N):
XYZ = []
for j in range(0, dimensionality):
XYZ.append(float(numpy.random.rand() * domain_range[j] + domain[j][0]))  # scale then shift into the j-th domain
#print XYZ
funcValuesArray[i] = f(numpy.array(XYZ)) #creates an N element list of function values at random points (x, y, z ...)
#print(funcValuesArray[i])
#print(funcValuesArray)
zeroes = numpy.zeros(N)
boolArray = funcValuesArray <= zeroes #compares the array of function values with an of array of zeroes
Hits = sum(boolArray) #sums the number of hits (i.e. points inside or on the surface of the hypersphere)
hyperVolume = 1 #initialises hypervolume
for i in range(0, dimensionality):
hyperVolume *= domain_range[i] #calculates hypervolume of hypercuboid domain
I = (Hits*hyperVolume)/N #calculates integral's value
return float(I)#*2**dimensionality
def MakeHyperSphereOfRadiusR(R):
def HyperSphere(ArrayOfDimensionalVariables):
defines a hypersphere of radius R with variables in the Array e.g. for a 3d
sphere with radius R pass an array [x, y, z] and the radius R. The function returns a value
which is <= 0 if the point is inside (or on the surface of) the hypersphere and > 0 if it is outside
#print(ArrayOfDimensionalVariables)
return sum(ArrayOfDimensionalVariables**2)-R**2 #e.g. for 3d sphere for points inside or on the surface of the sphere x**2+y**2+z**2-R**2 will be < or = to 0
return HyperSphere
HyperS_1 = MakeHyperSphereOfRadiusR(1)
domainD = []
dimensionList = []
integralList = []
for i in range(0,10):
domainD.append([-1,1])
dimensionList.append(i + 1)  # domainD now has i+1 dimensions
integralList.append(mc_integrate_multid(HyperS_1, domainD, N = 1000000))
mplot.clf()
plot1_3 = mplot.plot(dimensionList, integralList, 'bo', label='')
#plot2_2 = mplot.plot(xFit2, yFit2, '--r', label='Best fit Line: y={}*x+{}'.format(p1_2,p0_2))
mplot.xlabel("dimensionality")
mplot.ylabel("Integral value")
#mplot.legend(prop={'size':12})
mplot.show()
Explanation: The convergence rate is the same (only roughly, typically), but the Mean Value method is expected to be better in terms of its absolute error.
Dimensionality
Compared to standard integration methods (Gauss quadrature, Simpson's rule, etc) the convergence rate for Monte Carlo methods is very slow. However, there is one crucial advantage: as you change dimension, the amount of calculation required is unchanged, whereas for standard methods it grows geometrically with the dimension.
Try to compute the volume of an $n$-dimensional unit hypersphere, which is the object in $\mathbb{R}^n$ such that
$$
\begin{equation}
\sum_{i=1}^n x_i^2 \le 1.
\end{equation}
$$
The volume of the hypersphere can be found in closed form, but can rapidly be computed using the Monte Carlo method above, by counting the $k$ points that randomly fall within the hypersphere and using the standard formula $I \simeq V k / N$.
End of explanation
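A vectorized sketch of the estimate I ~ V k/N for the unit ball sampled in the box [-1, 1]^n (arbitrary seed); for n = 3 the exact volume is 4*pi/3:

```python
import numpy as np

def mc_ball_volume(ndim, N=200_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(N, ndim))   # box volume is 2**ndim
    inside = np.count_nonzero(np.sum(pts**2, axis=1) <= 1.0)
    return 2.0**ndim * inside / N

v2 = mc_ball_volume(2)   # expect ~pi
v3 = mc_ball_volume(3)   # expect ~4*pi/3
```

Note that the same function handles any dimension; only the hit fraction changes.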
from scipy import special
def volume_hypersphere(ndim=3):
return numpy.pi**(float(ndim)/2.0) / special.gamma(float(ndim)/2.0 + 1.0)*1**ndim
volume_hypersphere(ndim=5)
volList = []
for i in range(0,10):
volList.append(volume_hypersphere(ndim=i+1))
mplot.clf()
plot1_3 = mplot.plot(dimensionList, integralList, 'bo', label='')
plot2_2 = mplot.plot(dimensionList, volList, 'ro', label='')
mplot.xlabel("dimensionality")
mplot.ylabel("Integral value")
#mplot.legend(prop={'size':12})
mplot.show()
Explanation: This is a plot of my calculated values of the Integral
End of explanation
print(volList, "\n\n" ,integralList)
volArray = numpy.array(volList)
integralArray = numpy.array(integralList)
errorArray = abs(volArray-integralArray)
mplot.clf()
plot1_3 = mplot.plot(dimensionList, errorArray, 'bo', label='')
#plot2_2 = mplot.plot(xFit2, yFit2, '--r', label='Best fit Line: y={}*x+{}'.format(p1_2,p0_2))
mplot.xlabel("dimensionality")
mplot.ylabel("Absolute Error in Integral value")
#mplot.legend(prop={'size':12})
mplot.show()
Explanation: This is a plot of my calculated values of the integral in blue and the actual values in red
End of explanation
HyperS_1 = MakeHyperSphereOfRadiusR(1)
domainD = []
dimensionList = []
integralList = []
for i in range(0,10):
domainD.append([-2,2])
dimensionList.append(i + 1)  # domainD now has i+1 dimensions
integralList.append(mc_integrate_multid(HyperS_1, domainD, N = 10000000))
volList = []
for i in range(0,10):
volList.append(volume_hypersphere(ndim=i+1))
mplot.clf()
plot1_3 = mplot.plot(dimensionList, integralList, 'bo', label='')
plot2_2 = mplot.plot(dimensionList, volList, 'ro', label='')
mplot.xlabel("dimensionality")
mplot.ylabel("Integral value")
#mplot.legend(prop={'size':12})
mplot.show()
Explanation: This is a plot of the error of my calculated values, their deviation from the actual values
Let's try domains from -2 to 2 to make sure the domain has little effect on the result. It increased the error (this makes sense, since the random points are less likely to fall inside the hypersphere), so I am trying 10 million points as opposed to 1 million.
End of explanation
def ic_integrate(f, domain_x, N = 10000):
Mean value Monte Carlo integration: to be completed
Using w(x)=x**(-0.5)
import numpy.random
def w(x):
return x**(-0.5)
a = domain_x[0]
b = domain_x[1]
lower_limit = (a**0.5)
upper_limit = (b**0.5)
domain_yrange = upper_limit-lower_limit
yArray = numpy.random.random(N) * domain_yrange + lower_limit #y = uniformly distributed random numbers in [sqrt(a), sqrt(b)]
xArray = yArray**(2) #x = random numbers distributed with the probability distribution p(x)=x**(-0.5)/2
#print(xArray)
funcOverWeightingArray = f(xArray)/w(xArray)
sum_ = sum(funcOverWeightingArray)
#print(sum_, N, domain_x[0], domain_x[1])
I = (sum_/N)*2*(b**0.5-a**0.5)
return I
def f_fermi(x):
return x**(-0.5)/(numpy.exp(x)+1)
ic_integrate(f_fermi, [0,1], N=1000000)
Explanation: This is a plot of my calculated values of the integral in blue and the actual values in red. This plot, however, uses the domain -2 to 2 in each dimension but is calculated for the same unit hypersphere; the values diverge at higher dimensionality. Why?
As dimensionality increases, the effective hypervolume of the domain grows by a factor of $2^{dimensionality}$. This means that the probability of a random point being inside the hypersphere is proportional to $2^{-dimensionality}$, so as dimensionality increases the points are much less likely to fall inside the hypersphere, and you need a larger N (number of random points) for the answer to converge.
Now let us repeat this across multiple dimensions.
The errors clearly vary over a range, but the convergence remains roughly as $N^{-1/2}$ independent of the dimension; using other techniques such as Gauss quadrature would see the points required scaling geometrically with the dimension.
Importance sampling
Consider the integral (which arises, for example, in the theory of Fermi gases)
$$
\begin{equation}
I = \int_0^1 \frac{x^{-1/2}}{e^x + 1} \, dx.
\end{equation}
$$
This has a finite value, but the integrand diverges as $x \to 0$. This may cause a problem for Monte Carlo integration when a single value may give a spuriously large contribution to the sum.
We can get around this by changing the points at which the integrand is sampled. Choose a weighting function $w(x)$. Then a weighted average of any function $g(x)$ can be
$$
\begin{equation}
<g>_w = \frac{\int_a^b w(x) g(x) \, dx}{\int_a^b w(x) \, dx}.
\end{equation}
$$
As our integral is
$$
\begin{equation}
I = \int_a^b f(x) \, dx
\end{equation}
$$
we can, by setting $g(x) = f(x) / w(x)$ get
$$
\begin{equation}
I = \int_a^b f(x) \, dx = \left< \frac{f(x)}{w(x)} \right>_w \int_a^b w(x) \, dx.
\end{equation}
$$
This gives
$$
\begin{equation}
I \simeq \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{w(x_i)} \int_a^b w(x) \, dx,
\end{equation}
$$
where the points $x_i$ are now chosen from a non-uniform probability distribution with pdf
$$
\begin{equation}
p(x) = \frac{w(x)}{\int_a^b w(x) \, dx}.
\end{equation}
$$
This is a generalization of the mean value method - we clearly recover the mean value method when the weighting function $w(x) \equiv 1$. A careful choice of the weighting function can mitigate problematic regions of the integrand; e.g., in the example above we could choose $w(x) = x^{-1/2}$, giving $p(x) = x^{-1/2}/2$.
So, let's try to solve the integral above. The expected solution is around 0.84.
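A minimal sketch of this importance-sampled estimate (the seed and sample count are illustrative): with $w(x) = x^{-1/2}$ the pdf is $p(x) = x^{-1/2}/2$, whose CDF is $\sqrt{x}$, so inverse-transform sampling reduces to squaring uniform draws.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Inverse-transform sampling from p(x) = x^{-1/2}/2 on (0, 1]: CDF(x) = sqrt(x), so x = u^2.
u = rng.random(N)
x = u**2

# f(x)/w(x) = 1/(e^x + 1), and the weight integral is int_0^1 x^(-1/2) dx = 2.
I = 2.0 * np.mean(1.0 / (np.exp(x) + 1.0))
print(I)  # ~0.84
```

Note that $f(x)/w(x) = 1/(e^x+1)$ is bounded, so no single sample can blow up the way the raw integrand does near $x = 0$.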
End of explanation |
9,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Load NumPy data using tf.data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Load from a .npz file
Step3: Load NumPy arrays using tf.data.Dataset
Suppose we have an array of examples and a corresponding array of labels. Pass these two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
Step4: Using the dataset
Shuffle and batch the dataset
Step5: Build and train the model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
Explanation: Load NumPy data using tf.data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/numpy"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/numpy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/numpy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/load_data/numpy.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset.
This example loads the MNIST dataset from a .npz file, but where the NumPy arrays come from does not matter.
Setup
End of explanation
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
Explanation: Load from a .npz file
End of explanation
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
Explanation: Load NumPy arrays using tf.data.Dataset
Suppose we have an array of examples and a corresponding array of labels. Pass these two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
End of explanation
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
Explanation: Using the dataset
Shuffle and batch the dataset
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
model.evaluate(test_dataset)
Explanation: Build and train the model
End of explanation |
9,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BBC Text Dataset
Here we will build a classification model for the following dataset.
https
Step1: Class Imbalance Check
Step2: Let's use the Google Developer Guide to figure out which kind of model to use (https
Step3: This is a pretty low number, so according to the recommendation we should use an n-gram model.
Step4: Split Data for Train and Test
Next we split the data for train and test.
Step5: Build Scikit Learn Model
We will combine a CountVectorizer and a RandomForestClassifier in a pipeline to build a simple text classification model. We will use 1 and 2-grams and also use the binary mode (presence / absence of number of tokens).
Step6: This looks pretty good, so we will evaluate the model using the test set and produce a classification report. | Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use('ggplot')
bbc_df = pd.read_csv(r"https://storage.googleapis.com/dataset-uploader/bbc/bbc-text.csv")
bbc_df.head()
Explanation: BBC Text Dataset
Here we will build a classification model for the following dataset.
https://storage.googleapis.com/dataset-uploader/bbc/bbc-text.csv
End of explanation
counts = bbc_df.category.value_counts()
counts.plot(kind='bar', rot=30)
bbc_df.shape
Explanation: Class Imbalance Check
End of explanation
from nltk.tokenize import word_tokenize
word_counts = bbc_df.text.apply(lambda v: len(word_tokenize(v)))
word_counts.median()
bbc_df.shape[0] / word_counts.median()
Explanation: Let's use the Google Developer Guide to figure out which kind of model to use (https://developers.google.com/machine-learning/guides/text-classification). We have 2225 samples, so let's
calculate the median number of words per sample.
End of explanation
bbc_df.head()
Explanation: This is a pretty low number, so according to the recommendation we should use an n-gram model.
End of explanation
categories = bbc_df.category.astype('category')
bbc_df.loc[:, 'category'] = categories.cat.codes
train_df = bbc_df.groupby('category', as_index=False).sample(frac=0.7)
train_df.shape
train_df.category.value_counts()
test_df = bbc_df.loc[bbc_df.index.difference(train_df.index), :]
test_df.category.value_counts()
Explanation: Split Data for Train and Test
Next we split the data for train and test.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
model = Pipeline([
('count', CountVectorizer(ngram_range=(1, 2), max_features=5000, binary=True)),
('rf', RandomForestClassifier(n_estimators=90, max_depth=4))
])
cv = StratifiedKFold(n_splits=5, random_state=12345, shuffle=True)
scores = cross_val_score(model, train_df.text.values, train_df.category.values, cv=cv)
scores = pd.Series(scores)
_ = scores.plot(kind='bar', rot=90)
scores.mean()
Explanation: Build Scikit Learn Model
We will combine a CountVectorizer and a RandomForestClassifier in a pipeline to build a simple text classification model. We will use 1 and 2-grams and also use the binary mode (presence / absence of number of tokens).
End of explanation
model = Pipeline([
('count', CountVectorizer(ngram_range=(1, 2), max_features=5000, binary=True)),
('rf', RandomForestClassifier(n_estimators=90, max_depth=4))
])
model = model.fit(train_df.text.values, train_df.category.values)
predictions = model.predict(test_df.text.values)
from sklearn.metrics import classification_report
print(classification_report(predictions, test_df.category))
Explanation: This looks pretty good, so we will evaluate the model using the test set and produce a classification report.
End of explanation |
9,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing Spectra
Date
Step1: Intervals
Step2: Integration
To integrate the third-octave band signals over the intervals, the following two functions are defined in functions
Step3: Example
Import data and signals
Step4: Select a pass-by and assemble the data
The following quantities are added for the sections Q1 and Q4
Step5: Plotting the microphone signals
Step6: Remark
Step7: Graphical representation of the spectra for different evaluation intervals
LEQ spectra and levels | Python Code:
%reset -f
%matplotlib notebook
%load_ext autoreload
%autoreload 1
%aimport functions
import numpy as np
import copy
import acoustics
from functions import *
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
mpl.rcParams['lines.linewidth']=0.5
# uncomment next line to connect a qtconsole to the same session
# %qtconsole
Explanation: Computing Spectra
Date: October 2015
Author: ESR
This section discusses the computation of the spectra.
The spectra are computed as follows:
Decompose the microphone signal with a third-octave filter bank (from 100 Hz upwards; see the remark about near-field effects)
Integrate/average the filter-bank output over a defined interval (leq/sel)
For the sections Q1 and Q4 the pass-by time of each bogie is known (see auswertungLS), so different integration intervals (e.g. per axle) can be defined. This makes it possible:
to increase the number of spectra.
to investigate more precisely the influence of clipping, which occurs at the beginning of the pass-by, on the spectra.
Required modules
End of explanation
def defIntervals(tp):
Intervals = {
'full': (-np.inf,np.inf)
#'vorbei': (tp.min(),tp.max()),
}
t = tp.reshape(len(tp)//4,4).mean(1)
for n, (t1,t2) in enumerate(zip(t[:-1],t[1:])):
Intervals[int(n+1)] = (t1,t2)
return Intervals
Explanation: Intervals: defining the partitioning of the train
full: the whole signal
n: from the middle of the n-th coach to the middle of the (n+1)-th coach
The next function implements the intervals from the pass-by times of each bogie
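As a small self-contained illustration (the eight axle times below are synthetic, not measured data), the grouping performed by defIntervals reduces to:

```python
import numpy as np

# Eight synthetic wheel pass-by times: two groups of four axles each.
tp = np.array([0.0, 0.2, 1.8, 2.0, 3.0, 3.2, 4.8, 5.0])

# Coach centres: the mean over each group of four axle times, as in defIntervals.
t = tp.reshape(len(tp) // 4, 4).mean(1)
intervals = {'full': (-np.inf, np.inf)}
for n, (t1, t2) in enumerate(zip(t[:-1], t[1:])):
    intervals[n + 1] = (t1, t2)
print(intervals)  # {'full': (-inf, inf), 1: (1.0, 4.0)}
```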
End of explanation
%psource cut_third_oct_spectrum
%psource level_from_octBank
Explanation: Integration
To integrate the third-octave band signals over the intervals, the following two functions are defined in functions
End of explanation
%%capture c
import json
passby = json.load(open('Tabellen\passby.json','r+'))
fill_passby_with_signals(passby)
Explanation: Example
Import data and signals
End of explanation
passbyID = '5'
pb = copy.deepcopy({k:passby[passbyID][k] for k in ['Q1','Q4']})
#
for k, v in pb.items():
v['signals']['bandpass'] = v['signals']['MIC'].bandpass(20,20000)
v['signals']['A'] = v['signals']['MIC'].weigh()
v['tPeaks'] = detect_weel_times(v['signals']['LS'])
v.update( {k:v for k,v in zip(['vAv', 'dv', 'ti_vi'], train_speed(v['tPeaks'], axleDistance=2))} )
# intervals
v['intervals'] = defIntervals(v['tPeaks'])
Explanation: Select a pass-by and assemble the data
The following quantities are added for the sections Q1 and Q4:
compute the bandpass and A-weighted signals
compute tPeaks
compute the speeds
fill passby with the intervals
End of explanation
f, ax = plt.subplots(nrows=2,sharey=True)
ax2 = []
for n,(k,v) in enumerate(pb.items()):
sn = v['signals']
axis = ax[n]
ax2.append(axis.twinx())
ax2[n].grid(False)
sn['A'].plot(ax = ax2[n], label = 'A', title='', alpha = 0.6, lw = 0.5 )
sn['A'].plot_levels(ax = axis,color = 'grey', label = 'LAF' ,lw = 2)
for tb in v['tPeaks']:
axis.axvline(tb, color = 'red',alpha = 0.8 )
for k,(t1,t2) in v['intervals'].items():
if isinstance(k,int):
axis.axvline(t1, color = 'blue', alpha = 1 )
axis.axvline(t2, color = 'blue', alpha = 1 )
axis.set_xbound(v['tPeaks'].min()-1,v['tPeaks'].max()+1)
axis.set_title('Abschnitt {}'.format(k))
axis.legend()
ax2[n].set_ybound(30,-30)
ax[0].set_xlabel('')
Explanation: Plotting the microphone signals
End of explanation
%%time
Bands = acoustics.signal.OctaveBand(fstart = 100,fstop=20000, fraction=3)
for absch ,v in pb.items():
# calc Octave
sn = v['signals']['bandpass']
f , octFilterBank = sn.third_octaves(frequencies = Bands.nominal)
# sel
spektrum, sel = cut_third_oct_spectrum(octFilterBank, v['intervals'], lType='sel')
v.setdefault('spektrum_sel',{}).update(spektrum)
v.setdefault('sel',{}).update(sel)
v['spektrum_sel']['f'] = f.nominal
# leq
spektrum, leq = cut_third_oct_spectrum( octFilterBank, v['intervals'], lType= 'leq')
v.setdefault('spektrum',{}).update(spektrum)
v.setdefault('leq',{}).update(leq)
v['spektrum']['f'] = f.nominal
Explanation: Remark: the time axes of the signals are not synchronized
Compute the spectra for the different intervals
fill passby with SEL spectra for the defined intervals
fill passby with leq spectra for the defined intervals
End of explanation
#leq
hexcol = ['#332288', '#88CCEE', '#44AA99', '#117733', '#999933', '#DDCC77',
'#CC6677', '#882255', '#AA4499', '#661100', '#6699CC', '#AA4466',
'#4477AA']
f, axes= plt.subplots(ncols = 2, sharey = True)
f.suptitle('Spektrum leq')
for n,(a,v) in enumerate(pb.items()):
ax = axes[n]
ax.set_xscale('log')
spektrum = v['spektrum']
level = v['leq']
for name in list(v['intervals'].keys()):
if name == 'full':
opt = {'ls':':', 'color':'b','lw' : 2 ,'alpha' : 0.8, 'label': str(name)}
l, = ax.plot(spektrum['f'], spektrum[name] , **opt )
ax.axhline(y = level[name], xmin = 100 , xmax = 2000, color = l.get_color(), lw= 1.1, alpha = 1 )
elif type(name)==int:
opt= {'ls':'-', 'color' : hexcol[int(name)],'lw' : 1.5,'label': 'int. {}'.format(name)}
l, = ax.plot(spektrum['f'], spektrum[name] , **opt )
ax.axhline(y = level[name], xmin = 0 , xmax = 0.1, color = l.get_color(), lw= 1.1, alpha = 1 )
ax.set_xbound(100,10000)
ax.set_xlabel('f Hz')
ax.set_title('Abschnitt {}'.format(a))
axes[0].set_ybound(65,105)
axes[1].legend(loc='upper center', bbox_to_anchor=(1.19, 1),
ncol=1, fancybox=True, shadow=True)
axes[0].set_ylabel('leq dB')
Explanation: Graphical representation of the spectra for different evaluation intervals
LEQ spectra and levels
End of explanation |
9,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
keras_lesson1.ipynb -- CodeAlong of fastai/courses/dl1/keras_lesson1.ipynb
Wayne H Nixalo
Using TensorFlow backend
# pip install tensorflow-gpu keras
Introduction to our first task
Step1: Data Augmentation parameters copied from fastai --> copied from Keras Docs
Instead of creating a single data object, in Keras we have to define a data generator to specify how to generate the data. We have to tell it what kind of data augmentation, and what kind of normalization.
Generally, copy-pasting Keras code from the internet works.
Keras uses the same directory structure as FastAI.
2 possible outcomes
Step2: In Keras you have to manually specify the base model and construct on top the layers you want to add.
Step3: Specify the model. There's no concept of automatically freezing layers in Keras, so you have to loop through the layers you want to freeze and call .trainable=False
Keras also has a concept of compiling a model, which DNE in FastAI / PyTorch
Step4: Keras expects to be told how many batches there are per epoch. num_batches = size of generator / batch_size
Keras also defaults to zero workers. For good speed
Step5: There isn't a concept of differential learning rates or layer groups in Keras or partial unfreezing, so you'll have to decide manually. In this case | Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
PATH = "data/dogscats/"
sz = 224
batch_size=64
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.layers import Dropout, Flatten, Dense
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
train_data_dir = f'{PATH}train'
valid_data_dir = f'{PATH}valid'
Explanation: keras_lesson1.ipynb -- CodeAlong of fastai/courses/dl1/keras_lesson1.ipynb
Wayne H Nixalo
Using TensorFlow backend
# pip install tensorflow-gpu keras
Introduction to our first task: 'Dogs vs Cats'
End of explanation
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_data_dir,
target_size=(sz,sz),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(valid_data_dir,
shuffle=False,
target_size=(sz,sz),
batch_size=batch_size,
class_mode='binary')
Explanation: Data Augmentation parameters copied from fastai --> copied from Keras Docs
Instead of creating a single data object, in Keras we have to define a data generator to specify how to generate the data. We have to tell it what kind of data augmentation, and what kind of normalization.
Generally, copy-pasting Keras code from the internet works.
Keras uses the same directory structure as FastAI.
2 possible outcomes: class_mode='binary'. Multiple: 'categorical'
In Keras you have to specify a data generator without augmentation for the testing set.
Important to NOT shuffle the validation set, or else accuracy tracking can't be done.
End of explanation
from keras.applications.resnet50 import ResNet50
base_model = ResNet50(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
Explanation: In Keras you have to manually specify the base model and construct on top the layers you want to add.
End of explanation
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers: layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
Explanation: Specify the model. There's no concept of automatically freezing layers in Keras, so you have to loop through the layers you want to freeze and call .trainable=False
Keras also has a concept of compiling a model, which DNE in FastAI / PyTorch
End of explanation
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=3, workers=4,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
Explanation: Keras expects to be told how many batches there are per epoch. num_batches = size of generator / batch_size
Keras also defaults to zero workers. For good speed: include num workers.
End of explanation
split_at = 140
for layer in model.layers[:split_at]: layer.trainable = False
for layer in model.layers[split_at:]: layer.trainable = True
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=1,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
Explanation: There isn't a concept of differential learning rates or layer groups in Keras or partial unfreezing, so you'll have to decide manually. In this case: printing out to take a look, and starting from layer 140 onwards. You'll have to recompile the model after this.
End of explanation |
9,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep learning on video titles
In this notebook we do some deep learning using the titles and the numbers of subscribers of the videos.
Step1: Database selection
We can choose which database we want to do our learning on. To test our neural network we created 3 databases, one on the theme "animals", another on "cars", and one with random videos. We want to see if we get different results depending on the dataset. More databases can easily be created by using the notebook "create_videos_database".
Step2: Train_data creation
For our train_data set we use the Bag of Words and Term Frequency-Inverse Document Frequency (TF-IDF) method. It is usually used in sentiment analysis but this method should give interesting results in our case because we think that some particular words in a video title may attract more viewers.
We also append the normalized number of subscribers to each vector.
Step3: The labels
Each of our labels corresponds to a range of views. We have 8 labels
Step4: Test set extraction
We randomly extract 100 items from our data set to construct our test set.
Step5: Neural Network Classifier
We tried different networks, with 2, 3 or even 4 layers, fully connected or not, and different activations. In the end the classic 2-layer network with ReLU activation works just as well as the others, or better.
$$
y=\textrm{softmax}(\textrm{ReLU}(xW_1+b_1)W_2+b_2)
$$ | Python Code:
import requests
import json
import pandas as pd
from math import *
import numpy as np
import tensorflow as tf
import time
import collections
import os
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from IPython.display import display
from random import randint
Explanation: Deep learning on video titles
In this notebook we do some deep learning using the titles and the numbers of subscribers of the videos.
End of explanation
folder = os.path.join('sql_database_animaux')
#folder = os.path.join('sql_database_voitures')
#folder = os.path.join('sql_database_random')
videos_database = pd.read_sql('videos', 'sqlite:///' + os.path.join(folder, 'videos.sqlite'), index_col='index')
videos_database = videos_database.drop_duplicates('id')
videos_database = videos_database.reset_index(drop=True)
display(videos_database)
print("Length of the video database :",len(videos_database))
Explanation: Database selection
We can choose which database we want to do our learning on. To test our neural network we created 3 databases, one on the theme "animals", another on "cars", and one with random videos. More databases can easily be created by using the notebook "create_videos_database".
End of explanation
#maximal number of words to extract, it is also the maximal size of our vectors
#we played a little with this value and 2000 seems to give good results
nwords = 2000
#the stopwords are the words such as "the" or "is" that are everywhere and does not give any information
#we don't want those words in our vocabulary
#we get them from the file "stopwords.txt" found on the internet
stopwords = [line.rstrip('\n') for line in open('stopwords.txt')]
#print('stopwords:',stopwords)
def compute_bag_of_words(text, nwords):
vectorizer = CountVectorizer(max_features=nwords)
vectors = vectorizer.fit_transform(text)
vocabulary = vectorizer.get_feature_names()
return vectors, vocabulary
#we concatenate the titles to extract the words from them
concatenated_titles=[]
for titles in videos_database['title']:
concatenated_titles += [' ', titles]
#create a vocabulary from the titles
title_bow, titles_vocab = compute_bag_of_words(concatenated_titles, nwords)
del concatenated_titles
titles_list = videos_database['title'].tolist()
#we apply the TF-IDF method to the titles
vect = TfidfVectorizer(sublinear_tf=True, max_df=0.5, analyzer='word', stop_words=stopwords, vocabulary=titles_vocab)
vect.fit(titles_list)
#create a sparse TF-IDF matrix
titles_tfidf = vect.transform(titles_list)
del titles_list
train_data = titles_tfidf.todense()
print(train_data.shape)
def print_most_frequent(bow, vocab, n=100):
idx = np.argsort(bow.sum(axis=0))
for i in range(n):
j = idx[0, -1 - i]
print(vocab[j],': ',bow.sum(axis=0)[0,j])
print('most used words:')
print_most_frequent(title_bow,titles_vocab)
#print(len(title_vocab))
#print(train_data.shape)
#add the sub count to data_train
subsCountTemp = videos_database['subsCount'].tolist()
maxSubs = max(subsCountTemp)
print(max(subsCountTemp))
#divide all the subs count by the maximal number of subs.
#it is to have values in the range of the values created by the tf-idf algorithm
subsCount = []
for x in subsCountTemp:
subsCount.append(x/maxSubs)
del subsCountTemp
#add the subs to our train_data
subsCount = np.asarray(subsCount)
subsCount = np.reshape(subsCount, [len(subsCount),1]);
train_data = np.append(train_data, np.array(subsCount), 1)
del subsCount
print(train_data.shape)
Explanation: Train_data creation
For our train_data set we use the Bag of Words and Term Frequency-Inverse Document Frequency (TF-IDF) method. It is usually used in sentiment analysis but this method should give interesting results in our case because we think that some particular words in a video title may attract more viewers.
We also append the normalized number of subscribers to each vector.
End of explanation
nbr_labels = 8
nbr_video = len(videos_database['title'])
train_labels = np.zeros([train_data.shape[0],nbr_labels])
for i in range(nbr_video):
views = int(videos_database['viewCount'][i])
if views < 99:
train_labels[i] = [1,0,0,0,0,0,0,0]
elif views < 999:
train_labels[i] = [0,1,0,0,0,0,0,0]
elif views < 9999:
train_labels[i] = [0,0,1,0,0,0,0,0]
elif views < 99999:
train_labels[i] = [0,0,0,1,0,0,0,0]
elif views < 999999:
train_labels[i] = [0,0,0,0,1,0,0,0]
elif views < 9999999:
train_labels[i] = [0,0,0,0,0,1,0,0]
elif views < 99999999:
train_labels[i] = [0,0,0,0,0,0,1,0]
else:
train_labels[i] = [0,0,0,0,0,0,0,1]
print('train_labels shape :', train_labels.shape)
Explanation: The labels
Each of our labels corresponds to a range of views. We have 8 labels:
+ 0 to 99 views
+ 100 to 999 views
+ 1'000 to 9'999 views
+ 10'000 to 99'999 views
+ 100'000 to 999'999 views
+ 1'000'000 to 9'999'999 views
+ 10'000'000 to 99'999'999 views
+ more than 99'999'999 views
End of explanation
testset = 100
test_data = np.zeros([testset,train_data.shape[1]])
test_labels = np.zeros([testset,nbr_labels])
for i in range(len(test_data)):
x = randint(0, len(train_data) - 1)
test_data[i] = train_data[x]
test_labels[i] = train_labels[x]
train_data=np.delete(train_data,x,axis=0)
train_labels=np.delete(train_labels,x,axis=0)
print('train data shape ', train_data.shape)
print('train labels shape', train_labels.shape)
print('test data shape  ', test_data.shape)
print('test labels shape', test_labels.shape)
Explanation: Test set extraction
We randomly extract 100 items from our data set to construct our test set.
End of explanation
# Define computational graph (CG)
batch_size = testset # batch size
d = train_data.shape[1] # data dimensionality
nc = nbr_labels # number of classes
# CG inputs
xin = tf.placeholder(tf.float32,[batch_size,d]);
y_label = tf.placeholder(tf.float32,[batch_size,nc]);
# 1st Fully Connected layer
nfc1 = 300
Wfc1 = tf.Variable(tf.truncated_normal([d,nfc1], stddev=tf.sqrt(5./tf.to_float(d+nfc1)) ));
bfc1 = tf.Variable(tf.zeros([nfc1]));
y = tf.matmul(xin, Wfc1);
y += bfc1;
# ReLU activation
y = tf.nn.relu(y)
# dropout
y = tf.nn.dropout(y, 0.25)
# 2nd layer
nfc2 = nc
#nfc2 = 100
Wfc2 = tf.Variable(tf.truncated_normal([nfc1,nfc2], stddev=tf.sqrt(5./tf.to_float(nfc1+nc)) ));
bfc2 = tf.Variable(tf.zeros([nfc2]));
y = tf.matmul(y, Wfc2);
y += bfc2;
#y = tf.nn.relu(y)
# 3rd layer
#nfc3 = nc
#Wfc3 = tf.Variable(tf.truncated_normal([nfc2,nfc3], stddev=tf.sqrt(5./tf.to_float(nfc1+nc)) ));
#bfc3 = tf.Variable(tf.zeros([nfc3]));
#y = tf.matmul(y, Wfc3);
#y += bfc3;
# Softmax
y = tf.nn.softmax(y);
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
# L2 Regularization
reg_loss = tf.nn.l2_loss(Wfc1)
reg_loss += tf.nn.l2_loss(bfc1)
reg_loss += tf.nn.l2_loss(Wfc2)
reg_loss += tf.nn.l2_loss(bfc2)
reg_par = 4*1e-3
total_loss = cross_entropy + reg_par*reg_loss
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(10001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
# Run CG for variable training
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict={xin: test_data, y_label: test_labels})
print('test accuracy=',acc_test)
Explanation: Neural Network Classifier
We tried different networks, with 2, 3 or even 4 layers, fully connected or not, and different activations. In the end the classic 2-layer network with ReLU activation works just as well as the others, or better.
$$
y=\textrm{softmax}(\textrm{ReLU}(xW_1+b_1)W_2+b_2)
$$
End of explanation |
9,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Download
This notebook downloads all FITS data.
The list of files is in the file ondrejov-labeled-spectra.csv.
These spectra have been classified with the
Spectral View tool.
Step1: Read CSV with Labels
Step2: Simple Spectral Access protocol
This is not very relevant now, since only datalink is used
to download normalized spectra.
SSAP (SSA) defines a uniform interface to remotely discover
and access one-dimensional spectra. Spectral data access
may involve active transformation of data. SSA also
defines complete metadata to describe the available
datasets. It makes use of VOTable for metadata exchange.
Architecture
A query is used for data discovery and to negotiate the
details of the static or dynamically created dataset
to be retrieved. SSA allows mediating not only dataset
metadata but the actual dataset itself. Direct access to
data is also provided.
A single service may support multiple operations to perform
various functions. The current interface uses HTTP GET
requests to submit parametrized requests, with responses
being returned as, for example, FITS or VOTable. Defined
operations are the following
Step3: Datalink
Datalink is a service for working with spectra.
For information about the one which is used here see http
Step4: Show fluxcalib Parameters
To show how to work with datalink
and what it offers.
From this it is obvious that the 'normalized' setting is the desired one.
Step5: FITS Download | Python Code:
%matplotlib inline
import urllib.request
import urllib.parse
import io
import os
import csv
import glob
from functools import partial
from itertools import count
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
LABELS_FILE = 'data/ondrejov-dataset.csv'
!head $LABELS_FILE
Explanation: Data Download
This notebook downloads all FITS data.
The list of files is in the file ondrejov-labeled-spectra.csv.
These spectra have been classified with the
Spectral View tool.
End of explanation
with open(LABELS_FILE, newline='') as f:
reader = csv.DictReader(f)
# each public id is unique and the set operation will be useful later
spectra_idents = set(map(lambda x: x['id'], reader))
len(spectra_idents)
Explanation: Read CSV with Labels
End of explanation
def request_url(url):
'''Make HTTP request and return response data.'''
try:
with urllib.request.urlopen(url) as response:
data = response.read()
except Exception as e:
print(e)
return None
return data
Explanation: Simple Spectral Access protocol
This is not very relevant now, since only datalink is used
to download normalized spectra.
SSAP (SSA) defines a uniform interface to remotely discover
and access one-dimensional spectra. Spectral data access
may involve active transformation of data. SSA also
defines complete metadata to describe the available
datasets. It makes use of VOTable for metadata exchange.
Architecture
A query is used for data discovery and to negotiate the
details of the static or dynamically created dataset
to be retrieved. SSA allows mediating not only dataset
metadata but the actual dataset itself. Direct access to
data is also provided.
A single service may support multiple operations to perform
various functions. The current interface uses HTTP GET
requests to submit parametrized requests, with responses
being returned as, for example, FITS or VOTable. Defined
operations are the following:
A queryData operation return a VOTable describing
candidate datasets.
A getData operation is used to access an individual
dataset.
End of explanation
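As a rough sketch of the parametrized HTTP GET interface described above, the snippet below builds a hypothetical queryData URL. The endpoint and parameter values are illustrative assumptions, not a real service:

```python
import urllib.parse

def make_ssa_query_url(base_url, pos=None, size=None, band=None,
                       response_format='votable'):
    """Build an SSA-style queryData URL; optional parameters are omitted."""
    params = {'REQUEST': 'queryData', 'FORMAT': response_format}
    if pos is not None:
        params['POS'] = '{},{}'.format(*pos)    # RA,Dec in degrees
    if size is not None:
        params['SIZE'] = str(size)              # search radius in degrees
    if band is not None:
        params['BAND'] = '{} {}'.format(*band)  # wavelength range in metres
    return base_url + '?' + urllib.parse.urlencode(params)

# Hypothetical endpoint, for illustration only.
url = make_ssa_query_url('http://example.org/ssa', pos=(180.0, -30.0),
                         size=0.1, band=(6500e-10, 6600e-10))
print(url)
```

The make_datalink_url function in the next cell follows the same pattern of parametrized GET requests for dataset access.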
datalink_service = 'http://voarchive.asu.cas.cz/ccd700/q/sdl/dlget'
def make_datalink_url(
pub_id, fluxcalib=None, wave_min=None, wave_max=None,
file_format='application/fits', url=datalink_service
):
url_parameters = {'ID': pub_id}
if fluxcalib:
url_parameters['FLUXCALIB'] = fluxcalib
if wave_min and wave_max:
url_parameters['BAND'] = str(wave_min) + ' ' + str(wave_max)
if file_format:
url_parameters['FORMAT'] = file_format
return url + '?' + urllib.parse.urlencode(url_parameters)
make_datalink_url(
'ivo://asu.cas.cz/stel/ccd700/sh270028',
fluxcalib='normalized',
wave_min=6500e-10, wave_max=6600e-10
)
Explanation: Datalink
Datalink is a service for working with spectra.
For information about the one which is used here see http://voarchive.asu.cas.cz/ccd700/q/sdl/info.
End of explanation
def plot_fluxcalib(fluxcalib, ax):
# create the datalink service URL
datalink_url = make_datalink_url('ivo://asu.cas.cz/stel/ccd700/sh270028', fluxcalib=fluxcalib)
# download the data
fits_data = request_url(datalink_url)
# open the data as file
hdulist = fits.open(io.BytesIO(fits_data))
# plot it
ax.set_title('fluxcalib is ' + str(fluxcalib))
ax.plot(hdulist[1].data['spectral'], hdulist[1].data['flux'])
fluxcalibs = [None, 'normalized', 'relative', 'UNCALIBRATED']
fig, axs = plt.subplots(4, 1)
for fluxcalib, ax in zip(fluxcalibs, axs):
plot_fluxcalib(fluxcalib, ax)
fig.tight_layout()
Explanation: Show fluxcalib Parameters
This shows how to work with datalink
and what it offers.
From this it is obvious that the 'normalized' setting is the desired one.
End of explanation
def download_spectrum(pub_id, n, directory, fluxcalib, minimum=None, maximum=None):
# get the name from public id
name = pub_id.split('/')[-1]
# directory HAS TO end with '/'
path = directory + name + '.fits'
url = make_datalink_url(pub_id, fluxcalib, minimum, maximum)
print('{:5} downloading {}'.format(n, name))
data = request_url(url)
if data is None:
# request_url has already printed the error; report which spectrum failed
return name
with open(path, 'wb') as f:
f.write(data)
FITS_DIR = 'data/ondrejov/'
%mkdir $FITS_DIR 2> /dev/null
ondrejov_downloader = partial(
download_spectrum,
directory=FITS_DIR,
fluxcalib='normalized'
)
ccd700_prefix = 'ivo://asu.cas.cz/stel/ccd700/'
def get_pub_id(path, prefix=ccd700_prefix):
return prefix + os.path.splitext(os.path.split(path)[-1])[0]
get_pub_id('ssap/uh260033.fits')
spectra_idents -= set(map(get_pub_id, glob.glob(FITS_DIR + '*.fits')))
if len(spectra_idents) != 0:
download_info = list(map(ondrejov_downloader, spectra_idents, count(start=1)))
print('All spectra downloaded.')
Explanation: FITS Download
End of explanation |
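The functools.partial / itertools.count / map pattern used by the downloader above can be shown in isolation with a toy function (the function and its arguments are made up for illustration):

```python
from functools import partial
from itertools import count

def process(item, n, prefix):
    # n is the running counter supplied by itertools.count
    return '{} {:3} {}'.format(prefix, n, item)

# Freeze the keyword argument, just as ondrejov_downloader freezes
# the target directory and fluxcalib setting.
labelled = partial(process, prefix='item')

results = list(map(labelled, ['a', 'b', 'c'], count(start=1)))
# map stops when the shortest iterable (the list) is exhausted
print(results)
```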
9,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSV Data to MySQL for use in VISDOM
This notebook can be used to construct an 'account' table and a 'meter_data' table in mysql, based on the csv files with data extracted from the prop39schools xml files, and an 'intervention' table based on the PEPS_Data.xlsx file available on the prop39 data site.
The processed csv files can be downloaded as a zip file from here. They should be unzipped locally before running this script. Note that you may also need to pip install xlrd to be able to run pandas.read_excel for the intervention table data.
You must first create a database in mysql with your desired name (e.g. visdom_data_PGE), and create a data_db.cfg file to point to it, as described in the next section below.
The script will likely take about 40 minutes to complete on most modern laptops.
For your database to be ready for use in VISDOM, you will also need to load a local_weather table into your database. To do so you can follow the instructions in the local-weather repo and use the prop39_config.csv file found here, which was prepared using the accompanying notebook. Or, if you want to get to the end point faster, you can download this csv file, which was constructed using that repo, and then simply modify this sql query to point to that csv file and run it. Running the sql script will take about 20 minutes.
Once the database is set up, you can set it up as a DATA_SOURCE in VISDOM with the accompanying prop39_visdom_data_source.R file and test it via the sanitycheck function as follows
Step1: Read your database connection details from a data_db.cfg file with the following format
Step2: Notebook config
Step3: Reading the account table data from the _BILL.csv files
Step4: Creating the account table in the desired format and writing to it
Step5: Quick test to make sure it's working
Step6: Creating the intervention table
The following will read the interventions data from the PEPS_data.xlsx file, add the appropriate account_uuid to each entry (if applicable, Null otherwise) by left-merging with the accounts_df table, then edits column names for a few columns to keep them under the requisite 64 characters, and then writes it to a mysql table.
Step7: Creating the meter_data table in the desired format
Note
Step8: Fill the meter table with data from the csv files
Step9: Quick tests to make sure it was done properly
Step10: Coordinating between tables to make sure they match
Step11: Minor kludge that could be done better
This meter_uuid had only one meter_data day record associated with it for some reason, which would cause errors if allowed to stay in the database. | Python Code:
csv_dir = "PGE_csv"
Explanation: CSV Data to MySQL for use in VISDOM
This notebook can be used to construct an 'account' table and a 'meter_data' table in mysql, based on the csv files with data extracted from the prop39schools xml files, and an 'intervention' table based on the PEPS_Data.xlsx file available on the prop39 data site.
The processed csv files can be downloaded as a zip file from here. They should be unzipped locally before running this script. Note that you may also need to pip install xlrd to be able to run pandas.read_excel for the intervention table data.
You must first create a database in mysql with your desired name (e.g. visdom_data_PGE), and create a data_db.cfg file to point to it, as described in the next section below.
The script will likely take about 40 minutes to complete on most modern laptops.
For your database to be ready for use in VISDOM, you will also need to load a local_weather table into your database. To do so you can follow the instructions in the local-weather repo and use the prop39_config.csv file found here, which was prepared using the accompanying notebook. Or, if you want to get to the end point faster, you can download this csv file, which was constructed using that repo, and then simply modify this sql query to point to that csv file and run it. Running the sql script will take about 20 minutes.
Once the database is set up, you can set it up as a DATA_SOURCE in VISDOM with the accompanying prop39_visdom_data_source.R file and test it via the sanitycheck function as follows:
source("prop39_visdom_data_source.R")
DATA_SOURCE = MyDataSource()
sanityCheckDataSource(DATA_SOURCE)
As described in more detail in the ID_mapping notebook in the same folder as this notebook, an account_uuid generally corresponds with an individual school and may have one or multiple meters associated with it, and may also have multiple interventions (or no interventions) associated with it.
File locations, database config
Point this to the directory with the meter data csv files:
End of explanation
db_pass = ""
with open('data_db.cfg','r') as f:
for line in f:
s = line.split("=")
if s[0].strip() == "user":
db_user = s[1].strip()
if s[0].strip() == "pass":
db_pass = ":" + s[1].strip()
if s[0].strip() == "db":
db_db = s[1].strip()
Explanation: Read your database connection details from a data_db.cfg file with the following format:
dbType=MySQL
user=[database user]
pass=[password (if applicable)]
db=[name of database]
End of explanation
import pandas as pd
import numpy as np
import mysql.connector, os, datetime
from sqlalchemy import create_engine
engine = create_engine('mysql+mysqlconnector://' + db_user + db_pass + '@localhost/' + db_db, echo=False)
conn = engine.connect()
Explanation: Notebook config
End of explanation
usecols = [
'utility',
'customer_name',
'customer_city',
'customer_zip',
'customer_account',
'lea_customer',
'cds_code',
'school_site_name',
'school_city',
'school_site_zip',
'agreement',
'rate_schedule_id'
]
accounts_list = []
for root, dirs, files in os.walk(csv_dir):
for f in files:
if f.endswith('_BILL.csv'):
df = pd.read_csv(os.path.join(root,f), usecols=usecols)
df = df.drop_duplicates()
accounts_list.extend(df.to_dict(orient='records'))
accounts_df = pd.DataFrame.from_records(accounts_list)
accounts_df = accounts_df.drop_duplicates()
print len(accounts_df)
accounts_df.head(3)
accounts_df.columns.tolist()
accounts_df.columns = [
'meter_uuid',
'account_uuid',
'customer_account',
'customer_city',
'customer_name',
'customer_zip',
'lea_customer',
'rate_schedule_id',
'school_city',
'school_site_name',
'school_site_zip',
'utility_name'
]
accounts_df['zip5'] = accounts_df['school_site_zip'].str[:5]
accounts_df.head(3)
reals = accounts_df[['account_uuid']].applymap(np.isreal)
accounts_df = accounts_df[reals['account_uuid']]
len(accounts_df)
accounts_df = accounts_df.drop_duplicates(subset=['meter_uuid'], keep='last')
len(accounts_df)
accounts_df = accounts_df.dropna(subset=['zip5'])
len(accounts_df)
Explanation: Reading the account table data from the _BILL.csv files
End of explanation
create_table_sql = '''
CREATE TABLE `account` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`account_uuid` varchar(20) DEFAULT NULL,
`meter_uuid` varchar(20) DEFAULT NULL,
`zip5` varchar(5) DEFAULT NULL,
`customer_account` varchar(50) DEFAULT NULL,
`customer_city` varchar(50) DEFAULT NULL,
`customer_name` varchar(50) DEFAULT NULL,
`customer_zip` varchar(10) DEFAULT NULL,
`lea_customer` varchar(50) DEFAULT NULL,
`rate_schedule_id` varchar(50) DEFAULT NULL,
`school_city` varchar(50) DEFAULT NULL,
`school_site_name` varchar(100) DEFAULT NULL,
`school_site_zip` varchar(10) DEFAULT NULL,
`utility_name` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `zip5_meter_uuid_idx` (`METER_UUID`,`zip5`),
KEY `account_uuid_idx` (`ACCOUNT_UUID`),
KEY `meter_uuid_idx` (`METER_UUID`)
)
'''
conn.execute('DROP TABLE IF EXISTS `account`;')
conn.execute(create_table_sql)
accounts_df.to_sql(name='account', con=engine, if_exists='append', index=False)
Explanation: Creating the account table in the desired format and writing to it
End of explanation
pd.read_sql('SELECT * FROM account LIMIT 3;', con=engine)
pd.read_sql('SELECT COUNT(*) FROM account;', con=engine)
pd.read_sql('SELECT COUNT(DISTINCT(meter_uuid)) FROM account;', con=engine)
Explanation: Quick test to make sure it's working:
End of explanation
accounts_df['site_pair'] = accounts_df['school_city'] + "_" + accounts_df['school_site_name']
interventions_df = pd.read_excel('PEPS_Data.xlsx', sheetname='Data- Approved EEPs')
interventions_df['site_pair'] = interventions_df['Site City'] + "_" + interventions_df['Site Name']
interventions_df = pd.merge(interventions_df,accounts_df[['site_pair','account_uuid']],how='left',on='site_pair')
del interventions_df['site_pair']
replacement_column_names = [
'Grant Amount Req Based on Single or Multiple Years Allocation',
'Grant Amount Req',
'Were Planning Funds Requested from CA Department of Education',
'Budget for Screening and Energy Audits Over Program Life',
'Budget for Prop 39 Program Assistance Over Program Life',
'Est First Yr Annual Electricity Production of PV Measure',
'Est Total Rebates Plus Oth Non-Repayable Funds for PV Measure',
'Est First Year Elec Prod of PPA Measure Generation System',
'Est PPA Measure Elec Gen as Percent of Baseline Elec Usage'
]
k = 0
for j,i in enumerate(interventions_df.columns):
if len(i) > 64:
interventions_df.columns.values[j] = replacement_column_names[k]
k += 1
conn.execute('DROP TABLE IF EXISTS `intervention`;')
interventions_df.to_sql(name='intervention', con=engine, chunksize=100)
conn.execute('ALTER TABLE intervention MODIFY account_uuid VARCHAR(20);')
Explanation: Creating the intervention table
The following will read the interventions data from the PEPS_data.xlsx file, add the appropriate account_uuid to each entry (if applicable, Null otherwise) by left-merging with the accounts_df table, then edits column names for a few columns to keep them under the requisite 64 characters, and then writes it to a mysql table.
End of explanation
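The left-merge step described above can be illustrated on toy data (the values here are made up; site_pair mirrors the real join key):

```python
import pandas as pd

interventions = pd.DataFrame({
    'site_pair': ['CityA_School1', 'CityB_School2', 'CityC_School3'],
    'grant': [100, 200, 300],
})
accounts = pd.DataFrame({
    'site_pair': ['CityA_School1', 'CityB_School2'],
    'account_uuid': ['u1', 'u2'],
})

# A left merge keeps every intervention row; rows with no matching
# account get NaN (i.e. Null once written to MySQL) for account_uuid.
merged = pd.merge(interventions, accounts, how='left', on='site_pair')
print(merged)
```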
create_table_sql = '''
CREATE TABLE `meter_data` (
`meter_uuid` varchar(20) NOT NULL,
`account_uuid` varchar(20) NOT NULL,
`date` DATE NOT NULL,
`zip5` varchar(5) DEFAULT NULL,
'''
for i in range(1,97):
create_table_sql += "`h" + str(i) + "` int(11) DEFAULT NULL,\n"
create_table_sql += '''
PRIMARY KEY (`meter_uuid`,`date`),
KEY `meter_uuid_idx` (`meter_uuid`),
KEY `account_uuid_idx` (`account_uuid`),
KEY `zip_Date_idx` (`date`,`zip5`),
KEY `zip_idx` (`zip5`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
'''
conn.execute('DROP TABLE IF EXISTS `meter_data`;')
conn.execute(create_table_sql)
Explanation: Creating the meter_data table in the desired format
Note: this is currently treating the meter_uuid, account_uuid, date and zip5 as integers, but they should more likely be treated as varchar, varchar, datetime and varchar, respectively.
End of explanation
usecols = ['agreement', 'start']
for i in range(1,97):
usecols.append('d' + str(i))
colnames = ['meter_uuid', 'date']
for i in range(1,97):
colnames.append('h' + str(i))
for root, dirs, files in os.walk(csv_dir):
for f in files:
if f.endswith('_INTERVAL.csv'):
df = pd.read_csv(os.path.join(root,f), usecols=usecols)
if len(df) > 0:
df.columns = colnames
df = df.drop_duplicates()
for i in range(1,97):
df['h' + str(i)] = df['h' + str(i)] * 1000
df = pd.merge(df,accounts_df[['meter_uuid','account_uuid','zip5']],on='meter_uuid')
df['date'] = pd.to_datetime(df['date'], unit='s')
try:
df.to_sql(name='meter_data', con=engine, if_exists='append', index=False)
except:
print "failed sql insert. meter_uuid:" + str(df['meter_uuid'][0]) + ", filename: " + f.split("_Pacific")[0] + "..."
Explanation: Fill the meter table with data from the csv files
End of explanation
pd.read_sql('SELECT * FROM meter_data LIMIT 3;', con=engine)
pd.read_sql('SELECT COUNT(*) FROM meter_data;', con=engine)
Explanation: Quick tests to make sure it was done properly
End of explanation
conn.execute('DELETE FROM account WHERE meter_uuid NOT IN (SELECT DISTINCT(meter_uuid) FROM meter_data);')
pd.read_sql('SELECT COUNT(*) FROM account;', con=engine)
Explanation: Coordinating between tables to make sure they match
End of explanation
conn.execute("DELETE FROM account WHERE meter_uuid = '1021789005';")
conn.execute("DELETE FROM meter_data WHERE meter_uuid = '1021789005';")
Explanation: Minor kludge that could be done better
This meter_uuid had only one meter_data day record associated with it for some reason, which would cause errors if allowed to stay in the database.
End of explanation |
9,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<hr style="height:3px;border:none;color:#333;background-color:#333;" />
Step1: Defining your placeholders
A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.
Step2: Initializing Variables
After defining our placeholders, we proceed to initialize all the variables and use the tensorflow library to define a feedforward algorithm, a cost function, an optimization algorithm, and an accuracy estimate. At this point none of these operations is executed; they are only defined.
Step3: Creating and Starting your Session
Now that we have defined all of our elements, we can create and start the session that will execute all of the previously declared operations. In this section we will first train our model with data extracted from data.tar.gz, then test the resulting model by feeding it the test data, and finally calculate the accuracy of our model.
# import statements
from __future__ import division
import tensorflow as tf
import numpy as np
import tarfile
import os
import matplotlib.pyplot as plt
import time
# Display plots inline
%matplotlib inline
# import email data
def csv_to_numpy_array(filePath, delimiter):
return np.genfromtxt(filePath, delimiter=delimiter, dtype=None)
def import_data():
if "data" not in os.listdir(os.getcwd()):
# Untar directory of data if we haven't already
tarObject = tarfile.open("/home/gonzalo/tensorflow-tutorial/data.tar.gz")
tarObject.extractall()
tarObject.close()
print("Extracted tar to current directory")
else:
# we've already extracted the files
pass
print("loading training data")
trainX = csv_to_numpy_array("data/trainX.csv", delimiter="\t")
trainY = csv_to_numpy_array("data/trainY.csv", delimiter="\t")
print("loading test data")
testX = csv_to_numpy_array("data/testX.csv", delimiter="\t")
testY = csv_to_numpy_array("data/testY.csv", delimiter="\t")
return trainX,trainY,testX,testY
trainX,trainY,testX,testY = import_data()
# set parameters for training
# features, labels
numFeatures = trainX.shape[1]
numLabels = trainY.shape[1]
Explanation: <hr style="height:3px;border:none;color:#333;background-color:#333;" />
<img style=" float:right; display:inline" src="http://opencloud.utsa.edu/wp-content/themes/utsa-oci/images/logo.png"/>
University of Texas at San Antonio
<br/>
<br/>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 2.5em;"> Open Cloud Institute </span>
<hr style="height:3px;border:none;color:#333;background-color:#333;" />
Email Classification
<br/>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> Paul Rad, Ph.D. </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> Gonzalo De La Torre, Ph.D. Student </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> Open Cloud Institute, University of Texas at San Antonio, San Antonio, Texas, USA </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> gonzalo.delatorreparra@utsa.edu, paul.rad@utsa.edu </span>
<hr style="height:3px;border:none;color:#333;background-color:#333;" />
Email Classification using Machine Learning
Email classification is a common beginner problem from Natural Language Processing (NLP). The idea is simple - given an email you've never seen before, determine whether that email is Spam or Ham.
While classifying an email as spam or non-spam is easy for humans, it's much harder to write a program that can correctly classify an email as Spam or Ham. In the following program, instead of telling the program which words we think are important, we will let the program learn which words are actually important.
To tackle this problem, we start with a collection of sample emails (i.e. a text corpus). In this corpus, each email has already been labeled as Spam or Ham. Since we are making use of these labels in the training phase, this is a supervised learning task. This is called supervised learning because we are (in a sense) supervising the program as it learns what Spam emails look like and what Ham emails look like.
During the training phase, we present these emails and their labels to the program. For each email, the program says whether it thought the email was Spam or Ham. After the program makes a prediction, we tell the program what the label of the email actually was. The program then changes its configuration so as to make a better prediction the next time around. This process is done iteratively until either the program can’t do any better or we get impatient and just tell the program to stop.
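The present-predict-correct loop described above can be sketched as a toy perceptron on made-up word counts. This is only an illustration of the idea, not the TensorFlow model built in this notebook:

```python
import numpy as np

# Toy data: rows are emails, columns are word counts; label 1 = spam.
X = np.array([[3, 0], [4, 1], [0, 2], [1, 3]], dtype=float)
y = np.array([1, 1, 0, 0])

w = np.zeros(2)
b = 0.0
for _ in range(20):                       # iterate until good enough
    for xi, yi in zip(X, y):
        pred = 1 if xi.dot(w) + b > 0 else 0
        err = yi - pred                   # tell it the true label
        w += 0.1 * err * xi               # nudge the configuration
        b += 0.1 * err

preds = [1 if xi.dot(w) + b > 0 else 0 for xi in X]
print(preds)  # matches y on this tiny separable set
```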
Initial Steps
In this section we will start by importing the necessary libraries into our machine learning program. One of the main libraries we are importing is tensorflow, which we will use to perform many of our deep learning computations. In addition, we will import the pre-labeled email data contained in the data.tar.gz file and set two variables: numFeatures, the number of words per email, and numLabels, the number of classifications (ham or spam).
End of explanation
# define placeholders and variables for use in training
X = tf.placeholder(tf.float32, [None, numFeatures])
yGold = tf.placeholder(tf.float32, [None, numLabels])
weights = tf.Variable(tf.random_normal([numFeatures,numLabels],
mean=0,
stddev=(np.sqrt(6/numFeatures+
numLabels+1)),
name="weights"))
bias = tf.Variable(tf.random_normal([1,numLabels],
mean=0,
stddev=(np.sqrt(6/numFeatures+numLabels+1)),
name="bias"))
Explanation: Defining your placeholders
A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.
End of explanation
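A toy sketch of the 'define now, feed later' idea in plain Python. This is only an analogy for placeholders, not TensorFlow's actual machinery:

```python
class Placeholder:
    """Stands in for data that will be supplied at run time."""
    pass

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b

def run(node, feed_dict):
    # Resolve placeholders from feed_dict; evaluate operations recursively.
    if isinstance(node, Placeholder):
        return feed_dict[node]
    if isinstance(node, Add):
        return run(node.a, feed_dict) + run(node.b, feed_dict)
    return node  # a plain constant

x = Placeholder()           # no data yet; the graph is built without it
graph = Add(x, 10)
print(run(graph, {x: 32}))  # the data is fed only when the graph runs
```

TensorFlow's feed_dict plays exactly this role for X and yGold when the session runs.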
# initialize variables
init_OP = tf.initialize_all_variables()
# define feedforward algorithm
y = tf.nn.sigmoid(tf.add(tf.matmul(X, weights, name="apply_weights"), bias, name="add_bias"), name="activation")
# define cost function and optimization algorithm (gradient descent)
learningRate = tf.train.exponential_decay(learning_rate=0.0008,
global_step= 1,
decay_steps=trainX.shape[0],
decay_rate= 0.95,
staircase=True)
cost_OP = tf.nn.l2_loss(y-yGold, name="squared_error_cost")
training_OP = tf.train.GradientDescentOptimizer(learningRate).minimize(cost_OP)
# accuracy function
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(yGold,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Initializing Variables
After defining our placeholders, we proceed to initialize all the variables and use the tensorflow library to define a feedforward algorithm, a cost function, an optimization algorithm, and an accuracy estimate. At this point none of these operations is executed; they are only defined.
End of explanation
numEpochs = 10000
learningRate = tf.train.exponential_decay(learning_rate=0.0008,
global_step= 1,
decay_steps=trainX.shape[0],
decay_rate= 0.95,
staircase=True)
# Launch the graph
errors = []
with tf.Session() as sess:
sess.run(init_OP )
print('Initialized Session.')
for step in range(numEpochs):
# run optimizer at each step in training
sess.run(training_OP, feed_dict={X: trainX, yGold: trainY})
# fill errors array with updated error values
accuracy_value = accuracy.eval(feed_dict={X: trainX, yGold: trainY})
errors.append(1 - accuracy_value)
print('Optimization Finished!')
# output final error
print("Final error found during training: ", errors[-1])
# output accuracy
print("Final accuracy on test set: %s" %str(sess.run(accuracy,
feed_dict={X: testX,
yGold: testY})))
# plot errors array to see how it decreased
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
Explanation: Creating and Starting your Session
Now that we have defined all of our elements, we can create and start the session that will execute all of the previously declared operations. In this section we will first train our model with data extracted from data.tar.gz, then test the resulting model by feeding it the test data, and finally calculate the accuracy of our model.
End of explanation |
9,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Repeatable splitting </h1>
In this notebook, we will explore the impact of different ways of creating machine learning datasets.
<p>
Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then it makes experimentation difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not.
Step2: <h3> Create a simple machine learning model </h3>
The dataset that we will use is <a href="https
Step4: <h3> What is wrong with calculating RMSE on the training and test data as follows? </h3>
Step6: Hint
Step8: <h2> Using HASH of date to split the data </h2>
Let's split by date and train.
Step10: We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha. | Python Code:
from google.cloud import bigquery
Explanation: <h1> Repeatable splitting </h1>
In this notebook, we will explore the impact of different ways of creating machine learning datasets.
<p>
Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then it makes experimentation difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not.
End of explanation
compute_alpha =
#standardSQL
SELECT
SAFE_DIVIDE(SUM(arrival_delay * departure_delay), SUM(departure_delay * departure_delay)) AS alpha
FROM
(
SELECT RAND() AS splitfield,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN' AND arrival_airport = 'LAX'
)
WHERE
splitfield < 0.8
results = bigquery.Client().query(compute_alpha).to_dataframe()
alpha = results['alpha'][0]
print(alpha)
Explanation: <h3> Create a simple machine learning model </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/bigquery-samples:airline_ontime_data.flights">a BigQuery public dataset</a> of airline arrival data. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is 70 million, and then switch to the Preview tab to look at a few rows.
<p>
We want to predict the arrival delay of an airline based on the departure delay. The model that we will use is a zero-bias linear model:
$$ delay_{arrival} = \alpha * delay_{departure} $$
<p>
To train the model is to estimate a good value for $\alpha$.
<p>
One approach to estimate alpha is to use this formula:
$$ \alpha = \frac{\sum delay_{departure} delay_{arrival} }{ \sum delay_{departure}^2 } $$
Because we'd like to capture the idea that this relationship is different for flights from New York to Los Angeles vs. flights from Austin to Indianapolis (shorter flight, less busy airports), we'd compute a different $\alpha$ for each airport-pair. For simplicity, we'll do this model only for flights between Denver and Los Angeles.
<h2> Naive random split (not repeatable) </h2>
End of explanation
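The closed-form alpha above is just zero-intercept least squares; here is a quick sanity check with made-up delay data (not the real flights table):

```python
import numpy as np

rng = np.random.RandomState(0)
departure = rng.uniform(0, 60, size=1000)               # minutes
arrival = 0.9 * departure + rng.normal(0, 5, size=1000)

# alpha = sum(xy) / sum(x^2), exactly as in the SQL above.
alpha = np.sum(arrival * departure) / np.sum(departure * departure)

# Same answer as numpy's least-squares solver with no intercept column.
alpha_lstsq = np.linalg.lstsq(departure[:, None], arrival, rcond=None)[0][0]
print(alpha, alpha_lstsq)
```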
compute_rmse =
#standardSQL
SELECT
dataset,
SQRT(AVG((arrival_delay - ALPHA * departure_delay)*(arrival_delay - ALPHA * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM (
SELECT
IF (RAND() < 0.8, 'train', 'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' )
GROUP BY
dataset
bigquery.Client().query(compute_rmse.replace('ALPHA', str(alpha))).to_dataframe()
Explanation: <h3> What is wrong with calculating RMSE on the training and test data as follows? </h3>
End of explanation
train_and_eval_rand =
#standardSQL
WITH
alldata AS (
SELECT
IF (RAND() < 0.8,
'train',
'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' ),
training AS (
SELECT
SAFE_DIVIDE( SUM(arrival_delay * departure_delay) , SUM(departure_delay * departure_delay)) AS alpha
FROM
alldata
WHERE
dataset = 'train' )
SELECT
MAX(alpha) AS alpha,
dataset,
SQRT(AVG((arrival_delay - alpha * departure_delay)*(arrival_delay - alpha * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
alldata,
training
GROUP BY
dataset
bigquery.Client().query(train_and_eval_rand).to_dataframe()
Explanation: Hint:
* Are you really getting the same training data in the compute_rmse query as in the compute_alpha query?
* Do you get the same answers each time you rerun the compute_alpha and compute_rmse blocks?
<h3> How do we correctly train and evaluate? </h3>
<br/>
Here's the right way to compute the RMSE using the actual training and held-out (evaluation) data. Note how much harder this feels.
Although the calculations are now correct, the experiment is still not repeatable.
Try running it several times; do you get the same answer?
End of explanation
compute_alpha =
#standardSQL
SELECT
SAFE_DIVIDE(SUM(arrival_delay * departure_delay), SUM(departure_delay * departure_delay)) AS alpha
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN' AND arrival_airport = 'LAX'
AND ABS(MOD(FARM_FINGERPRINT(date), 10)) < 8
results = bigquery.Client().query(compute_alpha).to_dataframe()
alpha = results['alpha'][0]
print(alpha)
Explanation: <h2> Using HASH of date to split the data </h2>
Let's split by date and train.
End of explanation
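BigQuery's FARM_FINGERPRINT is not available in plain Python, but the same idea (hash a stable field and take it modulo 10) can be emulated with any deterministic hash. MD5 stands in here purely for illustration; its buckets will differ from FARM_FINGERPRINT's:

```python
import hashlib

def bucket(date_string, num_buckets=10):
    """Deterministic bucket in [0, num_buckets) for a date string."""
    digest = hashlib.md5(date_string.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_buckets

dates = ['2008-01-01', '2008-01-02', '2008-01-03', '2008-01-04']
split = {d: ('train' if bucket(d) < 8 else 'eval') for d in dates}
print(split)

# Rerunning produces the same assignment, so the split is repeatable,
# unlike RAND().
assert split == {d: ('train' if bucket(d) < 8 else 'eval') for d in dates}
```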
compute_rmse =
#standardSQL
SELECT
IF(ABS(MOD(FARM_FINGERPRINT(date), 10)) < 8, 'train', 'eval') AS dataset,
SQRT(AVG((arrival_delay - ALPHA * departure_delay)*(arrival_delay - ALPHA * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX'
GROUP BY
dataset
print(bigquery.Client().query(compute_rmse.replace('ALPHA', str(alpha))).to_dataframe().head())
Explanation: We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha.
End of explanation |
9,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo of Brunel on Cars Data
The Data
We read the data into a pandas data frame. In this case we are grabbing some data that represents cars.
We read it in and call the brunel use method to ensure the names are usable
Step1: Basics
We import the Brunel module and create a couple of simple scatterplots.
We use the brunel magic to do so
The basic format of each call to Brunel is simple; whether it is a single line or a set of lines (a cell magic),
they are concatenated together, and the result is interpreted as one command.
This command must start with an ACTION, but may have a set of options at the end specified as ACTION :: OPTIONS.
Step2: Using the Dataframe
Since Brunel uses the data frame, we can modify or add to that object to show data in different ways. In the following example we apply a function that takes a name and sees if it matches one of a set of sub-strings. We map this function to the car names to create a new column consisting of the names that match either "Ford" or "Buick", and use that in our Brunel action.
Because the Brunel action is long -- we are adding some CSS styling, we split it into two parts for convenience. | Python Code:
import pandas as pd
import ibmcognitive
cars = pd.read_csv("data/Cars.csv")
cars.head(6)
Explanation: Demo of Brunel on Cars Data
The Data
We read the data into a pandas data frame. In this case we are grabbing some data that represents cars.
We read it in and call the brunel use method to ensure the names are usable
End of explanation
brunel x(mpg) y(horsepower) color(origin) :: width=800, height=200, output=d3
brunel x(horsepower) y(weight) color(origin) tooltip(name) filter(year) :: width=800, height=200, output=d3
brunel bar x(mpg) y(#count) filter(mpg):: data=cars, width=900, height=400, output=d3
brunel chord y(origin, year) size(#count) color(origin) :: width=500, height=400, output=d3
brunel treemap y(origin, year, cylinders) color(mpg) mean(mpg) size(#count) label(cylinders) :: width=900, height=600
Explanation: Basics
We import the Brunel module and create a couple of simple scatterplots.
We use the brunel magic to do so
The basic format of each call to Brunel is simple; whether it is a single line or a set of lines (a cell magic),
they are concatenated together, and the result is interpreted as one command.
This command must start with an ACTION, but may have a set of options at the end specified as ACTION :: OPTIONS.
ACTION is the Brunel action string; OPTIONS are key=value pairs:
* data defines the pandas dataframe to use. If not specified, the pandas data that best fits the action command will be used
* width and height may be supplied to set the resulting size
For details on the Brunel Action languages, see the Online Docs on Bluemix
End of explanation
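As a rough illustration of the ACTION :: OPTIONS convention described above — this is plain Python for intuition only, not the actual brunel magic's parser, and `split_brunel_command` is a made-up name:

```python
def split_brunel_command(command):
    # Split on the '::' separator: everything before it is the Brunel action,
    # everything after is a comma-separated list of key=value options.
    action, sep, options_part = command.partition('::')
    options = {}
    if sep:
        for pair in options_part.split(','):
            key, _, value = pair.partition('=')
            options[key.strip()] = value.strip()
    return action.strip(), options

action, options = split_brunel_command(
    'x(mpg) y(horsepower) color(origin) :: width=800, height=200')
assert action == 'x(mpg) y(horsepower) color(origin)'
assert options == {'width': '800', 'height': '200'}
```

A command with no `::` is treated as an action with no options, matching the single-line calls above.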
def identify(x, search):
for y in search:
if y.lower() in x.lower(): return y
return None
cars['Type'] = cars.name.map(lambda x: identify(x, ["Ford", "Buick"]))
%%brunel x(engine) y(mpg) color(Type) style('size:50%; fill:#eee') +
text x(engine) y(mpg) color(Type) label(Type) style('text {font-size:14; font-weight:bold; fill:darker}')
:: width=800, height=800, output=d3
brunel x(mpg) y(horsepower) color(origin) tooltip(mpg) filter(mpg) :: width=800, height=200
from random import randint;
randint(2,9)
brunel x(mpg) y(horsepower) color(origin) tooltip(mpg) filter(acceleration) :: width=800, height=200
Explanation: Using the Dataframe
Since Brunel uses the data frame, we can modify or add to that object to show data in different ways. In the following example we apply a function that takes a name and sees if it matches one of a set of sub-strings. We map this function to the car names to create a new column consisting of the names that match either "Ford" or "Buick", and use that in our Brunel action.
Because the Brunel action is long -- we are adding some CSS styling, we split it into two parts for convenience.
End of explanation |
9,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[BONUS] Problem 3
Step7: Model
We need to take advantage of a CNN structure which (implicitly) understands image contents and styles. Rather than training a completely new model from scratch, we will use a pre-trained model to achieve our purpose - called "transfer learning".
We will use the VGG19 model. Since the model itself is very large (>500Mb) you will need to download the VGG-19 model and put it under the model/ folder. The comments below describe the dimensions of the VGG19 model. We will replace the max pooling layers with average pooling layers as the paper suggests, and discard all fully connected layers.
Step8: Input Images
Here we define some constants for the inputs. For this notebook, we will be using RGB images with 640 x 480 resolution, but you can easily modify the code to accommodate different sizes.
Step9: Now we load the input images. The vgg model expects image data with MEAN_VALUES subtracted to function correctly. "load_image" already handles this. The subtracted images will look funny.
Step11: Random Image Generator
The first step of style transfer is to generate a starting image. The model will then gradually adjust this starting image towards the target content/style. We will need a random image generator.
The generated image can be arbitrary and doesn't necessarily have anything to do with the content image. But, generating something similar to the content image will reduce our computing time.
Step12: Now let's check by visualizing the images you generated. Keep in mind noise_ratio = 0.0 produces the original subtracted image, while noise_ratio = 1.0 produces complete random noise.
Step15: Notice that the visualized images are not necessarily clearer with lower noise_ratio.
Inline Question(No points, just something interesting if you're curious)
Step19: Style Loss
Now we can tackle the style loss of equation (5) from the paper. For a given layer $\ell$, the style loss is defined as follows
Step20: Create a TensorFlow session.
Step21: Build the model now.
Step22: Total loss
$$L = \alpha L_c + \beta L_s$$
Step23: Finally! Run!
Now we run the model, which outputs the painted image every 50 iterations. You can find those intermediate results under the output/ folder. Notice on CPU it usually takes almost an hour to run 1000 iterations. Take your time!
Step24: This is our final art for 500 iterations.
Further Assessments
Loss Function
Now that we have some good results of our style transfer algorithm, we might want to take a deeper look at our settings.
First of all, let's take a look at the total loss function
Step25: Inline Question
Step26: Inline Question | Python Code:
# Import what we need
import os
import sys
import numpy as np
import scipy.io
import scipy.misc
import tensorflow as tf # Import TensorFlow after Scipy or Scipy will break
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
%matplotlib inline
Explanation: [BONUS] Problem 3: Style Transfer
NOTE: THIS PROBLEM IS A BONUS PROBLEM WHICH IS NOT REQUIRED.
In this notebook we will implement the style transfer technique from "A Neural Algorithm of Artistic Style".
Read the paper first before starting the assignment!
Also make sure you spare enough time for running the code. Sections after 'Finally! Run!' shouldn't take much time coding, but do need ~3 hours to process if you don't have tensorflow-gpu enabled. You can finish all sections once and leave it there for running. Take your time.
The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.
Please follow the assignment guide to set up Python environments such as TensorFlow, Scipy, and Numpy.
<b>Learning Objective:</b> In this assignment, we will show you how to implement a basic style-transfer model in tensorflow.
<b>Provided Codes:</b> We provide the code framework for loading and processing a pre-trained CNN model.
<b>TODOs:</b> Design the loss functions and processes of a style transfer network.
Setup
End of explanation
# Pick the VGG 19-layer model from the paper "Very Deep Convolutional
# Networks for Large-Scale Image Recognition".
VGG_MODEL = 'model/imagenet-vgg-verydeep-19.mat'
# The mean to subtract from the input to the VGG model. This is the mean that
# when the VGG was used to train. Minor changes to this will make a lot of
# difference to the performance of model.
MEAN_VALUES = np.array([123.68, 116.779, 103.939]).reshape((1,1,3))
def load_vgg_model(path):
Returns a model for the purpose of 'painting' the picture.
Takes only the convolution layer weights and wrap using the TensorFlow
Conv2d, Relu and AveragePooling layer. VGG actually uses maxpool but
the paper indicates that using AveragePooling yields better results.
The last few fully connected layers are not used.
Here is the detailed configuration of the VGG model:
0 is conv1_1 (3, 3, 3, 64)
1 is relu
2 is conv1_2 (3, 3, 64, 64)
3 is relu
4 is maxpool
5 is conv2_1 (3, 3, 64, 128)
6 is relu
7 is conv2_2 (3, 3, 128, 128)
8 is relu
9 is maxpool
10 is conv3_1 (3, 3, 128, 256)
11 is relu
12 is conv3_2 (3, 3, 256, 256)
13 is relu
14 is conv3_3 (3, 3, 256, 256)
15 is relu
16 is conv3_4 (3, 3, 256, 256)
17 is relu
18 is maxpool
19 is conv4_1 (3, 3, 256, 512)
20 is relu
21 is conv4_2 (3, 3, 512, 512)
22 is relu
23 is conv4_3 (3, 3, 512, 512)
24 is relu
25 is conv4_4 (3, 3, 512, 512)
26 is relu
27 is maxpool
28 is conv5_1 (3, 3, 512, 512)
29 is relu
30 is conv5_2 (3, 3, 512, 512)
31 is relu
32 is conv5_3 (3, 3, 512, 512)
33 is relu
34 is conv5_4 (3, 3, 512, 512)
35 is relu
36 is maxpool
37 is fullyconnected (7, 7, 512, 4096)
38 is relu
39 is fullyconnected (1, 1, 4096, 4096)
40 is relu
41 is fullyconnected (1, 1, 4096, 1000)
42 is softmax
vgg = scipy.io.loadmat(path)
vgg_layers = vgg['layers']
def _weights(layer, expected_layer_name):
Return the weights and bias from the VGG model for a given layer.
W = vgg_layers[0][layer][0][0][2][0][0]
b = vgg_layers[0][layer][0][0][2][0][1]
layer_name = vgg_layers[0][layer][0][0][0]
assert layer_name == expected_layer_name
return W, b
def _relu(conv2d_layer):
Return the RELU function wrapped over a TensorFlow layer. Expects a
Conv2d layer input.
return tf.nn.relu(conv2d_layer)
def _conv2d(prev_layer, layer, layer_name):
Return the Conv2D layer using the weights, biases from the VGG
model at 'layer'.
W, b = _weights(layer, layer_name)
W = tf.constant(W)
b = tf.constant(np.reshape(b, (b.size)))
return tf.nn.conv2d(
prev_layer, filter=W, strides=[1, 1, 1, 1], padding='SAME') + b
def _conv2d_relu(prev_layer, layer, layer_name):
Return the Conv2D + RELU layer using the weights, biases from the VGG
model at 'layer'.
return _relu(_conv2d(prev_layer, layer, layer_name))
def _avgpool(prev_layer):
Return the AveragePooling layer.
return tf.nn.avg_pool(prev_layer, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Constructs the graph model.
graph = {}
graph['input'] = tf.Variable(np.zeros((1, IMAGE_HEIGHT, IMAGE_WIDTH, COLOR_CHANNELS)), dtype = 'float32')
graph['conv1_1'] = _conv2d_relu(graph['input'], 0, 'conv1_1')
graph['conv1_2'] = _conv2d_relu(graph['conv1_1'], 2, 'conv1_2')
graph['avgpool1'] = _avgpool(graph['conv1_2'])
graph['conv2_1'] = _conv2d_relu(graph['avgpool1'], 5, 'conv2_1')
graph['conv2_2'] = _conv2d_relu(graph['conv2_1'], 7, 'conv2_2')
graph['avgpool2'] = _avgpool(graph['conv2_2'])
graph['conv3_1'] = _conv2d_relu(graph['avgpool2'], 10, 'conv3_1')
graph['conv3_2'] = _conv2d_relu(graph['conv3_1'], 12, 'conv3_2')
graph['conv3_3'] = _conv2d_relu(graph['conv3_2'], 14, 'conv3_3')
graph['conv3_4'] = _conv2d_relu(graph['conv3_3'], 16, 'conv3_4')
graph['avgpool3'] = _avgpool(graph['conv3_4'])
graph['conv4_1'] = _conv2d_relu(graph['avgpool3'], 19, 'conv4_1')
graph['conv4_2'] = _conv2d_relu(graph['conv4_1'], 21, 'conv4_2')
graph['conv4_3'] = _conv2d_relu(graph['conv4_2'], 23, 'conv4_3')
graph['conv4_4'] = _conv2d_relu(graph['conv4_3'], 25, 'conv4_4')
graph['avgpool4'] = _avgpool(graph['conv4_4'])
graph['conv5_1'] = _conv2d_relu(graph['avgpool4'], 28, 'conv5_1')
graph['conv5_2'] = _conv2d_relu(graph['conv5_1'], 30, 'conv5_2')
graph['conv5_3'] = _conv2d_relu(graph['conv5_2'], 32, 'conv5_3')
graph['conv5_4'] = _conv2d_relu(graph['conv5_3'], 34, 'conv5_4')
graph['avgpool5'] = _avgpool(graph['conv5_4'])
return graph
Explanation: Model
We need to take advantage of a CNN structure which (implicitly) understands image contents and styles. Rather than training a completely new model from scratch, we will use a pre-trained model to achieve our purpose - called "transfer learning".
We will use the VGG19 model. Since the model itself is very large (>500Mb) you will need to download the VGG-19 model and put it under the model/ folder. The comments below describe the dimensions of the VGG19 model. We will replace the max pooling layers with average pooling layers as the paper suggests, and discard all fully connected layers.
End of explanation
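The max→average pooling swap mentioned above is easy to picture on a toy array; here is a NumPy sketch (not the TensorFlow graph code used in the model) of non-overlapping 2×2 average pooling:

```python
import numpy as np

def avg_pool_2x2(x):
    # Non-overlapping 2x2 average pooling on an (H, W) array (H, W even):
    # group pixels into 2x2 blocks, then average within each block.
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

x = np.array([[1.0, 3.0],
              [5.0, 7.0]])
assert avg_pool_2x2(x)[0, 0] == 4.0  # (1 + 3 + 5 + 7) / 4
```

Average pooling keeps a contribution from every pixel in the window, which is one intuition for why the paper reports slightly better results with it than with max pooling here.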
# Output folder for the images.
OUTPUT_DIR = 'output/'
# Style image to use.
STYLE_IMAGE = 'images/muse.jpg'
# Content image to use.
CONTENT_IMAGE = 'images/trojan_shrine.jpg'
# Image dimensions constants.
IMAGE_WIDTH = 640
IMAGE_HEIGHT = 480
COLOR_CHANNELS = 3
def load_image(path):
image_raw = scipy.misc.imread(path)
# Resize the image for convnet input and add an extra dimension
image_raw = scipy.misc.imresize(image_raw, (IMAGE_HEIGHT, IMAGE_WIDTH))
# Input to the VGG model expects the mean to be subtracted.
#############################################################################
# TODO: Substract the image with mean value #
#############################################################################
image = None
#############################################################################
# END OF YOUR CODE #
#############################################################################
return [image_raw, image]
def recover_image(image):
#############################################################################
# TODO: Recover the image with mean value #
# HINT: Check value boundaries #
#############################################################################
image_raw = None
#############################################################################
# END OF YOUR CODE #
#############################################################################
return image_raw
def save_image(path, image):
# Output should add back the mean.
image = recover_image(image)
scipy.misc.imsave(path, image)
Explanation: Input Images
Here we define some constants for the inputs. For this notebook, we will be using RGB images with 640 x 480 resolution, but you can easily modify the code to accommodate different sizes.
End of explanation
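To make the two TODOs above concrete, here is an isolated NumPy sketch of the subtract/recover round trip (the helper names are mine, not the notebook's; the key detail is rounding and clipping back to the valid 0–255 range on recovery):

```python
import numpy as np

MEAN_VALUES = np.array([123.68, 116.779, 103.939]).reshape((1, 1, 3))

def subtract_mean(image):
    # Shift the image so channel values are roughly centered at zero.
    return image.astype(np.float64) - MEAN_VALUES

def add_mean_back(image):
    # Undo the shift, round, and clip to the displayable 0..255 range.
    recovered = np.rint(image + MEAN_VALUES)
    return np.clip(recovered, 0, 255).astype(np.uint8)

demo = np.full((2, 2, 3), 200, dtype=np.uint8)
assert np.array_equal(add_mean_back(subtract_mean(demo)), demo)
```

Without the clip, floating-point drift and the optimizer's updates can push recovered pixel values outside 0–255, which is why `recover_image` hints at checking value boundaries.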
[content_image_raw, content_image] = load_image(CONTENT_IMAGE)
[style_image_raw, style_image] = load_image(STYLE_IMAGE)
fig = plt.figure(figsize=(10,10))
ax1 = plt.subplot(221)
ax2 = plt.subplot(222)
ax3 = plt.subplot(223)
ax4 = plt.subplot(224)
ax1.imshow(content_image_raw)
ax1.set_title('Content Image')
ax2.imshow(content_image)
ax2.set_title('Content Image Subtracted')
ax3.imshow(style_image_raw)
ax3.set_title('Style Image')
ax4.imshow(style_image)
ax4.set_title('Style Image Subtracted')
# Show the resulting image
plt.show()
Explanation: Now we load the input images. The vgg model expects image data with MEAN_VALUES subtracted to function correctly. "load_image" already handles this. The subtracted images will look funny.
End of explanation
def generate_noise_image(content_image, noise_ratio):
Returns a noise image intermixed with the content image at a certain ratio.
#############################################################################
# TODO: Create a noise image which will be mixed with the content image #
#############################################################################
noise_image = None
#############################################################################
# END OF YOUR CODE #
#############################################################################
#Take a weighted average of the values
gen_image = noise_image * noise_ratio + content_image * (1.0 - noise_ratio)
return gen_image
Explanation: Random Image Generator
The first step of style transfer is to generate a starting image. The model will then gradually adjust this starting image towards the target content/style. We will need a random image generator.
The generated image can be arbitrary and doesn't necessarily have anything to do with the content image. But, generating something similar to the content image will reduce our computing time.
End of explanation
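One plausible way to think about the TODO above (an illustration, not necessarily the intended answer — the (-20, 20) range is an arbitrary choice) is uniform noise blended with the content image:

```python
import numpy as np

def noise_image_sketch(content_image, noise_ratio):
    # Uniform random noise with the same shape as the content image.
    noise = np.random.uniform(-20, 20, content_image.shape).astype(np.float32)
    # Weighted average: noise_ratio=0.0 gives back the content image,
    # noise_ratio=1.0 gives pure noise.
    return noise * noise_ratio + content_image * (1.0 - noise_ratio)

content = np.zeros((4, 4, 3), dtype=np.float32)
assert np.allclose(noise_image_sketch(content, 0.0), content)
```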
fig = plt.figure(figsize=(10,10))
ax1 = plt.subplot(221)
ax2 = plt.subplot(222)
ax3 = plt.subplot(223)
ax4 = plt.subplot(224)
gen_image = generate_noise_image(content_image, 0.0)
ax1.imshow(gen_image)
ax1.set_title('Noise ratio: 0.0')
gen_image = generate_noise_image(content_image, 0.25)
ax2.imshow(gen_image)
ax2.set_title('Noise ratio: 0.25')
gen_image = generate_noise_image(content_image, 0.50)
ax3.imshow(gen_image)
ax3.set_title('Noise ratio: 0.50')
gen_image = generate_noise_image(content_image, 0.75)
ax4.imshow(gen_image)
ax4.set_title('Noise ratio: 0.75')
Explanation: Now let's check by visualize images you generated. Keep in mind noise_ratio = 0.0 produces the original subtracted image, while noise_ratio = 1.0 produces a complete random noise.
End of explanation
CONTENT_LAYER = 'conv4_2'
def content_loss_func(sess, model):
Content loss function as defined in the paper.
def _content_loss(current_feat, content_feat):
Inputs:
- current_feat: features of the current image, Tensor with shape [1, height, width, channels]
- content_feat: features of the content image, Tensor with shape [1, height, width, channels]
Returns:
- scalar content loss
#############################################################################
# TODO: Compute content loss function #
#############################################################################
loss = None
#############################################################################
# END OF YOUR CODE #
#############################################################################
return loss
return _content_loss(sess.run(model[CONTENT_LAYER]), model[CONTENT_LAYER])
Explanation: Notice that the visualized images are not necessarily clearer with lower noise_ratio.
Inline Question(No points, just something interesting if you're curious): Why does the image sometimes look sharper when some intermediate level of noise is added?
Ans:
Loss Functions
Once we generate a new image, we would like to evaluate it by how much it maintains content while approaching the target style.
This can be defined by a loss function. The loss function is a weighted sum of two terms: content loss + style loss.
You'll fill in the functions that compute these weighted terms below.
Content Loss
Let's first write the content loss function of equation (1) from the paper. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\ell$), that has feature maps $A^\ell \in \mathbb{R}^{1 \times H_\ell \times W_\ell \times N_\ell}$. $N_\ell$ is the number of filters/channels in layer $\ell$, $H_\ell$ and $W_\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\ell \in \mathbb{R}^{M_\ell \times N_\ell}$ be the feature map for the current image and $P^\ell \in \mathbb{R}^{M_\ell \times N_\ell}$ be the feature map for the content source image where $M_\ell=H_\ell\times W_\ell$ is the number of elements in each feature map. Each row of $F^\ell$ or $P^\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image.
Then the content loss is given by:
$L_c = \frac{1}{2} \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$
We are only concerned with the "conv4_2" layer of the model.
End of explanation
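The content loss formula is easy to sanity-check outside TensorFlow before filling in the TODO; a NumPy version (illustrative only — the notebook's version must use TF ops so it can be differentiated) is:

```python
import numpy as np

def content_loss_sketch(F, P):
    # L_c = 1/2 * sum_ij (F_ij - P_ij)^2, where F holds the current image's
    # features and P the content image's features at the chosen layer.
    return 0.5 * np.sum((F - P) ** 2)

F = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.array([[1.0, 2.0], [3.0, 2.0]])
assert content_loss_sketch(F, P) == 2.0  # only one entry differs, by 2
```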
# Layers to use. We will use these layers as advised in the paper.
# To have softer features, increase the weight of the higher layers
# (conv5_1) and decrease the weight of the lower layers (conv1_1).
# To have harder features, decrease the weight of the higher layers
# (conv5_1) and increase the weight of the lower layers (conv1_1).
STYLE_LAYERS = [
('conv1_1', 0.5),
('conv2_1', 0.5),
('conv3_1', 0.5),
('conv4_1', 0.5),
('conv5_1', 0.5),
]
def style_loss_func(sess, model):
Style loss function as defined in the paper.
def _gram_matrix(feat):
Compute the Gram matrix from features.
Inputs:
- feat: Tensor of shape (1, H, W, C) giving features for a single image.
Returns:
- gram: Tensor of shape (C, C) giving the (optionally normalized) Gram matrices for the input image.
#############################################################################
# TODO: Compute gram matrix #
#############################################################################
gram = None
#############################################################################
# END OF YOUR CODE #
#############################################################################
return gram
def _style_loss(current_feat, style_feat):
Inputs:
- current_feat: features of the current image, Tensor with shape [1, height, width, channels]
- style_feat: features of the style image, Tensor with shape [1, height, width, channels]
Returns:
- scalar style loss
assert (current_feat.shape == style_feat.shape)
#############################################################################
# TODO: Compute style loss function #
# HINT: Call the _gram_matrix function you just finished #
#############################################################################
loss = None
#############################################################################
# END OF YOUR CODE #
#############################################################################
return loss
E = [_style_loss(sess.run(model[layer_name]), model[layer_name]) for layer_name, _ in STYLE_LAYERS]
W = [w for _, w in STYLE_LAYERS]
loss = sum([W[l] * E[l] for l in range(len(STYLE_LAYERS))])
return loss
Explanation: Style Loss
Now we can tackle the style loss of equation (5) from the paper. For a given layer $\ell$, the style loss is defined as follows:
First, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.
Given a feature map $F^\ell$ of shape $(1, M_\ell, N_\ell)$, the Gram matrix has shape $(1, N_\ell, N_\ell)$ and its elements are given by:
$$G_{ij}^\ell = \sum_k F^{\ell}_{ik} F^{\ell}_{jk}$$
Assuming $G^\ell$ is the Gram matrix from the feature map of the current image, $A^\ell$ is the Gram Matrix from the feature map of the source style image, then the style loss for the layer $\ell$ is simply the Euclidean distance between the two Gram matrices:
$$E_\ell = \frac{1}{4 N^2_\ell M^2_\ell} \sum_{i, j} \left(G^\ell_{ij} - A^\ell_{ij}\right)^2$$
In practice we usually compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $\ell$; then the total style loss is the weighted sum of style losses at each layer by $w_\ell$:
$$L_s = \sum_{\ell \in \mathcal{L}} w_\ell E_\ell$$
In our case it is a summation from conv1_1 (lower layer) to conv5_1 (higher layer). Intuitively, the style loss across multiple layers captures everything from lower-level features (hard strokes, points, etc.) up to higher-level features (styles, patterns, even objects).
End of explanation
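The Gram matrix and per-layer style loss can likewise be checked in NumPy first (illustrative only — the TODOs above need TF ops, e.g. reshape and matmul on tensors):

```python
import numpy as np

def gram_matrix_sketch(feat):
    # feat has shape (1, H, W, C): flatten the spatial dimensions so each
    # row is one position's C-dimensional activation, then G = F^T F.
    _, H, W, C = feat.shape
    F = feat.reshape(H * W, C)
    return F.T @ F

def layer_style_loss_sketch(current_feat, style_feat):
    _, H, W, C = current_feat.shape
    M, N = H * W, C
    G = gram_matrix_sketch(current_feat)
    A = gram_matrix_sketch(style_feat)
    # E_l = 1 / (4 N^2 M^2) * sum_ij (G_ij - A_ij)^2
    return np.sum((G - A) ** 2) / (4.0 * N ** 2 * M ** 2)

feat = np.ones((1, 2, 2, 3))
assert gram_matrix_sketch(feat).shape == (3, 3)
assert layer_style_loss_sketch(feat, feat) == 0.0
```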
sess = tf.InteractiveSession()
# Load VGG model
model = load_vgg_model(VGG_MODEL)
Explanation: Create a TensorFlow session.
End of explanation
# Construct content_loss using content_image.
content_image_list = np.reshape(content_image, ((1,) + content_image.shape))
sess.run(model['input'].assign(content_image_list))
content_loss = content_loss_func(sess, model)
# Construct style_loss using style_image.
style_image_list = np.reshape(style_image, ((1,) + style_image.shape))
sess.run(model['input'].assign(style_image_list))
style_loss = style_loss_func(sess, model)
Explanation: Build the model now.
End of explanation
# Constant to put more emphasis on content loss.
ALPHA = 0.0025
# Constant to put more emphasis on style loss.
BETA = 1
# Instantiate equation 7 of the paper.
total_loss = ALPHA * content_loss + BETA * style_loss
# We minimize the total_loss, which is the equation 7.
optimizer = tf.train.AdamOptimizer(2.0)
train_step = optimizer.minimize(total_loss)
Explanation: Total loss
$$L = \alpha L_c + \beta L_s$$
End of explanation
# Number of iterations to run.
ITERATIONS = 500
sess.run(tf.global_variables_initializer())
input_image = np.reshape(gen_image, ((1,) + gen_image.shape))
sess.run(model['input'].assign(input_image))
for it in range(ITERATIONS):
sess.run(train_step)
if it%50 == 0:
        # Print progress every 50 iterations.
mixed_image = sess.run(model['input'])
print('Iteration %d' % (it))
print('cost: ', sess.run(total_loss))
if not os.path.exists(OUTPUT_DIR):
os.mkdir(OUTPUT_DIR)
filename = 'output/%d.png' % (it)
save_image(filename, mixed_image[0])
final_image = recover_image(mixed_image[0]);
imshow(final_image)
plt.show()
Explanation: Finally! Run!
Now we run the model, which outputs the painted image every 50 iterations. You can find those intermediate results under the output/ folder. Notice on CPU it usually takes almost an hour to run 1000 iterations. Take your time!
End of explanation
#############################################################################
# TODO: Change AlPHA to some value you desire #
#############################################################################
ALPHA1 = 0.025
BETA1 = 1
#############################################################################
# END OF YOUR CODE #
#############################################################################
# Instantiate equation 7 of the paper.
total_loss_1 = ALPHA1 * content_loss + BETA1 * style_loss
train_step_1 = optimizer.minimize(total_loss_1)
sess.run(tf.global_variables_initializer())
input_image = np.reshape(gen_image, ((1,) + gen_image.shape))
sess.run(model['input'].assign(input_image))
for it in range(ITERATIONS):
sess.run(train_step_1)
if it%50 == 0:
        # Print progress every 50 iterations.
mixed_image = sess.run(model['input'])
print('Iteration %d' % (it))
        print('cost: ', sess.run(total_loss_1))
if not os.path.exists(OUTPUT_DIR):
os.mkdir(OUTPUT_DIR)
filename = 'output/%d.png' % (it)
save_image(filename, mixed_image[0])
final_image = recover_image(mixed_image[0]);
imshow(final_image)
plt.show()
Explanation: This is our final art for 500 iterations.
Further Assessments
Loss Function
Now that we have some good results of our style transfer algorithm, we might want to take a deeper look at our settings.
First of all, let's take a look at the total loss function: what does the ratio of content / style loss do to the result?
End of explanation
#############################################################################
# TODO: Change CONTENT_LAYER to one of the conv layers before conv4 #
#############################################################################
CONTENT_LAYER = 'conv2_2' # for example
#############################################################################
# END OF YOUR CODE #
#############################################################################
content_loss_2 = content_loss_func(sess, model)
total_loss_2 = ALPHA * content_loss_2 + BETA * style_loss
train_step_2 = optimizer.minimize(total_loss_2)
sess.run(tf.global_variables_initializer())
input_image = np.reshape(gen_image, ((1,) + gen_image.shape))
sess.run(model['input'].assign(input_image))
for it in range(ITERATIONS):
sess.run(train_step_2)
if it%50 == 0:
        # Print progress every 50 iterations.
mixed_image = sess.run(model['input'])
print('Iteration %d' % (it))
        print('cost: ', sess.run(total_loss_2))
if not os.path.exists(OUTPUT_DIR):
os.mkdir(OUTPUT_DIR)
filename = 'output/%d.png' % (it)
save_image(filename, mixed_image[0])
final_image = recover_image(mixed_image[0]);
imshow(final_image)
plt.show()
Explanation: Inline Question: Write down your insights on the roles of alpha and beta parameters.
Ans:
Layer's Hidden Information
You might be wondering why we use the 4th conv layer for content and all 5 conv layers for style. Now change the parameters a little bit and run the code. See what's happening.
Let's first see what the content conv layer does.
End of explanation
#############################################################################
# TODO: Change STYLE_LAYERS #
#############################################################################
ITERATIONS_3 = 500
STYLE_LAYERS = [ # for example
('conv1_1', 0.0),
('conv2_1', 0.0),
('conv3_1', 0.5),
('conv4_1', 1.0),
('conv5_1', 1.5),
]
#############################################################################
# END OF YOUR CODE #
#############################################################################
style_loss_3 = style_loss_func(sess, model)
total_loss_3 = ALPHA * content_loss + BETA * style_loss_3
train_step_3 = optimizer.minimize(total_loss_3)
sess.run(tf.global_variables_initializer())
input_image = np.reshape(gen_image, ((1,) + gen_image.shape))
sess.run(model['input'].assign(input_image))
for it in range(ITERATIONS_3):
sess.run(train_step_3)
if it%(ITERATIONS / 10) == 0:
        # Print progress every 50 iterations.
mixed_image = sess.run(model['input'])
print('Iteration %d' % (it))
        print('cost: ', sess.run(total_loss_3))
if not os.path.exists(OUTPUT_DIR):
os.mkdir(OUTPUT_DIR)
filename = 'output/%d.png' % (it)
save_image(filename, mixed_image[0])
final_image = recover_image(mixed_image[0]);
imshow(final_image)
plt.show()
Explanation: Inline Question: Write down your insights on the relation between depth of the layer and the content information of the image it represents.
Ans:
Next, we want to change the style's representation. Reassign the weights of each layer to values you desire.
You can re-run this single block multiple times to try out different values. Feel free to change ITERATIONS_3 if you find it's too slow.
End of explanation |
9,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="Images/Splice_logo.jpeg" width="250" height="200" align="left" >
Using the Feature Store for feature discovery
Step1: In addition to the Feature Set built in the last notebook, this cluster comes pre-loaded with an RFM Feature Set.
What is RFM Data?
Recency
Frequency
Monetary
Let's see the feature sets that are available to us
Step2: A search bar allows you to search for features across feature sets | Python Code:
#Begin spark session
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
#Create pysplice context. Allows you to create a Spark dataframe using our Native Spark DataSource
from splicemachine.spark import PySpliceContext
splice = PySpliceContext(spark)
#Initialize our Feature Store API
from splicemachine.mlflow_support import *
from splicemachine.features import FeatureStore
from splicemachine.features.constants import FeatureType
fs = FeatureStore(splice)
mlflow.register_feature_store(fs)
Explanation: <img src="Images/Splice_logo.jpeg" width="250" height="200" align="left" >
Using the Feature Store for feature discovery
End of explanation
fs.describe_feature_sets()
Explanation: In addition to the Feature Set built in the last notebook, this cluster comes pre-loaded with an RFM Feature Set.
What is RFM Data?
Recency
Frequency
Monetary
Let's see the feature sets that are available to us:
End of explanation
from util.feature_search import display_feature_search
display_feature_search()
spark.stop()
Explanation: A search bar allows you to search for features across feature sets
End of explanation |
9,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Paradigm entropy
This notebook shows some conditional entropy calculations from Ackerman & Malouf (in press). The Pite Saami data is taken from
Step1: Read in the paradigms from tab-delimited file as a pandas DataFrame
Step2: 1.pl and 2.pl are the cells with the largest number of distinct realizations and the augmentatives are the cells with the least
Step3: Total number of distinct realizations
Step4: If $D$ is the set of declensions for a particular paradigm, the probability (assuming all declensions are equally likely) of an arbitrary lexeme belonging to a particular paradigm $d$ is
$$P(d)=\frac{1}{|D|}$$
Since there are eight distinct classes, the probability of any lexeme belonging to any one class would be $\frac{1}{8}$. We could represent a lexeme's declension as a choice among eight equally likely alternatives, which thus has an entropy of $-\log_2 \frac{1}{8}=3$ bits. This is the declension entropy $H(D)$, the average information required to record the inflection class membership of a lexeme.
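As a quick sanity check of that number — plain Python, independent of the notebook's data:

```python
import math

def entropy(probabilities):
    # H = -sum_d P(d) * log2 P(d); zero-probability terms contribute nothing.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

declension_probs = [1 / 8] * 8  # eight equally likely declensions
assert entropy(declension_probs) == 3.0  # 3 bits
```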
Step5: Let $D_{c=r}$ be the set of declensions for which the paradigm cell $c$ has the formal realization $r$. Then the probability $P_{c}(r)$ of a paradigm cell $c$ of a particular lexeme having the realization $r$ is the probability of that lexeme belonging to one of the declensions in $D_{c=r}$, or
Step6: The average cell entropy is a measure of how difficult it is for a speaker to guess the realization of any one wordform of any particular lexeme in the absence of any information about that lexeme's declension
Step7: Above we defined $P_{c}(r)$, the probability that paradigm cell $c$ of a lexeme has the realization $r$. We can easily generalize that to the joint probability of two cells $c_1$ and $c_2$ having the realizations $r_1$ and $r_2$ respectively
Step8: The column averages measure predictedness, how hard it is to guess the realization of a cell given some other cell
Step9: And the row averages measure predictiveness, how hard it is to guess the realization of some other cell given this cell
Step10: Add row and column averages to the table
Step11: And format the result in $\LaTeX$
Step12: Next we try a simple bootstrap simulation to test the importance of implicational relations in the paradigm.
Statistical hypothesis testing proceeds by identifying a statistic whose sampling distribution is known under the null hypothesis $H_0$, and then estimating the probability of finding a result which deviates from what would be expected under $H_0$ at least as much as the observed data does. In this case, $H_0$ is that implicational relations are not a factor in reducing average conditional entropy in Saami, and the relevant statistic is the average conditional entropy. Unfortunately, we have no theoretical basis for deriving the sampling distribution of average conditional entropy under $H_0$, which precludes the use of conventional statistical methods. However, we can use a simple computational procedure for estimating the sampling distribution of the average conditional entropy. Take Saami$'$, an alternate version of Saami with formal realizations assigned randomly to paradigm cells. More specifically, we generate Saami$'$ by constructing 4 random conjugations, where each conjugation is produced by randomly selecting for each of the paradigm cells one of the possible realizations of that cell. The result is a language with more or less the same number of declensions, paradigm cells, and allomorphs as genuine Saami, but with no implicational structure.
Step13: Averaged across 999 simulation runs (plus the original), the average average conditional entropy is notably higher than the true average conditional entropy of 0.116 bits
Step14: Across the distribution of simulated Saami$'$s, the real Saami is an outlier | Python Code:
%precision 3
import numpy as np
import pandas as pd
pd.set_option('display.float_format',lambda x : '%.3f'%x)
import entropy
Explanation: Paradigm entropy
This notebook shows some conditional entropy calculations from Ackerman & Malouf (in press). The Pite Saami data is taken from:
Wilbur, Joshua (2014). A Grammar of Pite Saami. Berlin: Langauge Science Press. [http://langsci-press.org/catalog/book/17]
End of explanation
saami = pd.read_table('saami.txt', index_col=0)
saami
sing = [c for c in saami.columns if c.endswith('sg')]
plur = [c for c in saami.columns if c.endswith('pl')]
print saami[sing].to_latex()
print saami[plur].to_latex()
Explanation: Read in the paradigms from tab-delimited file as a pandas DataFrame:
End of explanation
saami.describe()
Explanation: 1.pl and 2.pl are the cells with the largest number of distinct realizations and the augmentatives are the cells with the least:
End of explanation
len(set(saami.values.flatten()))
Explanation: Total number of distinct realizations:
End of explanation
np.log2(len(saami.index))
Explanation: If $D$ is the set of declensions for a particular paradigm, the probability (assuming all declensions are equally likely) of an arbitrary lexeme belonging to a particular paradigm $d$ is
$$P(d)=\frac{1}{|D|}$$
Since there are eight distinct classes, the probability of any lexeme belonging to any one class would be $\frac{1}{8}$. We could represent a lexeme's declension as a choice among eight equally likely alternatives, which thus has an entropy of $-\log_2 \frac{1}{8}=3$ bits. This is the declension entropy $H(D)$, the average information required to record the inflection class membership of a lexeme:
End of explanation
H = pd.DataFrame([entropy.entropy(saami)], index=['H'])
H
print H[sing].to_latex()
print H[plur].to_latex()
Explanation: Let $D_{c=r}$ be the set of declensions for which the paradigm cell $c$ has the formal realization $r$. Then the probability $P_{c}(r)$ of a paradigm cell $c$ of a particular lexeme having the realization $r$ is the probability of that lexeme belonging to one of the declensions in $D_{c=r}$, or:
$$P_{c}(r)=\sum_{d\in D_{c=r}}P(d)$$
The entropy of this distribution is the paradigm cell entropy $H(c)$, the uncertainty in the realization for a paradigm cell $c$:
End of explanation
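The `entropy` helper module imported above is not included in this notebook. A minimal sketch of what its per-cell entropy function might compute, following the formulas above and assuming equiprobable declensions (an assumed reconstruction, not the module's actual code):

```python
import numpy as np
import pandas as pd

def cell_entropy(paradigms):
    # paradigms: DataFrame with one row per declension, one column per cell
    def h(col):
        p = col.value_counts(normalize=True)   # P_c(r), declensions equiprobable
        return float(-(p * np.log2(p)).sum())  # H(c)
    return paradigms.apply(h)

demo = pd.DataFrame({
    "a": ["w", "x", "y", "z"],   # four distinct realizations -> 2 bits
    "b": ["u", "u", "u", "u"],   # a single realization -> 0 bits
})
H = cell_entropy(demo)
```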
print entropy.entropy(saami).mean()
print 2**entropy.entropy(saami).mean()
Explanation: The average cell entropy is a measure of how difficult it is for a speaker to guess the realization of any one wordform of any particular lexeme in the absence of any information about that lexeme's declension:
End of explanation
H = entropy.cond_entropy(saami)
H
Explanation: Above we defined $P_{c}(r)$, the probability that paradigm cell $c$ of a lexeme has the realization $r$. We can easily generalize that to the joint probability of two cells $c_1$ and $c_2$ having the realizations $r_1$ and $r_2$ respectively:
$$P_{c_1,c_2}(r_1,r_2)=\sum_{d\in D_{c_1=r_1 \wedge c_2=r_2}}P(d)$$
To quantify paradigm cell inter-predictability in terms of conditional entropy, we can define the conditional probability of a realization given another realization of a cell in the same lexeme's paradigm:
$$P_{c_1}(r_1|c_2=r_2)=\frac{P_{c_1,c_2}(r_1,r_2)}{P_{c_2}(r_2)}$$
With this background, the conditional entropy $H(c_1|c_2)$ of a cell $c_1$ given knowledge of the realization of $c_2$ for a particular lexeme is:
$$H(c_1|c_2)=-\sum_{r_1}\sum_{r_2}P_{c_1,c_2}(r_1,r_2)\log_2 P_{c_1}(r_1|c_2=r_2)$$
End of explanation
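Similarly, a sketch of the pairwise computation behind a `cond_entropy`-style helper (an assumed reconstruction, again with equiprobable declensions):

```python
import numpy as np
import pandas as pd

def pairwise_cond_entropy(paradigms, c1, c2):
    # H(c1|c2) = -sum_{r1,r2} P(r1,r2) log2 P(r1|r2), declensions equiprobable
    joint = paradigms.groupby([c1, c2]).size() / len(paradigms)  # P(r1, r2)
    marg = paradigms[c2].value_counts(normalize=True)            # P(r2)
    return -sum(p * np.log2(p / marg[r2]) for (r1, r2), p in joint.items())

demo = pd.DataFrame({
    "c1": ["a", "a", "b", "b"],
    "c2": ["x", "x", "y", "y"],   # c2 fully determines c1
    "c3": ["x", "y", "x", "y"],   # c3 tells us nothing about c1
})
```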
pd.DataFrame([H.mean(0)], index=['AVG'])
print pd.DataFrame([H.mean(0)],index=['AVG'])[sing].to_latex()
print pd.DataFrame([H.mean(0)],index=['AVG'])[plur].to_latex()
Explanation: The column averages measure predictedness, how hard it is to guess the realization of a cell given some other cell:
End of explanation
pd.DataFrame([H.mean(1)], index=['AVG'])
print pd.DataFrame([H.mean(1)],index=['AVG'])[sing].to_latex()
print pd.DataFrame([H.mean(1)],index=['AVG'])[plur].to_latex()
Explanation: And the row averages measure predictiveness, how hard it is to guess the realization of some other cell given this cell:
End of explanation
H = H.join(pd.Series(H.mean(1), name='AVG'))
H = H.append(pd.Series(H.mean(0), name='AVG'))
H
Explanation: Add row and column averages to the table:
End of explanation
print H[sing].to_latex(na_rep='---')
print H[plur].to_latex(na_rep='---')
Explanation: And format the result in $\LaTeX$
End of explanation
boot = entropy.bootstrap(saami, 999)
Explanation: Next we try a simple bootstrap simulation to test the importance of implicational relations in the paradigm.
Statistical hypothesis testing proceeds by identifying a statistic whose sampling distribution is known under the null hypothesis $H_0$, and then estimating the probability of finding a result which deviates from what would be expected under $H_0$ at least as much as the observed data does. In this case, $H_0$ is that implicational relations are not a factor in reducing average conditional entropy in Saami, and the relevant statistic is the average conditional entropy. Unfortunately, we have no theoretical basis for deriving the sampling distribution of average conditional entropy under $H_0$, which precludes the use of conventional statistical methods. However, we can use a simple computational procedure for estimating the sampling distribution of the average conditional entropy. Take Saami$'$, an alternate version of Saami with formal realizations assigned randomly to paradigm cells. More specifically, we generate Saami$'$ by constructing 4 random conjugations, where each conjugation is produced by randomly selecting for each of the paradigm cells one of the possible realizations of that cell. The result is a language with more or less the same number of declensions, paradigm cells, and allomorphs as genuine Saami, but with no implicational structure.
End of explanation
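The `entropy.bootstrap` helper is likewise not shown. One simulated Saami$'$ as described above might be built like this (a hedged sketch, not the module's actual code): each cell draws independently from that cell's attested realizations, destroying any implicational structure.

```python
import numpy as np
import pandas as pd

def random_language(paradigms, rng):
    # one simulated language: same cells and number of declensions,
    # but realizations sampled independently per cell
    return pd.DataFrame({
        c: rng.choice(paradigms[c].unique(), size=len(paradigms))
        for c in paradigms.columns
    }, index=paradigms.index)

demo = pd.DataFrame({"sg": ["a", "b", "a"], "pl": ["x", "x", "y"]})
sim = random_language(demo, np.random.default_rng(0))
```

Repeating this many times and taking the average conditional entropy of each simulated language yields the bootstrap distribution plotted below.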
boot.mean()
len(boot),boot.min()
Explanation: Averaged across 999 simulation runs (plus the original), the average average conditional entropy is notably higher than the true average conditional entropy of 0.116 bits:
End of explanation
sum(boot <= boot[0]) / 1000.
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
plt.rcParams['figure.figsize']= '8, 6'
plot = boot.hist(bins=40)
Explanation: Across the distribution of simulated Saami$'$s, the real Saami is an outlier: only 0.01% of the sample have an average conditional entropy as low or lower than that of real Saami:
End of explanation |
9,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summarize tidy tables
This script summarizes the water use and water suppy tidy tables generated by the CreateUsageTable and CreateSupplyTable scripts, respectively. Each table is then merged into a single dataframe to create a table listing water use and supply for each year/state combination.
Workflow
Import and summarize use table on state, listing usage amounts by use class and source class (surface/groundwater)
Import and summarize supply table, first on county so that amounts can be converted from mm to MGal/year.
Summarize the county supply table to the state level, listing the total MGal/year of supply in each state
Step1: Summarize USE table by county
Computes water usage for each county broken into each sector and source category.
Step2: Import and summarize supply table by county
Step3: Join Use and Supply Tables on Year and FIPS
Step4: Summarize for entire US | Python Code:
#Import libraries
import sys, os
import pandas as pd
import numpy as np
#Get file names; these files are created by the CreateUsageTable.py and CreateSupplyTable.py respectively
dataDir = '../../Data'
tidyuseFN = dataDir + os.sep + "UsageDataTidy.csv"
tidysupplyFN = dataDir + os.sep + "SupplyTableTidy.csv"
outCountyFN = dataDir + os.sep + "WaterByCounty.csv"
outStateFN = dataDir + os.sep + "WaterByState.csv"
outNationFN = dataDir + os.sep + "WaterBalanceData.csv"
Explanation: Summarize tidy tables
This script summarizes the water use and water suppy tidy tables generated by the CreateUsageTable and CreateSupplyTable scripts, respectively. Each table is then merged into a single dataframe to create a table listing water use and supply for each year/state combination.
Workflow
Import and summarize use table on state, listing usage amounts by use class and source class (surface/groundwater)
Import and summarize supply table, first on county so that amounts can be converted from mm to MGal/year.
Summarize the county supply table to the state level, listing the total MGal/year of supply in each state
End of explanation
#Read in the usage table from the csv file
dfUse = pd.read_csv(tidyuseFN, dtype={'FIPS': str})
#Remove rows with irrigation and thermoelectric sub-classes
#dropValues = ['Irrigation_Crop', 'Irrigation_Golf','ThermoElec_OnceThru', 'ThermoElec_Recirc']
dropValues = ['Irrigation','ThermoElec']
dfUse = dfUse[~dfUse['UseClass'].isin(dropValues)]
#Convert amounts from MGal/day to MGal/year
dfUse['Amount'] = dfUse['Amount'] * 365
#Add STATEFIPS column to dfUse (as left most 2 characters of FIPS values)
dfUse['STATEFIPS'] = dfUse['FIPS'].str[:2]
#Pivot on YEAR and FIPS listing usage in sector/source categories
dfUseFIPS = dfUse.pivot_table(index=['YEAR','STATE','FIPS'],
values='Amount',
aggfunc='sum',
columns=['UseClass','SrcClass'])
#Flatten hierarchical column names
dfUseFIPS.columns = ['_'.join(col).strip() for col in dfUseFIPS.columns.values]
#Remove indices so values are available as columns
dfUseFIPS.reset_index(inplace=True)
dfUseFIPS.head(2)
Explanation: Summarize USE table by county
Computes water usage for each county broken into each sector and source category.
End of explanation
#Read in the supply table from the csv file
dfSupply = pd.read_csv(tidysupplyFN, dtype={'FIPS': str, 'STATEFIPS': str})
#Compute supply as precipitation - evapotranspiration
#(See https://www.fs.fed.us/rm/value/docs/spatial_distribution_water_supply.pdf)
# * Could also use total_runoff
# * Values are in mm/year and need to be adjusted to MGal/year by mulitplying by weighted area
dfSupply['Supply'] = dfSupply['pr'] - dfSupply['et']
#Summarize supply on YEAR and FIPS
'''We take the mean mm/year across points in a county and then
mulitply by county area to get volume (mm * m3). These values
then need to by converted to MGal to give MGal/year
'''
#Compute mean runoff and supply on year and county
dfSupplyFIPS = dfSupply.groupby(['YEAR', 'STATEFIPS', 'FIPS', 'Area'])[['total_runoff', 'Supply']].mean()
#Reset the index so Year, StateFIPS, FIPS, and AREA become columns again
dfSupplyFIPS.reset_index(inplace=True)
#Convert mm/Year * county area (m2) into MGal/year - to match use values
''' m = [mm] / 1000;
m * [m2] = m3;
[m3] / 3785.41178 = 1 MGal'''
for param in ('total_runoff','Supply'):
dfSupplyFIPS[param] = (dfSupplyFIPS[param] / 1000.0) * dfSupplyFIPS.Area / 3785.41178
dfSupplyFIPS.head(2)
Explanation: Import and summarize supply table by county
End of explanation
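The mm/year to MGal/year conversion applied above can be isolated into a helper and sanity-checked: 1000 mm of water over an area of 3785.41178 m2 is exactly one million US gallons.

```python
def mm_year_to_mgal_year(depth_mm, area_m2):
    # depth in mm over an area in m^2 -> volume in m^3 -> million US gallons
    volume_m3 = (depth_mm / 1000.0) * area_m2  # 1 mm over 1 m^2 = 0.001 m^3
    return volume_m3 / 3785.41178              # 1 MGal = 3785.41178 m^3
```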
dfSupplyFIPS.columns.values
#Merge the two tables on YEAR and FIPS columns
dfAll = pd.merge(dfUseFIPS,dfSupplyFIPS, how='outer',on=['YEAR','FIPS'],left_index=True,right_index=True)
dfAll.head(2)
#Export to csv
dfAll.to_csv(outCountyFN, index=False, encoding='utf8')
Explanation: Join Use and Supply Tables on Year and FIPS
End of explanation
#Group by YEAR
dfUS = dfAll.groupby('YEAR').sum()
dfUS.head()
dfUS.reset_index(inplace=True)
dfUSm = pd.melt(dfUS,id_vars='YEAR',var_name='Group',value_name='MGal')
dfUSm.to_csv(outNationFN,index=False)
Explanation: Summarize for entire US
End of explanation |
9,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quiz
Step1: Quiz
Step7: Result
Step8: Quiz
Step9: Quiz
Step11: Quiz | Python Code:
import numpy as np
import pandas
import matplotlib.pyplot as plt
def entries_histogram(turnstile_weather):
'''
Task description is above.
'''
plt.figure()
turnstile_weather['ENTRIESn_hourly'].loc[turnstile_weather['rain'] == 1].hist() # your code here to plot a historgram for hourly entries when it is raining
turnstile_weather['ENTRIESn_hourly'].loc[turnstile_weather['rain'] == 0].hist() # your code here to plot a historgram for hourly entries when it is not raining
plt.title('Histogram of ENTRIESn_hourly')
plt.legend(['No Rain', 'Rain'])
return plt
Explanation: Quiz: 1 - Exploratory Data Analysis
Note: Sample sizes differ. "Rain" has many fewer samples. These histograms are primarily to compare the distributions of each.
Also the x-axis in this example image has been truncated at 6,000, cutting off outliers in the long tail which extends beyond 50,000.
Task description
Before we perform any analysis, it might be useful to take a
look at the data we're hoping to analyze.
More specifically, let's
- examine the hourly entries in our NYC subway data and
- determine what distribution the data follows.
- This data is stored in a dataframe called turnstile_weather under the ['ENTRIESn_hourly'] column.
Let's plot two histograms on the same axes to show hourly
entries when raining vs. when not raining.
Here's an example on how to plot histograms with pandas and matplotlib:
turnstile_weather['column_to_graph'].hist()
Your histogram may look similar to the bar graph in the instructor notes below.
You can read a bit about using matplotlib and pandas to plot histograms here:
http://pandas.pydata.org/pandas-docs/stable/visualization.html#histograms
You can see the information contained within the turnstile weather data here:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/turnstile_data_master_with_weather.csv
End of explanation
import numpy as np
import scipy
import scipy.stats
import pandas
def mann_whitney_plus_means(turnstile_weather):
'''
Task description is above.
'''
### YOUR CODE HERE ###
# Extract rain data through hourly in turnstile_data
rain_data = turnstile_weather['ENTRIESn_hourly'].loc[turnstile_weather['rain'] == 1]
non_rain_data = turnstile_weather['ENTRIESn_hourly'].loc[turnstile_weather['rain'] == 0]
# Compute mean of `rain_data` and `non_rain_data` using numpy's mean function
with_rain_mean = np.mean(rain_data)
without_rain_mean = np.mean(non_rain_data)
# Run Mann Whitney U-test on `ENTRIESn_hourly` column
[U, p] = scipy.stats.mannwhitneyu(rain_data, non_rain_data)
return with_rain_mean, without_rain_mean, U, p # leave this line for the grader
Explanation: Quiz: 2 - Welch's T-Test?
Quiz 2 is a question.
Quiz: 3 - Mann-Whitney U-Test
Task description
This function will consume the turnstile_weather dataframe containing
our final turnstile weather data.
You will want to take the means and run the Mann Whitney U-test on the
ENTRIESn_hourly column in the turnstile_weather dataframe.
This function should return:
1) the mean of entries with rain
2) the mean of entries without rain
3) the Mann-Whitney U-statistic and p-value comparing the number of entries
with rain and the number of entries without rain
You should feel free to use scipy's Mann-Whitney implementation, and you
might also find it useful to use numpy's mean function.
Here are the functions' documentation:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html
You can look at the final turnstile weather data at the link below:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/turnstile_data_master_with_weather.csv
End of explanation
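For intuition about what `scipy.stats.mannwhitneyu` computes, here is a bare-bones version of the U statistic built from rank sums. It ignores ties and the normal approximation used for the p-value, so it is illustrative only, not a replacement for the scipy call:

```python
def mann_whitney_u(x, y):
    # rank all observations together, sum the ranks of x,
    # then U_x = R_x - n_x(n_x + 1)/2; report the smaller of U_x and U_y
    nx, ny = len(x), len(y)
    ranks = {v: i + 1 for i, v in enumerate(sorted(list(x) + list(y)))}  # assumes no ties
    r_x = sum(ranks[v] for v in x)
    u_x = r_x - nx * (nx + 1) / 2.0
    return min(u_x, nx * ny - u_x)
```

A U near zero means the two samples are almost completely separated; a U near nx*ny/2 means they overlap heavily.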
import csv
import numpy as np
import pandas as pd
from ggplot import *
'''
In this question, you need to:
1) implement the compute_cost() and gradient_descent() procedures
2) Select features (in the predictions procedure) and make predictions.
'''
def normalize_features(df):
    '''Normalize the features in the data set.'''
mu = df.mean()
sigma = df.std()
if (sigma == 0).any():
raise Exception("One or more features had the same value for all samples, and thus could " + \
"not be normalized. Please do not include features with only a single value " + \
"in your model.")
df_normalized = (df - df.mean()) / df.std()
return df_normalized, mu, sigma
def compute_cost(features, values, theta):
    '''
    Compute the cost function given a set of features / values,
    and the values for our thetas.
    This can be the same code as the compute_cost function in the lesson #3 exercises,
    but feel free to implement your own.
    '''
# your code here
m = len(values)
    sum_of_square_errors = np.square(np.dot(features, theta) - values).sum()
cost = sum_of_square_errors / (2*m)
return cost
def gradient_descent(features, values, theta, alpha, num_iterations):
    '''
    Perform gradient descent given a data set with an arbitrary number of features.
    This can be the same gradient descent code as in the lesson #3 exercises,
    but feel free to implement your own.
    '''
m = len(values)
cost_history = []
for i in range(num_iterations):
        # Append the cost for the current theta to cost_history
        cost_history.append(compute_cost(features, values, theta))
        # Gradient step: theta := theta + (alpha/m) * X^T (y - X theta)
        diff = np.dot(features.transpose(), values - np.dot(features, theta))
        theta += (alpha/len(values))*diff
    return theta, pd.Series(cost_history)
def predictions(dataframe):
'''
The NYC turnstile data is stored in a pandas dataframe called weather_turnstile.
Using the information stored in the dataframe, let's predict the ridership of
the NYC subway using linear regression with gradient descent.
You can download the complete turnstile weather dataframe here:
https://www.dropbox.com/s/meyki2wl9xfa7yk/turnstile_data_master_with_weather.csv
Your prediction should have a R^2 value of 0.40 or better.
You need to experiment using various input features contained in the dataframe.
We recommend that you don't use the EXITSn_hourly feature as an input to the
linear model because we cannot use it as a predictor: we cannot use exits
counts as a way to predict entry counts.
Note: Due to the memory and CPU limitation of our Amazon EC2 instance, we will
    give you a random subset (~15%) of the data contained in
turnstile_data_master_with_weather.csv. You are encouraged to experiment with
this computer on your own computer, locally.
If you'd like to view a plot of your cost history, uncomment the call to
plot_cost_history below. The slowdown from plotting is significant, so if you
are timing out, the first thing to do is to comment out the plot command again.
If you receive a "server has encountered an error" message, that means you are
hitting the 30-second limit that's placed on running your program. Try using a
smaller number for num_iterations if that's the case.
If you are using your own algorithm/models, see if you can optimize your code so
that it runs faster.
'''
# Select Features (try different features!)
features_list = ['fog', 'meandewpti', 'rain', 'precipi', 'Hour', 'meantempi']
features = dataframe[features_list]
# Add UNIT to features using dummy variables
    dummy_units = pd.get_dummies(dataframe['UNIT'], prefix='unit')
features = features.join(dummy_units)
# Values
values = dataframe['ENTRIESn_hourly']
m = len(values)
features, mu, sigma = normalize_features(features)
features['ones'] = np.ones(m) # Add a column of 1s (y intercept)
# Convert features and values to numpy arrays
features_array = np.array(features)
values_array = np.array(values)
# Set values for alpha, number of iterations.
alpha = 0.1 # please feel free to change this value
num_iterations = 75 # please feel free to change this value
# Initialize theta, perform gradient descent
theta_gradient_descent = np.zeros(len(features.columns))
theta_gradient_descent, cost_history = gradient_descent(features_array,
values_array,
theta_gradient_descent,
alpha,
num_iterations)
plot = None
# -------------------------------------------------
# Uncomment the next line to see your cost history
# -------------------------------------------------
plot = plot_cost_history(alpha, cost_history)
#
# Please note, there is a possibility that plotting
# this in addition to your calculation will exceed
# the 30 second limit on the compute servers.
predictions = np.dot(features_array, theta_gradient_descent)
return predictions, plot
def plot_cost_history(alpha, cost_history):
    '''
    This function is for viewing the plot of your cost history.
    You can run it by uncommenting this
        plot_cost_history(alpha, cost_history)
    call in predictions.
    If you want to run this locally, you should print the return value
    from this function.
    '''
    cost_df = pd.DataFrame({
'Cost_History': cost_history,
'Iteration': range(len(cost_history))
})
return ggplot(cost_df, aes('Iteration', 'Cost_History')) + \
geom_point() + ggtitle('Cost History for alpha = %.3f' % alpha )
# -------------------------------------------------
# Locally run
# -------------------------------------------------
# data = pd.read_csv('./data/turnstile_data_master_with_weather.csv')
# predictions(data)
Explanation: Result:
(1105.4463767458733, 1090.278780151855, 1924409167.0, 0.024940392294493356)
Quiz: 4 - Ridership On Rainy Vs. Nonrainy Days
Q: Is the distribution of the number of entries statistically different between rainy and non rainy days?
A: Yes
For more intuitions about this test:
https://www.graphpad.com/guides/prism/7/statistics/how_the_mann-whitney_test_works.htm?toc=0&printWindow
Quiz: 5 - Linear Regression
Task description
In this question, you need to:
- 1) Implement the compute_cost() and gradient_descent() procedures
- 2) Select features (in the predictions procedure) and make predictions.
Note:
Plotting your cost history will help you convince yourself that gradient descent has converged to the minimum cost, but it's more of a learning tool than data to include in a report.
End of explanation
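As a standalone sanity check of the update rule used above, theta := theta + (alpha/m) * X^T (y - X theta), here it is applied to a toy problem where the answer is known (this does not use the notebook's helpers):

```python
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # intercept column + one feature
y = np.array([1.0, 3.0, 5.0])                       # exactly y = 1 + 2x

theta = np.zeros(2)
alpha, m = 0.1, len(y)
for _ in range(2000):
    theta += (alpha / m) * X.T.dot(y - X.dot(theta))
```

After enough iterations theta converges to the true coefficients [1, 2].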
import numpy as np
import scipy
import matplotlib.pyplot as plt
def plot_residuals(turnstile_weather, predictions):
'''
Task description is above.
'''
plt.figure()
(turnstile_weather['ENTRIESn_hourly'] - predictions).hist(bins=100)
return plt
Explanation: Quiz: 6 - Plotting Residuals
Task description
Using the same methods that we used to plot a histogram of entries per hour for our data,
why don't you make a histogram of the residuals (that is, the difference between the
original hourly entry data and the predicted values).
Try different binwidths for your histogram.
Based on this residual histogram, do you have any insight into how our model
performed? Reading a bit on this webpage might be useful:
http://www.itl.nist.gov/div898/handbook/pri/section2/pri24.htm
End of explanation
import numpy as np
import scipy
import matplotlib.pyplot as plt
import sys
def compute_r_squared(data, predictions):
'''
Task description is above.
'''
# your code here
r_squared = 1 - sum((predictions - data)**2) / sum((data - np.mean(data))**2)
return r_squared
Explanation: Quiz: 7 - Compute R^2
Task description
In exercise 5, we calculated the $R^2$ value for you. But why don't you try and
and calculate the $R^2$ value yourself.
Given a list of original data points, and also a list of predicted data points,
- write a function that will compute and
- return the coefficient of determination ($R^2$)
for this data.
numpy.mean() and numpy.sum() might both be useful here, but not necessary.
Documentation about numpy.mean() and numpy.sum() below:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html
End of explanation
# -*- coding: utf-8 -*-
import numpy as np
import pandas
import scipy
import statsmodels.api as sm
def predictions(weather_turnstile):
#
# Your implementation goes here. Feel free to write additional
# helper functions
#
return prediction
Explanation: Quiz: 8 - More Linear Regression (Optional)
Before choosing to use Linear Regression, follow a checklist like this:
http://www.graphpad.com/guides/prism/6/curve-fitting/index.htm?reg_analysischeck_linearreg.htm
Analysis checklist: Linear regression
[ ] Can the relationship between X and Y be graphed as a straight line?
In many experiments the relationship between X and Y is curved, making linear regression inappropriate. It rarely helps to transform the data to force the relationship to be linear. Better, use nonlinear curve fitting.
[ ] Is the scatter of data around the line Gaussian (at least approximately)?
Linear regression analysis assumes that the scatter of data around the best-fit line is Gaussian.
[ ] Is the variability the same everywhere?
Linear regression assumes that scatter of points around the best-fit line has the same standard deviation all along the curve. The assumption is violated if the points with high or low X values tend to be further from the best-fit line. The assumption that the standard deviation is the same everywhere is termed homoscedasticity. (If the scatter goes up as Y goes up, you need to perform a weighted regression. Prism can't do this via the linear regression analysis. Instead, use nonlinear regression but choose to fit to a straight-line model.
[ ] Do you know the X values precisely?
The linear regression model assumes that X values are exactly correct, and that experimental error or biological variability only affects the Y values. This is rarely the case, but it is sufficient to assume that any imprecision in measuring X is very small compared to the variability in Y.
[ ] Are the data points independent?
Whether one point is above or below the line is a matter of chance, and does not influence whether another point is above or below the line.
[ ] Are the X and Y values intertwined?
If the value of X is used to calculate Y (or the value of Y is used to calculate X) then linear regression calculations are invalid. One example is a Scatchard plot, where the Y value (bound/free) is calculated from the X value. Another example would be a graph of midterm exam scores (X) vs. total course grades(Y). Since the midterm exam score is a component of the total course grade, linear regression is not valid for these data.
Note that if you are using statsmodels and your model does not include a constant, statsmodels will calculate R^2 differently from the grader. See this link for more information:
http://www.ats.ucla.edu/stat/mult_pkg/faq/general/noconstant.htm
Task description
In this optional exercise, you should complete the function called
predictions(turnstile_weather). This function takes in our pandas
turnstile weather dataframe, and returns a set of predicted ridership values,
based on the other information in the dataframe.
In exercise 3.5 we used Gradient Descent in order to compute the coefficients
theta used for the ridership prediction.
Here you should attempt to implement another way of computing the coeffcients theta.
You may also try using a reference implementation such as:
http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLS.html
One of the advantages of the statsmodels implementation is that it gives you
easy access to the values of the coefficients theta. This can help you infer relationships
between variables in the dataset.
You may also experiment with polynomial terms as part of the input variables.
The following links might be useful:
http://en.wikipedia.org/wiki/Ordinary_least_squares
http://en.wikipedia.org/w/index.php?title=Linear_least_squares_(mathematics)
http://en.wikipedia.org/wiki/Polynomial_regression
This is your playground. Go wild!
How does your choice of linear regression compare to linear regression
with gradient descent computed in Exercise 3.5?
You can look at the information contained in the turnstile_weather dataframe below:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/turnstile_data_master_with_weather.csv
Note: due to the memory and CPU limitation of our amazon EC2 instance, we will
give you a random subset (~10%) of the data contained in turnstile_data_master_with_weather.csv
If you receive a "server has encountered an error" message, that means you are hitting
the 30 second limit that's placed on running your program. See if you can optimize your code so it
runs faster.
End of explanation |
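One "other way of computing the coefficients theta", as the exercise suggests, is the closed-form least-squares solution; statsmodels' OLS computes essentially this and additionally reports standard errors and R^2. A minimal numpy sketch (not a grader-ready implementation):

```python
import numpy as np

def ols_theta(X, y):
    # least-squares solution of X theta ~ y (equivalent to the normal equations)
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # intercept column + one feature
y = np.array([1.0, 3.0, 5.0])                       # exactly y = 1 + 2x
theta = ols_theta(X, y)
```

Unlike gradient descent, this needs no learning rate or iteration count, though on very wide feature matrices an iterative method can still be preferable.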
9,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TEXT
This notebook serves as supporting material for topics covered in Chapter 22 - Natural Language Processing from the book Artificial Intelligence
Step1: CONTENTS
Text Models
Viterbi Text Segmentation
Information Retrieval
Information Extraction
Decoders
TEXT MODELS
Before we start analyzing text processing algorithms, we will need to build some language models. Those models serve as a look-up table for character or word probabilities (depending on the type of model). These models can give us the probabilities of words or character sequences appearing in text. Take as example "the". Text models can give us the probability of "the", P("the"), either as a word or as a sequence of characters ("t" followed by "h" followed by "e"). The first representation is called "word model" and deals with words as distinct objects, while the second is a "character model" and deals with sequences of characters as objects. Note that we can specify the number of words or the length of the char sequences to better suit our needs. So, given that number of words equals 2, we have probabilities in the form P(word1, word2). For example, P("of", "the"). For char models, we do the same but for chars.
It is also useful to store the conditional probabilities of words given preceding words. That means, given we found the words "of" and "the", what is the chance the next word will be "world"? More formally, P("world"|"of", "the"). Generalizing, P(Wi|Wi-1, Wi-2, ... , Wi-n).
We call the word model N-Gram Word Model (from the Greek "gram", the root of "write", or the word for "letter") and the char model N-Gram Character Model. In the special case where N is 1, we call the models Unigram Word Model and Unigram Character Model respectively.
In the text module we implement the two models (both their unigram and n-gram variants) by inheriting from the CountingProbDist from learning.py. Note that CountingProbDist does not return the actual probability of each object, but the number of times it appears in our test data.
For word models we have UnigramWordModel and NgramWordModel. We supply them with a text file and they show the frequency of the different words. We have UnigramCharModel and NgramCharModel for the character models.
Execute the cells below to take a look at the code.
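Before diving into the text.py classes, the counting idea behind these models can be sketched in a few lines of plain Python. This is an illustrative simplification, not the text.py implementation, and the helper name ngram_counts is made up here:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count every n-word sequence in a list of tokens."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "the cat sat on the mat".split()
unigrams = ngram_counts(tokens, 1)
bigrams = ngram_counts(tokens, 2)

print(unigrams[("the",)])       # 2
print(bigrams[("the", "cat")])  # 1
# A conditional probability P(w | "the") can be estimated as
# bigrams[("the", w)] / unigrams[("the",)]
print(bigrams[("the", "cat")] / unigrams[("the",)])  # 0.5
```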
Step2: Next we build our models. The text file we will use to build them is Flatland, by Edwin A. Abbott. We will load it from here. In that directory you can find other text files we might get to use here.
Getting Probabilities
Here we will take a look at how to read text and find the probabilities for each model, and how to retrieve them.
First the word models
Step3: We see that the most used word in Flatland is 'the', with 2081 occurences, while the most used sequence is 'of the' with 368 occurences. Also, the probability of 'an' is approximately 0.003, while for 'i was' it is close to 0.001. Note that the strings used as keys are all lowercase. For the unigram model, the keys are single strings, while for n-gram models we have n-tuples of strings.
Below we take a look at how we can get information from the conditional probabilities of the model, and how we can generate the next word in a sequence.
Step4: First we print all the possible words that come after 'i was' and the times they have appeared in the model. Next we print the probability of 'once' appearing after 'i was', and finally we pick a word to proceed after 'i was'. Note that the word is picked according to its probability of appearing (high appearance count means higher chance to get picked).
Let's take a look at the two character models
Step5: The most common letter is 'e', appearing more than 19000 times, and the most common sequence is "_t". That is, a space followed by a 't'. Note that even though we do not count spaces for word models or unigram character models, we do count them for n-gram char models.
Also, the probability of the letter 'z' appearing is close to 0.0006, while for the bigram 'gh' it is 0.003.
Generating Samples
Apart from reading the probabilities for n-grams, we can also use our model to generate word sequences, using the samples function in the word models.
Step6: For the unigram model, we mostly get gibberish, since each word is picked according to its frequency of appearance in the text, without taking into consideration preceding words. As we increase n though, we start to get samples that do have some semblance of conherency and do remind a little bit of normal English. As we increase our data, these samples will get better.
Let's try it. We will add to the model more data to work with and let's see what comes out.
Step7: Notice how the samples start to become more and more reasonable as we add more data and increase the n parameter. We are still a long way to go though from realistic text generation, but at the same time we can see that with enough data even rudimentary algorithms can output something almost passable.
VITERBI TEXT SEGMENTATION
Overview
We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the Viterbi Segmentation algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.
The algorithm takes a dynamic programming approach. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into "windows", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and finds the complete sequence of words.
Implementation
Step8: The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
The "window" is w and it includes the characters from j to i. We use it to "build" the following sequence
Step9: The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
INFORMATION RETRIEVAL
Overview
With Information Retrieval (IR) we find documents that are relevant to a user's needs for information. A popular example is a web search engine, which finds and presents to a user pages relevant to a query. Information retrieval is not limited only to returning documents though, but can also be used for other type of queries. For example, answering questions when the query is a question, returning information when the query is a concept, and many other applications. An IR system is comprised of the following
Step10: The stopwords argument signifies words in the queries that should not be accounted for in documents. Usually they are very common words that do not add any significant information for a document's relevancy.
A quick guide for the functions in the IRSystem class
Step11: The class creates an IR System with the stopwords "how do i the a of". We could add more words to exclude, but the queries we will test will generally be in that format, so it is convenient. After the initialization of the system, we get the manual files and start indexing them.
Let's build our Unix consultant and run a query
Step12: We asked how to remove a file and the top result was the rm (the Unix command for remove) manual. This is exactly what we wanted! Let's try another query
Step13: Even though we are basically asking for the same thing, we got a different top result. The diff command shows the differences between two files. So the system failed us and presented us an irrelevant document. Why is that? Unfortunately our IR system considers each word independent. "Remove" and "delete" have similar meanings, but since they are different words our system will not make the connection. So, the diff manual which mentions a lot the word delete gets the nod ahead of other manuals, while the rm one isn't in the result set since it doesn't use the word at all.
INFORMATION EXTRACTION
Information Extraction (IE) is a method for finding occurrences of object classes and relationships in text. Unlike IR systems, an IE system includes (limited) notions of syntax and semantics. While it is difficult to extract object information in a general setting, for more specific domains the system is very useful. One model of an IE system makes use of templates that match with strings in a text.
A typical example of such a model is reading prices from web pages. Prices usually appear after a dollar sign and consist of numbers, possibly followed by a decimal point and two digits. Before the price, usually there will appear a string like "price
Step14: Decoding a Caesar cipher
To decode a Caesar cipher we exploit the fact that not all letters in the alphabet are used equally. Some letters are used more than others and some pairs of letters are more probable to occur together. We call a pair of consecutive letters a <b>bigram</b>.
Step15: We use CountingProbDist to get the probability distribution of bigrams. In the latin alphabet consists of only only 26 letters. This limits the total number of possible substitutions to 26. We reverse the shift encoding for a given n and check how probable it is using the bigram distribution. We try all 26 values of n, i.e. from n = 0 to n = 26 and use the value of n which gives the most probable plaintext.
Step16: Example
Let us encode a secret message using the Caesar cipher and then try decoding it using ShiftDecoder. We will again use flatland.txt to build the text model
Step17: Permutation Decoder
Now let us try to decode messages encrypted by a general monoalphabetic substitution cipher. The letters in the alphabet can be replaced by any permutation of letters. For example if the alphabet consisted of {A B C} then it can be replaced by {A C B}, {B A C}, {B C A}, {C A B}, {C B A} or even {A B C} itself. Suppose we choose the permutation {C B A}, then the plain text "CAB BA AAC" would become "ACB BC CCA". We can see that the Caesar cipher is also a form of permutation cipher where the permutation is a cyclic permutation. Unlike the Caesar cipher, it is infeasible to try all possible permutations. The number of possible permutations in the Latin alphabet is 26! which is of the order $10^{26}$. We use graph search algorithms to search for a 'good' permutation.
Step18: Each state/node in the graph is represented as a letter-to-letter map. If there is no mapping for a letter it means the letter is unchanged in the permutation. These maps are stored as dictionaries. Each dictionary is a 'potential' permutation. We use the word 'potential' because not every dictionary represents a valid permutation, since a permutation cannot have repeating elements. For example the dictionary {'A'
from text import *
from utils import open_data
from notebook import psource
Explanation: TEXT
This notebook serves as supporting material for topics covered in Chapter 22 - Natural Language Processing from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from text.py.
End of explanation
psource(UnigramWordModel, NgramWordModel, UnigramCharModel, NgramCharModel)
Explanation: CONTENTS
Text Models
Viterbi Text Segmentation
Information Retrieval
Information Extraction
Decoders
TEXT MODELS
Before we start analyzing text processing algorithms, we will need to build some language models. Those models serve as a look-up table for character or word probabilities (depending on the type of model). These models can give us the probabilities of words or character sequences appearing in text. Take as example "the". Text models can give us the probability of "the", P("the"), either as a word or as a sequence of characters ("t" followed by "h" followed by "e"). The first representation is called "word model" and deals with words as distinct objects, while the second is a "character model" and deals with sequences of characters as objects. Note that we can specify the number of words or the length of the char sequences to better suit our needs. So, given that number of words equals 2, we have probabilities in the form P(word1, word2). For example, P("of", "the"). For char models, we do the same but for chars.
It is also useful to store the conditional probabilities of words given preceding words. That means, given we found the words "of" and "the", what is the chance the next word will be "world"? More formally, P("world"|"of", "the"). Generalizing, P(Wi|Wi-1, Wi-2, ... , Wi-n).
We call the word model N-Gram Word Model (from the Greek "gram", the root of "write", or the word for "letter") and the char model N-Gram Character Model. In the special case where N is 1, we call the models Unigram Word Model and Unigram Character Model respectively.
In the text module we implement the two models (both their unigram and n-gram variants) by inheriting from the CountingProbDist from learning.py. Note that CountingProbDist does not return the actual probability of each object, but the number of times it appears in our test data.
For word models we have UnigramWordModel and NgramWordModel. We supply them with a text file and they show the frequency of the different words. We have UnigramCharModel and NgramCharModel for the character models.
Execute the cells below to take a look at the code.
End of explanation
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['an'])
print(P2[('i', 'was')])
Explanation: Next we build our models. The text file we will use to build them is Flatland, by Edwin A. Abbott. We will load it from here. In that directory you can find other text files we might get to use here.
Getting Probabilities
Here we will take a look at how to read text and find the probabilities for each model, and how to retrieve them.
First the word models:
End of explanation
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P3 = NgramWordModel(3, wordseq)
print("Conditional Probabilities Table:", P3.cond_prob[('i', 'was')].dictionary, '\n')
print("Conditional Probability of 'once' give 'i was':", P3.cond_prob[('i', 'was')]['once'], '\n')
print("Next word after 'i was':", P3.cond_prob[('i', 'was')].sample())
Explanation: We see that the most used word in Flatland is 'the', with 2081 occurrences, while the most used sequence is 'of the' with 368 occurrences. Also, the probability of 'an' is approximately 0.003, while for 'i was' it is close to 0.001. Note that the strings used as keys are all lowercase. For the unigram model, the keys are single strings, while for n-gram models we have n-tuples of strings.
Below we take a look at how we can get information from the conditional probabilities of the model, and how we can generate the next word in a sequence.
End of explanation
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramCharModel(wordseq)
P2 = NgramCharModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['z'])
print(P2[('g', 'h')])
Explanation: First we print all the possible words that come after 'i was' and the times they have appeared in the model. Next we print the probability of 'once' appearing after 'i was', and finally we pick a word to proceed after 'i was'. Note that the word is picked according to its probability of appearing (high appearance count means higher chance to get picked).
Let's take a look at the two character models:
End of explanation
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
P3 = NgramWordModel(3, wordseq)
print(P1.samples(10))
print(P2.samples(10))
print(P3.samples(10))
Explanation: The most common letter is 'e', appearing more than 19000 times, and the most common sequence is "_t". That is, a space followed by a 't'. Note that even though we do not count spaces for word models or unigram character models, we do count them for n-gram char models.
Also, the probability of the letter 'z' appearing is close to 0.0006, while for the bigram 'gh' it is 0.003.
Generating Samples
Apart from reading the probabilities for n-grams, we can also use our model to generate word sequences, using the samples function in the word models.
End of explanation
data = open_data("EN-text/flatland.txt").read()
data += open_data("EN-text/sense.txt").read()
wordseq = words(data)
P3 = NgramWordModel(3, wordseq)
P4 = NgramWordModel(4, wordseq)
P5 = NgramWordModel(5, wordseq)
P7 = NgramWordModel(7, wordseq)
print(P3.samples(15))
print(P4.samples(15))
print(P5.samples(15))
print(P7.samples(15))
Explanation: For the unigram model, we mostly get gibberish, since each word is picked according to its frequency of appearance in the text, without taking into consideration preceding words. As we increase n though, we start to get samples that have some semblance of coherency and begin to resemble normal English. As we increase our data, these samples will get better.
Let's try it. We will add to the model more data to work with and let's see what comes out.
End of explanation
psource(viterbi_segment)
Explanation: Notice how the samples start to become more and more reasonable as we add more data and increase the n parameter. We are still a long way to go though from realistic text generation, but at the same time we can see that with enough data even rudimentary algorithms can output something almost passable.
VITERBI TEXT SEGMENTATION
Overview
We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the Viterbi Segmentation algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.
The algorithm takes a dynamic programming approach. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into "windows", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and finds the complete sequence of words.
Implementation
End of explanation
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P = UnigramWordModel(wordseq)
text = "itiseasytoreadwordswithoutspaces"
s, p = viterbi_segment(text,P)
print("Sequence of words is:",s)
print("Probability of sequence is:",p)
Explanation: The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
The "window" is w and it includes the characters from j to i. We use it to "build" the following sequence: from the start to j and then w. We have previously calculated the probability from the start to j, so now we multiply that probability by P[w] to get the probability of the whole sequence. If that probability is greater than the probability we have calculated so far for the sequence from the start to i (best[i]), we update it.
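The dynamic program just described can also be sketched stand-alone. The toy probability table below is invented for illustration and is far cruder than a real UnigramWordModel:

```python
def viterbi_segment_sketch(text, P):
    """Best segmentation of `text` under a unigram word distribution P.

    best[i] holds the probability of the best segmentation of text[:i];
    words[i] holds the last word of that segmentation.
    """
    n = len(text)
    best = [1.0] + [0.0] * n
    words = [''] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            w = text[j:i]                    # the candidate "window"
            p = best[j] * P.get(w, 0.0)
            if p > best[i]:
                best[i], words[i] = p, w
    # Trace back from the end to recover the word sequence.
    sequence, i = [], n
    while i > 0:
        sequence.insert(0, words[i])
        i -= len(words[i])
    return sequence, best[n]

# Toy unigram probabilities, made up for this example.
P = {'it': 0.1, 'is': 0.1, 'easy': 0.05, 'a': 0.2, 'e': 0.01, 'sy': 0.01}
print(viterbi_segment_sketch('itiseasy', P))  # (['it', 'is', 'easy'], 0.0005)
```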
Example
The model the algorithm uses is the UnigramWordModel. First we will build the model using the Flatland text and then we will try to separate a space-devoid sentence.
End of explanation
psource(IRSystem)
Explanation: The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
INFORMATION RETRIEVAL
Overview
With Information Retrieval (IR) we find documents that are relevant to a user's needs for information. A popular example is a web search engine, which finds and presents to a user pages relevant to a query. Information retrieval is not limited only to returning documents though, but can also be used for other type of queries. For example, answering questions when the query is a question, returning information when the query is a concept, and many other applications. An IR system is comprised of the following:
A body (called corpus) of documents: A collection of documents on which the IR system will work.
A query language: A query represents what the user wants.
Results: The documents the system grades as relevant to a user's query and needs.
Presentation of the results: How the results are presented to the user.
How does an IR system determine which documents are relevant though? We can mark a document as relevant if all the words in the query appear in it, and mark it as irrelevant otherwise. We can even extend the query language to support boolean operations (for example, "paint AND brush") and then mark as relevant the outcome of the query for the document. This technique though does not give a level of relevancy. All the documents are either relevant or irrelevant, but in reality some documents are more relevant than others.
So, instead of a boolean relevancy system, we use a scoring function. There are many scoring functions around for many different situations. One of the most used takes into account the frequency of the words appearing in a document, the frequency of a word appearing across documents (for example, the word "a" appears a lot, so it is not very important) and the length of a document (since large documents will have higher occurrences for the query terms, but a short document with a lot of occurrences seems very relevant). We combine these properties in a formula and we get a numeric score for each document, so we can then quantify relevancy and pick the best documents.
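A widely used instance of such a scoring function is TF-IDF (term frequency times inverse document frequency). The sketch below illustrates the general idea only; it is not the exact formula text.py uses:

```python
import math

def tfidf_score(query, doc, docs):
    """Score `doc` for `query`: term frequency, damped by how many
    documents use the term, normalized by document length."""
    score = 0.0
    for term in query:
        tf = doc.count(term)               # occurrences in this document
        df = sum(term in d for d in docs)  # number of documents using it
        if tf and df:
            score += (tf / len(doc)) * math.log(len(docs) / df)
    return score

docs = [
    "remove a file with rm".split(),
    "the the the a a of".split(),
]
query = "remove file".split()
scores = [tfidf_score(query, d, docs) for d in docs]
print(scores)
assert scores[0] > scores[1]  # the rm manual outranks the stopword soup
```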
These scoring functions are not perfect though and there is room for improvement. For instance, for the above scoring function we assume each word is independent. That is not the case though, since words can share meaning. For example, the words "painter" and "painters" are closely related. If in a query we have the word "painter" and in a document the word "painters" appears a lot, this might be an indication that the document is relevant but we are missing out since we are only looking for "painter". There are a lot of ways to combat this. One of them is to reduce the query and document words into their stems. For example, both "painter" and "painters" have "paint" as their stem form. This can improve slightly the performance of algorithms.
To determine how good an IR system is, we give the system a set of queries (for which we know the relevant pages beforehand) and record the results. The two measures for performance are precision and recall. Precision measures the proportion of result documents that actually are relevant. Recall measures the proportion of relevant documents (which, as mentioned before, we know in advance) appearing in the result documents.
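Precision and recall are straightforward to compute once we have the result list and the known-relevant set; the document names below are hypothetical:

```python
def precision_recall(results, relevant):
    """results: documents returned by the system; relevant: the known-good set."""
    retrieved_relevant = set(results) & set(relevant)
    precision = len(retrieved_relevant) / len(results)
    recall = len(retrieved_relevant) / len(relevant)
    return precision, recall

# Hypothetical run: the system returned 4 documents, 3 of which are
# among the 5 documents known in advance to be relevant.
p, r = precision_recall(['d1', 'd2', 'd3', 'd9'], ['d1', 'd2', 'd3', 'd4', 'd5'])
print(p, r)  # 0.75 0.6
```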
Implementation
You can read the source code by running the command below:
End of explanation
psource(UnixConsultant)
Explanation: The stopwords argument signifies words in the queries that should not be accounted for in documents. Usually they are very common words that do not add any significant information for a document's relevancy.
A quick guide for the functions in the IRSystem class:
index_document: Add document to the collection of documents (named documents), which is a list of tuples. Also, count how many times each word in the query appears in each document.
index_collection: Index a collection of documents given by filenames.
query: Returns a list of n pairs of (score, docid) sorted on the score of each document. Also takes care of the special query "learn: X", where instead of the normal functionality we present the output of the terminal command "X".
score: Scores a given document for the given word using log(1+k)/log(1+n), where k is the number of occurrences of the query word in the document and n is the total number of words in the document. Other scoring functions can be used and you can override this function to better suit your needs.
total_score: Calculate the sum of all the query words in given document.
present/present_results: Presents the results as a list.
We also have the class Document that holds metadata of documents, like their title, url and number of words. An additional class, UnixConsultant, can be used to initialize an IR System for Unix command manuals. This is the example we will use to showcase the implementation.
Example
First let's take a look at the source code of UnixConsultant.
End of explanation
uc = UnixConsultant()
q = uc.query("how do I remove a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
Explanation: The class creates an IR System with the stopwords "how do i the a of". We could add more words to exclude, but the queries we will test will generally be in that format, so it is convenient. After the initialization of the system, we get the manual files and start indexing them.
Let's build our Unix consultant and run a query:
End of explanation
q = uc.query("how do I delete a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
Explanation: We asked how to remove a file and the top result was the rm (the Unix command for remove) manual. This is exactly what we wanted! Let's try another query:
End of explanation
plaintext = "ABCDWXYZ"
ciphertext = shift_encode(plaintext, 3)
print(ciphertext)
Explanation: Even though we are basically asking for the same thing, we got a different top result. The diff command shows the differences between two files. So the system failed us and presented us with an irrelevant document. Why is that? Unfortunately our IR system treats each word as independent. "Remove" and "delete" have similar meanings, but since they are different words our system will not make the connection. So, the diff manual, which mentions the word delete frequently, gets the nod ahead of other manuals, while the rm one isn't in the result set since it doesn't use the word at all.
INFORMATION EXTRACTION
Information Extraction (IE) is a method for finding occurences of object classes and relationships in text. Unlike IR systems, an IE system includes (limited) notions of syntax and semantics. While it is difficult to extract object information in a general setting, for more specific domains the system is very useful. One model of an IE system makes use of templates that match with strings in a text.
A typical example of such a model is reading prices from web pages. Prices usually appear after a dollar sign and consist of numbers, possibly followed by a decimal point and two digits. Before the price, usually there will appear a string like "price:". Let's build a sample template.
With the following regular expression (regex) we can extract prices from text:
[$][0-9]+([.][0-9][0-9])?
Where + means 1 or more occurrences and ? means at most 1 occurrence. Usually a template consists of a prefix, a target and a postfix regex. In this template, the prefix regex can be "price:", the target regex can be the above regex and the postfix regex can be empty.
A template can match with multiple strings. If this is the case, we need a way to resolve the multiple matches. Instead of having just one template, we can use multiple templates (ordered by priority) and pick the match from the highest-priority template. We can also use other ways to pick. For the dollar example, we can pick the match closer to the numerical half of the highest match. For the text "Price $90, special offer $70, shipping $5" we would pick "$70" since it is closer to the half of the highest match ("$90").
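The price template above is directly runnable with Python's re module; the group is made non-capturing here so that findall returns whole matches:

```python
import re

text = "Price $90, special offer $70, shipping $5"
matches = re.findall(r"[$][0-9]+(?:[.][0-9][0-9])?", text)
print(matches)  # ['$90', '$70', '$5']

# Resolve multiple matches: pick the price closest to half of the highest one.
values = [float(m[1:]) for m in matches]
target = max(values) / 2
pick = min(values, key=lambda v: abs(v - target))
print(pick)  # 70.0
```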
The above is called attribute-based extraction, where we want to find attributes in the text (in the example, the price). A more sophisticated extraction system aims at dealing with multiple objects and the relations between them. When such a system reads the text "$100", it should determine not only the price but also which object has that price.
Relation extraction systems can be built as a series of finite state automata. Each automaton receives as input text, performs transformations on the text and passes it on to the next automaton as input. An automata setup can consist of the following stages:
Tokenization: Segments text into tokens (words, numbers and punctuation).
Complex-word Handling: Handles complex words such as "give up", or even names like "Smile Inc.".
Basic-group Handling: Handles noun and verb groups, segmenting the text into strings of verbs or nouns (for example, "had to give up").
Complex Phrase Handling: Handles complex phrases using finite-state grammar rules. For example, "Human+PlayedChess("with" Human+)?" can be one template/rule for capturing a relation of someone playing chess with others.
Structure Merging: Merges the structures built in the previous steps.
Finite-state, template based information extraction models work well for restricted domains, but perform poorly as the domain becomes more and more general. There are many models though to choose from, each with its own strengths and weaknesses. Some of the models are the following:
Probabilistic: Using Hidden Markov Models, we can extract information in the form of prefix, target and postfix from a given text. Two advantages of using HMMs over templates is that we can train HMMs from data and don't need to design elaborate templates, and that a probabilistic approach behaves well even with noise. In a regex, if one character is off, we do not have a match, while with a probabilistic approach we have a smoother process.
Conditional Random Fields: One problem with HMMs is the assumption of state independence. CRFs are very similar to HMMs, but they don't have the latter's constraint. In addition, CRFs make use of feature functions, which act as transition weights. For example, if for observation $e_{i}$ and state $x_{i}$ we have $e_{i}$ is "run" and $x_{i}$ is the state ATHLETE, we can have $f(x_{i}, e_{i}) = 1$ and equal to 0 otherwise. We can use multiple, overlapping features, and we can even use features for state transitions. Feature functions don't have to be binary (like the above example) but they can be real-valued as well. Also, we can use any $e$ for the function, not just the current observation. To bring it all together, we weigh a transition by the sum of features.
Ontology Extraction: This is a method for compiling information and facts in a general domain. A fact can be in the form of NP is NP, where NP denotes a noun-phrase. For example, "Rabbit is a mammal".
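The feature-function idea from the CRF paragraph above can be made concrete with a toy example; the features, states and weights below are invented purely for illustration:

```python
def f_run_athlete(state, observation):
    """Binary feature: fires when an ATHLETE state emits the word 'run'."""
    return 1 if state == 'ATHLETE' and observation == 'run' else 0

def f_capitalized_name(state, observation):
    """Binary feature: fires when a NAME state emits a capitalized word."""
    return 1 if state == 'NAME' and observation[:1].isupper() else 0

# Each feature function is paired with a learned weight (made up here).
features = [(f_run_athlete, 2.0), (f_capitalized_name, 1.5)]

def transition_weight(state, observation):
    # A CRF weighs a (state, observation) pair by the weighted sum of features.
    return sum(w * f(state, observation) for f, w in features)

print(transition_weight('ATHLETE', 'run'))  # 2.0
print(transition_weight('NAME', 'Alice'))   # 1.5
print(transition_weight('NAME', 'run'))     # 0.0
```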
DECODERS
Introduction
In this section we will try to decode ciphertext using probabilistic text models. A ciphertext is obtained by performing encryption on a text message. This encryption lets us communicate safely, as anyone who has access to the ciphertext but doesn't know how to decode it cannot read the message. We will restrict our study to <b>Monoalphabetic Substitution Ciphers</b>. These are primitive forms of cipher where each letter in the message text (also known as plaintext) is replaced by another letter of the alphabet.
Shift Decoder
The Caesar cipher
The Caesar cipher, also known as the shift cipher, is a form of monoalphabetic substitution cipher where each letter is <i>shifted</i> by a fixed value. A shift by <b>n</b> in this context means that each letter in the plaintext is replaced with the letter n places down in the alphabet. For example the plaintext "ABCDWXYZ" shifted by 3 yields "DEFGZABC". Note how X became A. This is because the alphabet is cyclic, i.e. the letter after the last letter in the alphabet, Z, is the first letter of the alphabet - A.
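text.py supplies shift_encode; a minimal stand-alone equivalent (an illustrative sketch, not the library code) could look like this:

```python
def shift_encode_sketch(plaintext, n):
    """Shift each letter n places down the alphabet, wrapping from Z back to A."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return ''.join(out)

print(shift_encode_sketch('ABCDWXYZ', 3))  # DEFGZABC
```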
End of explanation
print(bigrams('this is a sentence'))
Explanation: Decoding a Caesar cipher
To decode a Caesar cipher we exploit the fact that not all letters in the alphabet are used equally. Some letters are used more than others and some pairs of letters are more probable to occur together. We call a pair of consecutive letters a <b>bigram</b>.
End of explanation
%psource ShiftDecoder
Explanation: We use CountingProbDist to get the probability distribution of bigrams. In the latin alphabet consists of only only 26 letters. This limits the total number of possible substitutions to 26. We reverse the shift encoding for a given n and check how probable it is using the bigram distribution. We try all 26 values of n, i.e. from n = 0 to n = 26 and use the value of n which gives the most probable plaintext.
End of explanation
plaintext = "This is a secret message"
ciphertext = shift_encode(plaintext, 13)
print('The code is', '"' + ciphertext + '"')
flatland = open_data("EN-text/flatland.txt").read()
decoder = ShiftDecoder(flatland)
decoded_message = decoder.decode(ciphertext)
print('The decoded message is', '"' + decoded_message + '"')
Explanation: Example
Let us encode a secret message using the Caesar cipher and then try decoding it using ShiftDecoder. We will again use flatland.txt to build the text model
End of explanation
psource(PermutationDecoder)
Explanation: Permutation Decoder
Now let us try to decode messages encrypted by a general monoalphabetic substitution cipher. The letters in the alphabet can be replaced by any permutation of letters. For example if the alphabet consisted of {A B C} then it can be replaced by {A C B}, {B A C}, {B C A}, {C A B}, {C B A} or even {A B C} itself. Suppose we choose the permutation {C B A}, then the plain text "CAB BA AAC" would become "ACB BC CCA". We can see that the Caesar cipher is also a form of permutation cipher where the permutation is a cyclic permutation. Unlike the Caesar cipher, it is infeasible to try all possible permutations. The number of possible permutations in the Latin alphabet is 26! which is of the order $10^{26}$. We use graph search algorithms to search for a 'good' permutation.
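Applying (and inverting) a substitution permutation is straightforward with str.maketrans; this reproduces the {C B A} example above:

```python
# The permutation {C B A} over alphabet {A B C}: A->C, B->B, C->A.
permutation = str.maketrans('ABC', 'CBA')

plaintext = 'CAB BA AAC'
ciphertext = plaintext.translate(permutation)
print(ciphertext)  # ACB BC CCA

# Decoding just inverts the mapping -- but with 26 letters there are 26!
# possible permutations, hence the need for search rather than brute force.
inverse = str.maketrans('CBA', 'ABC')
assert ciphertext.translate(inverse) == plaintext
```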
End of explanation
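The {C B A} example above can be checked directly with str.translate — a minimal sketch:

```python
# The alphabet {A B C} replaced by {C B A}: A->C, B->B, C->A.
perm = str.maketrans("ABC", "CBA")

msg = "CAB BA AAC"
encoded = msg.translate(perm)
print(encoded)  # ACB BC CCA

# Applying the inverse permutation recovers the plaintext.
inverse = str.maketrans("CBA", "ABC")
print(encoded.translate(inverse))  # CAB BA AAC
```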
ciphertexts = ['ahed world', 'ahed woxld']
pd = PermutationDecoder(canonicalize(flatland))
for ctext in ciphertexts:
print('"{}" decodes to "{}"'.format(ctext, pd.decode(ctext)))
Explanation: Each state/node in the graph is represented as a letter-to-letter map. If there is no mapping for a letter, the letter is unchanged in the permutation. These maps are stored as dictionaries. Each dictionary is a 'potential' permutation. We use the word 'potential' because not every dictionary necessarily represents a valid permutation, since a permutation cannot have repeating elements. For example, the dictionary {'A': 'B', 'C': 'X'} is invalid because 'A' is replaced by 'B', but so is 'B' (since the dictionary has no mapping for 'B', it stays as 'B'). Two dictionaries can also represent the same permutation, e.g. {'A': 'C', 'C': 'A'} and {'A': 'C', 'B': 'B', 'C': 'A'} both interchange 'A' and 'C' and leave all other letters unaltered. To ensure we get a valid permutation, a goal state must map all letters in the alphabet. We also prevent repetitions in the permutation by allowing only those actions which lead to a new state/node in which the newly added letter maps to a previously unmapped letter. These two rules together ensure that the dictionary of a goal state represents a valid permutation.
The score of a state is determined using word scores, unigram scores, and bigram scores. Experiment with different weightings for the word, unigram and bigram scores and see how they affect the decoding.
End of explanation |
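The two validity rules described above can be expressed as a small predicate; a sketch over a toy three-letter alphabet (our names, not the library's — the full alphabet version would just use all 26 letters):

```python
def is_valid_goal(mapping, alphabet="ABC"):
    """Goal test sketch: every letter is mapped, and no two letters map to the same one."""
    maps_all = set(mapping) == set(alphabet)
    no_repeats = len(set(mapping.values())) == len(mapping)
    return maps_all and no_repeats

print(is_valid_goal({"A": "C", "B": "B", "C": "A"}))  # True  -- a real permutation
print(is_valid_goal({"A": "C", "C": "A"}))            # False -- 'B' is unmapped
print(is_valid_goal({"A": "B", "B": "B", "C": "A"}))  # False -- 'B' is used twice
```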
9,648 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
how you can incorporate a preprocessing layer into a classification network and train it using a dataset
| Python Code::
from tensorflow.keras.utils import image_dataset_from_directory
import tensorflow as tf
import matplotlib.pyplot as plt
PATH='.../Citrus/Leaves' # modify to your path
ds = image_dataset_from_directory(PATH,
validation_split=0.2, subset="training",
image_size=(256,256), interpolation="mitchellcubic",
crop_to_aspect_ratio=True,
seed=42, shuffle=True, batch_size=32)
AUTOTUNE = tf.data.AUTOTUNE
ds = ds.cache().prefetch(buffer_size=AUTOTUNE)
num_classes = 5
model = tf.keras.Sequential([
tf.keras.layers.RandomFlip("horizontal_and_vertical"),
tf.keras.layers.RandomRotation(0.2),
tf.keras.layers.Rescaling(1/127.5, offset=-1),  # map [0, 255] pixels to [-1, 1]
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(num_classes)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(ds, epochs=3)
|
9,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
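Since the scaling factors are kept, any standardized value can be mapped back with x = z·std + mean. A quick numpy check of the round trip (a small illustration, not the project code):

```python
import numpy as np

vals = np.array([10.0, 20.0, 30.0, 40.0])
m, s = vals.mean(), vals.std()   # note: numpy's std uses ddof=0, while pandas defaults to ddof=1
z = (vals - m) / s               # standardize: zero mean, unit std
recovered = z * s + m            # ...and go backwards with the saved factors

print(np.allclose(recovered, vals))  # True
```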
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### Set self.activation_function to the sigmoid function ####
self.activation_function = lambda x : 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# Output error
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# Calculate the hidden layer's contribution to the error
hidden_error = error * self.weights_hidden_to_output
# Backpropagated error terms
output_error_term = error
hidden_error_term = hidden_error.T * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# Update the weights
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
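On the hint above: the slope of $y = x$ is simply 1, so the output node's error passes straight through. The sigmoid's derivative, $\sigma(x)(1-\sigma(x))$, used in the backward pass, can be sanity-checked numerically with finite differences (a standalone check, not part of the project code):

```python
import numpy as np

sigmoid = lambda t: 1 / (1 + np.exp(-t))

xs = np.linspace(-3, 3, 7)
h = 1e-6

# The finite-difference slope of f(x) = x is 1 everywhere...
assert np.allclose(((xs + h) - (xs - h)) / (2 * h), 1.0)

# ...and the analytic derivative sigma(x) * (1 - sigma(x)) matches the numeric slope.
analytic = sigmoid(xs) * (1 - sigmoid(xs))
numeric = (sigmoid(xs + h) - sigmoid(xs - h)) / (2 * h)
assert np.allclose(analytic, numeric, atol=1e-8)
```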
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 6000
learning_rate = 1.2
hidden_nodes = 12
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
9,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preparation
SciKit
We are using the brand new 0.16.1
Data Preparation
trainLabels.csv is provided by Kaggle; mix_lbp.csv contains the features extracted for this learning.
Utilities
Developed to facilitate this
Step1: An auxiliary function
Let's define a function that plots the confusion matrix to see how accurate our predictions really are
Step2: Load data from the text file
Loaded data contains all of the training examples.
<p>__NOTE
Step3: The last line above does the following
Step4: <font color='green'>SKSupervisedLearning</font> wraps the sklearn grid search technique for searching for optimal parameters in one call. You can take a look at the implementation details on GitHub.
Let's plot the confusion matrix to see how well we are doing (values inside squares are %'s)
Step5: As expected, we are not doing so well in class 5 where there are very few samples.
Train Neural Net
This is a fun one, I promise.
Step6: Train Random Forest with Calibration
Finally, we train the random forest (which happens to train in seconds) with the calibration classifier (which takes 2 hours or so)
Random forests are very accurate; the problem is that they make over-confident predictions (or at least that is what the <font color='green'>predict_proba</font> function, which is supposed to return the probability of each class, gives us). So, god forbid we are ever wrong! If it predicts a probability of 0 for the correct class, log loss goes to infinity. The calibration classifier makes <font color='green'>predict_proba</font> return something sane.
Step7: As you can see, we are simply wrapping the $\color{green}{RandomForestClassifier}$ in the $\color{green}{CalibratedClassifier}$. Plot the matrix (after a couple of hours)
Step8: Voting
Now we can gather the results of our experiments and blend them. We use a simple weighted voting scheme for that.
$\color{green}{vote}$ function is implemented in tr_utils.py.
Here we are trying to balance the weights of SVM and calibrated RF. | Python Code:
from SupervisedLearning import SKSupervisedLearning
from train_files import TrainFiles
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import log_loss, confusion_matrix
from sklearn.calibration import CalibratedClassifierCV
from tr_utils import vote
import matplotlib.pylab as plt
import numpy as np
from train_nn import createDataSets, train
Explanation: Preparation
SciKit
We are using the brand new 0.16.1
Data Preparation
trainLabels.csv is provided by Kaggle; mix_lbp.csv contains the features extracted for this learning.
Utilities
Developed to facilitate this:
- tr_utils.py - often used functions
- train_files.py - aids in file manipulation (loading features)
- SupervisedLearning.py - thin wrapper arround scikit supervised learning algorithms
- train_nn.py - neural net using PyBrain
Methodology
We are running three classifiers and blending their results:
1. SVM
2. Neural net
3. Random forest calibrated by the CalibratedClassifierCV available in scikit 0.16.1
mix_lbp.csv contains all the features (below) used for the experiments (as well as for final competition participation). It contains all the samples of the training set. We split this set 9:1, where 90% is used for training and 10% for validation.
We then run three different classifiers on these sets (training on the larger one and validating on the smaller one), and try to blend the results by simple voting in order to decrease the resulting log loss (computed on the validation dataset).
Feature Selection
We are training with two types of features:
1. Features selected from the binary files as described in the 1dlbp article. These produce a histogram of 256 bins for each file
2. Features selected from .asm files are binary features, indicating whether a given file contains a certain Windows API. The top 141 APIs are picked for this purpose
3. The number of subroutines in each file (one additional feature), giving us a 398-dimensional feature vector in total
End of explanation
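For intuition, here is one plausible reading of the 1D LBP descriptor from the cited article — an 8-bit code per byte position, comparing four neighbours on each side to the centre. This is a sketch of the idea, not necessarily the exact extraction code behind mix_lbp.csv:

```python
import numpy as np

def lbp_1d_histogram(stream, radius=4):
    """One 8-bit code per interior byte: each of the 8 neighbours >= centre sets one bit."""
    stream = np.asarray(stream, dtype=np.int64)
    weights = 1 << np.arange(2 * radius)          # bit weights 1, 2, 4, ..., 128
    hist = np.zeros(1 << (2 * radius), dtype=np.int64)
    for i in range(radius, len(stream) - radius):
        neighbours = np.concatenate([stream[i - radius:i], stream[i + 1:i + 1 + radius]])
        code = int(((neighbours >= stream[i]) * weights).sum())
        hist[code] += 1
    return hist

h = lbp_1d_histogram(np.arange(20) % 7)  # a toy 20-byte "file"
print(len(h), h.sum())  # 256 bins, one code per interior position (12 here)
```

Binning the codes this way yields the 256-bin histogram used as the first block of features.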
def plot_confusion(sl):
conf_mat = confusion_matrix(sl.Y_test, sl.clf.predict(sl.X_test_scaled)).astype(dtype='float')
norm_conf_mat = conf_mat / conf_mat.sum(axis = 1)[:, None]
fig = plt.figure()
plt.clf()
ax = fig.add_subplot(111)
ax.set_aspect(1)
res = ax.imshow(norm_conf_mat, cmap=plt.cm.jet,
interpolation='nearest')
cb = fig.colorbar(res)
labs = np.unique(Y_test)
x = labs - 1
plt.xticks(x, labs)
plt.yticks(x, labs)
for i in x:
for j in x:
ax.text(i - 0.2, j + 0.2, "{:3.0f}".format(norm_conf_mat[j, i] * 100.))
return conf_mat
Explanation: An auxiliary function
Let's define a function that plots the confusion matrix to see how accurate our predictions really are
End of explanation
train_path_mix = "./mix_lbp.csv"
labels_file = "./trainLabels.csv"
X, Y_train, Xt, Y_test = TrainFiles.from_csv(train_path_mix, test_size = 0.1)
Explanation: Load data from the text file
Loaded data contains all of the training examples.
<p>__NOTE:__ Actually _almost_ all. 8 are missing, because binary features could not be extracted from them.
End of explanation
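TrainFiles.from_csv itself lives in train_files.py and isn't reproduced here; conceptually it amounts to something like this numpy sketch (labels in the last column, shuffled 90/10 split — the function name and details below are hypothetical):

```python
import numpy as np

def from_csv_like(data, test_size=0.1, seed=0):
    """Split a (samples x features+label) array into train/test feature and label sets."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(data))
    n_test = int(len(data) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    features, labels = data[:, :-1], data[:, -1]
    return features[train_idx], labels[train_idx], features[test_idx], labels[test_idx]

toy = np.arange(200.0).reshape(20, 10)   # 20 samples: 9 features + 1 label each
Xa, ya, Xb, yb = from_csv_like(toy)
print(Xa.shape, Xb.shape)  # (18, 9) (2, 9)
```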
sl = SKSupervisedLearning(SVC, X, Y_train, Xt, Y_test)
sl.fit_standard_scaler()
sl.train_params = {'C': 100, 'gamma': 0.01, 'probability' : True}
ll_trn, ll_tst = sl.fit_and_validate()
print "SVC log loss: ", ll_tst
Explanation: The last line above does the following:
1. Loads the examples from the csv files, assuming the labels are in the last column
2. Splits the results in a training and a validation dataset using sklearn magic, based on the test_size parameter (defaults to 0.1). $test_size \in [0, 1]$.
3. Returns two tuples: (training, training_labels), (testing, testing_labels)
Training
Training consists of training three models:
1. SVM with an RBF kernel
2. Random forest with the calibration classifier introduced in scikit 0.16.0
3. Neural net
Train SVM
We neatly wrap this into our $\color{green}{SKSupervisedLearning}$ class
The procedure is simple:
1. Instantiate the class with the tuple returned by the TrainFiles instance or method above, and the desired classifier
2. Apply standard scaling (in scikit this is the Z-score scaling which centers the samples and reduces std to 1).
NOTE: This is what the SVM classifier expects
3. Set training parameters
4. Call $\color{green}{fit\_and\_validate()}$ to retrieve the $\color{green}{log\_loss}$. This function will compute the log loss on the validation dataset.
End of explanation
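For reference, the multi-class log loss reported throughout (this is sklearn's log_loss definition) is

```latex
\mathrm{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij}\,\log p_{ij}
```

where $N$ is the number of validation samples, $M$ the number of classes, $y_{ij} = 1$ exactly when sample $i$ belongs to class $j$, and $p_{ij}$ is the predicted probability of that assignment.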
%matplotlib inline
conf_svm = plot_confusion(sl)
Explanation: <font color='green'>SKSupervisedLearning</font> wraps the sklearn grid search technique for searching for optimal parameters in one call. You can take a look at the implementation details on GitHub.
Let's plot the confusion matrix to see how well we are doing (values inside squares are %'s):
End of explanation
%matplotlib inline
trndata, tstdata = createDataSets(sl.X_train_scaled, Y_train, sl.X_test_scaled, Y_test)
fnn = train(trndata, tstdata, epochs = 10, test_error = 0.07, momentum = 0.15, weight_decay = 0.0001)
Explanation: As expected, we are not doing so well in class 5 where there are very few samples.
Train Neural Net
This is a fun one, I promise. :)
The neural net is built with PyBrain and has just one hidden layer, whose size is $\frac{1}{4}$ of the input layer. The hidden layer activation is sigmoid, the output is softmax (since this is a multi-class net), and there are bias units for the hidden and the output layers. We use the PyBrain $\color{green}{buildNetwork()}$ function, which builds the network in one call.
<p>__NOTE:__ We are still using all the scaled features to train the neural net
On the left - average error, on the right - log loss.
This starts to overfit pretty fast, though.
End of explanation
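The softmax output layer mentioned above turns the net's raw outputs into a probability distribution over the classes; a numerically stable numpy sketch (illustration only, not the PyBrain internals):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtract the max before exponentiating."""
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(round(probs.sum(), 6), probs.argmax())  # 1.0 0 -- a distribution; biggest raw output wins
```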
sl_ccrf = SKSupervisedLearning(CalibratedClassifierCV, X, Y_train, Xt, Y_test)
sl_ccrf.train_params = \
{'base_estimator': RandomForestClassifier(**{'n_estimators' : 7500, 'max_depth' : 200}), 'cv': 10}
sl_ccrf.fit_standard_scaler()
ll_ccrf_trn, ll_ccrf_tst = sl_ccrf.fit_and_validate()
print "Calibrated log loss: ", ll_ccrf_tst
Explanation: Train Random Forest with Calibration
Finally, we train the random forest (which happens to train in seconds) with the calibration classifier (which takes 2 hours or so)
Random forests are very accurate; the problem is that they make over-confident predictions (or at least that is what the <font color='green'>predict_proba</font> function, which is supposed to return probabilities of each class, gives us). So, god forbid we are ever wrong! Since it predicts a probability of 0 on the correct class, the log loss goes to infinity. The calibration classifier makes <font color='green'>predict_proba</font> return something sane.
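A tiny numeric sketch of why a hard 0 on the true class is fatal for log loss (the probabilities are made up; the clipping floor below mimics what sklearn's log_loss does internally so the penalty stays finite):

```python
import math

def log_loss_one(y_true_idx, proba, eps=1e-15):
    # Penalty for a single sample: -log(probability assigned to the true class),
    # with clipping so a hard 0 does not produce infinity
    p = min(max(proba[y_true_idx], eps), 1 - eps)
    return -math.log(p)

calibrated = [0.6, 0.3, 0.1]      # hedged prediction, true class = 0
overconfident = [0.0, 1.0, 0.0]   # hard 0 on the true class

print(log_loss_one(0, calibrated))     # ~0.51
print(log_loss_one(0, overconfident))  # ~34.5, dominated by the clipping floor
```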
End of explanation
%matplotlib inline
conf_ccrf = plot_confusion(sl_ccrf)
Explanation: As you can see, we are simply wrapping the $\color{green}{RandomForestClassifier}$ in the $\color{green}{CalibratedClassifier}$. Plot the matrix (after a couple of hours):
End of explanation
%matplotlib inline
x = 1. / np.arange(1., 6)
y = 1 - x
xx, yy = np.meshgrid(x, y)
lls1 = np.zeros(xx.shape[0] * yy.shape[0]).reshape(xx.shape[0], yy.shape[0])
lls2 = np.zeros(xx.shape[0] * yy.shape[0]).reshape(xx.shape[0], yy.shape[0])
for i, x_ in enumerate(x):
for j, y_ in enumerate(y):
proba = vote([sl.proba_test, sl_ccrf.proba_test], [x_, y_])
lls1[i, j] = log_loss(Y_test, proba)
proba = vote([sl.proba_test, sl_ccrf.proba_test], [y_, x_])
lls2[i, j] = log_loss(Y_test, proba)
fig = plt.figure()
plt.clf()
ax = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
ax.set_aspect(1)
ax1.set_aspect(1)
res = ax.imshow(lls1, cmap=plt.cm.jet,
interpolation='nearest')
res = ax1.imshow(lls2, cmap=plt.cm.jet,
interpolation='nearest')
cb = fig.colorbar(res)
Explanation: Voting
Now we can gather the results of our experiments and blend them. We use a simple weighted voting scheme for that.
$\color{green}{vote}$ function is implemented in tr_utils.py.
Here we are trying to balance the weights of SVM and calibrated RF.
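tr_utils.py is not shown here, so the following is only a plausible minimal sketch of such a weighted vote (the name and signature mirror the calls above, but the real implementation may differ):

```python
import numpy as np

def vote(probas, weights):
    # Weighted average of per-classifier probability matrices,
    # renormalized so each row sums to 1 again
    blended = sum(w * p for p, w in zip(probas, weights))
    return blended / blended.sum(axis=1, keepdims=True)

# Two fake 2-sample, 3-class probability matrices
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p2 = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]])
blend = vote([p1, p2], [0.75, 0.25])
print(blend)
```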
End of explanation |
9,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Isentropic Analysis
The MetPy function mpcalc.isentropic_interpolation allows for isentropic analysis from model
analysis data in isobaric coordinates.
Step1: Getting the data
In this example, NARR reanalysis data for 18 UTC 04 April 1987 from the National Centers
for Environmental Information (https://www.ncdc.noaa.gov/data-access/model-data) will be used.
Step2: We will reduce the dimensionality of the data as it is pulled in to remove an empty time
dimension.
Step3: To properly interpolate to isentropic coordinates, the function must know the desired output
isentropic levels. An array with these levels will be created below.
Step4: Conversion to Isentropic Coordinates
Once three dimensional data in isobaric coordinates has been pulled and the desired
isentropic levels created, the conversion to isentropic coordinates can begin. Data will be
passed to the function as below. The function requires that isentropic levels, isobaric
levels, and temperature be input. Any additional inputs (in this case relative humidity, u,
and v wind components) will be linearly interpolated to isentropic space.
Step5: The output is a list, so now we will separate the variables to different names before
plotting.
Step6: A quick look at the shape of these variables will show that the data is now in isentropic
coordinates, with the number of vertical levels as specified above.
Step7: Converting to Relative Humidity
The NARR only gives specific humidity on isobaric vertical levels, so relative humidity will
have to be calculated after the interpolation to isentropic space.
Step8: Plotting the Isentropic Analysis
Step9: Montgomery Streamfunction
The Montgomery Streamfunction, ${\psi} = gZ + C_pT$, is often desired because its
gradient is proportional to the geostrophic wind in isentropic space. This can be easily
calculated with mpcalc.montgomery_streamfunction. | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, add_timestamp
from metpy.units import units
Explanation: Isentropic Analysis
The MetPy function mpcalc.isentropic_interpolation allows for isentropic analysis from model
analysis data in isobaric coordinates.
End of explanation
data = xr.open_dataset(get_test_data('narr_example.nc', False))
print(list(data.variables))
Explanation: Getting the data
In this example, NARR reanalysis data for 18 UTC 04 April 1987 from the National Centers
for Environmental Information (https://www.ncdc.noaa.gov/data-access/model-data)
will be used.
End of explanation
# Assign data to variable names
lat = data['lat']
lon = data['lon']
lev = data['isobaric']
times = data['time']
tmp = data['Temperature'][0]
uwnd = data['u_wind'][0]
vwnd = data['v_wind'][0]
spech = data['Specific_humidity'][0]
# pint doesn't understand gpm
data['Geopotential_height'].attrs['units'] = 'meter'
hgt = data['Geopotential_height'][0]
Explanation: We will reduce the dimensionality of the data as it is pulled in to remove an empty time
dimension.
End of explanation
isentlevs = [296.] * units.kelvin
Explanation: To properly interpolate to isentropic coordinates, the function must know the desired output
isentropic levels. An array with these levels will be created below.
End of explanation
isent_anal = mpcalc.isentropic_interpolation(isentlevs,
lev,
tmp,
spech,
uwnd,
vwnd,
hgt,
tmpk_out=True)
Explanation: Conversion to Isentropic Coordinates
Once three dimensional data in isobaric coordinates has been pulled and the desired
isentropic levels created, the conversion to isentropic coordinates can begin. Data will be
passed to the function as below. The function requires that isentropic levels, isobaric
levels, and temperature be input. Any additional inputs (in this case relative humidity, u,
and v wind components) will be linearly interpolated to isentropic space.
End of explanation
isentprs, isenttmp, isentspech, isentu, isentv, isenthgt = isent_anal
isentu.ito('kt')
isentv.ito('kt')
Explanation: The output is a list, so now we will separate the variables to different names before
plotting.
End of explanation
print(isentprs.shape)
print(isentspech.shape)
print(isentu.shape)
print(isentv.shape)
print(isenttmp.shape)
print(isenthgt.shape)
Explanation: A quick look at the shape of these variables will show that the data is now in isentropic
coordinates, with the number of vertical levels as specified above.
End of explanation
isentrh = 100 * mpcalc.relative_humidity_from_specific_humidity(isentspech, isenttmp, isentprs)
Explanation: Converting to Relative Humidity
The NARR only gives specific humidity on isobaric vertical levels, so relative humidity will
have to be calculated after the interpolation to isentropic space.
End of explanation
# Set up our projection
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
# Coordinates to limit map area
bounds = [(-122., -75., 25., 50.)]
# Choose a level to plot, in this case 296 K
level = 0
fig = plt.figure(figsize=(17., 12.))
add_metpy_logo(fig, 120, 245, size='large')
ax = fig.add_subplot(1, 1, 1, projection=crs)
ax.set_extent(*bounds, crs=ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Plot the surface
clevisent = np.arange(0, 1000, 25)
cs = ax.contour(lon, lat, isentprs[level, :, :], clevisent,
colors='k', linewidths=1.0, linestyles='solid', transform=ccrs.PlateCarree())
ax.clabel(cs, fontsize=10, inline=1, inline_spacing=7,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Plot RH
cf = ax.contourf(lon, lat, isentrh[level, :, :], range(10, 106, 5),
cmap=plt.cm.gist_earth_r, transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', extend='max', aspect=65, shrink=0.5, pad=0.05,
extendrect='True')
cb.set_label('Relative Humidity', size='x-large')
# Plot wind barbs
ax.barbs(lon.values, lat.values, isentu[level, :, :].m, isentv[level, :, :].m, length=6,
regrid_shape=20, transform=ccrs.PlateCarree())
# Make some titles
ax.set_title('{:.0f} K Isentropic Pressure (hPa), Wind (kt), Relative Humidity (percent)'
.format(isentlevs[level].m), loc='left')
add_timestamp(ax, times[0].dt, y=0.02, high_contrast=True)
fig.tight_layout()
Explanation: Plotting the Isentropic Analysis
End of explanation
# Calculate Montgomery Streamfunction and scale by 10^-2 for plotting
msf = mpcalc.montgomery_streamfunction(isenthgt, isenttmp) / 100.
# Choose a level to plot, in this case 296 K
level = 0
fig = plt.figure(figsize=(17., 12.))
add_metpy_logo(fig, 120, 250, size='large')
ax = plt.subplot(111, projection=crs)
ax.set_extent(*bounds, crs=ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES.with_scale('50m'), linewidth=0.5)
# Plot the surface
clevmsf = np.arange(0, 4000, 5)
cs = ax.contour(lon, lat, msf[level, :, :], clevmsf,
colors='k', linewidths=1.0, linestyles='solid', transform=ccrs.PlateCarree())
ax.clabel(cs, fontsize=10, inline=1, inline_spacing=7,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Plot RH
cf = ax.contourf(lon, lat, isentrh[level, :, :], range(10, 106, 5),
cmap=plt.cm.gist_earth_r, transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', extend='max', aspect=65, shrink=0.5, pad=0.05,
extendrect='True')
cb.set_label('Relative Humidity', size='x-large')
# Plot wind barbs.
ax.barbs(lon.values, lat.values, isentu[level, :, :].m, isentv[level, :, :].m, length=6,
regrid_shape=20, transform=ccrs.PlateCarree())
# Make some titles
ax.set_title('{:.0f} K Montgomery Streamfunction '.format(isentlevs[level].m) +
r'($10^{-2} m^2 s^{-2}$), ' +
'Wind (kt), Relative Humidity (percent)', loc='left')
add_timestamp(ax, times[0].dt, y=0.02, pretext='Valid: ', high_contrast=True)
fig.tight_layout()
plt.show()
Explanation: Montgomery Streamfunction
The Montgomery Streamfunction, ${\psi} = gZ + C_pT$, is often desired because its
gradient is proportional to the geostrophic wind in isentropic space. This can be easily
calculated with mpcalc.montgomery_streamfunction.
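As a rough magnitude check, the streamfunction can be evaluated by hand at one point (the height and temperature below are sample values for illustration, not taken from the NARR data):

```python
# Montgomery streamfunction at a single point: psi = g*Z + Cp*T
g = 9.80665        # m s^-2
cp = 1004.5        # J kg^-1 K^-1, dry-air specific heat at constant pressure
Z = 5000.0         # geopotential height in m (sample value)
T = 250.0          # temperature in K (sample value)

psi = g * Z + cp * T
print(psi)  # ~3.0e5 m^2 s^-2
```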
End of explanation |
9,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyze Satisfaction Survey Responses
An example demonstration of a typical analysis workflow in DSX leveraging Spark, Watson APIs, Pandas, and various visualization libraries.
<img src='https://www-935.ibm.com/services/image/watson_data_platform_banner_running.jpg' width="100%" height="50%"></img>
Step1: Import packages to the notebook
Import the requests, base64, StringIO, pandas, SparkContext, json, and re packages to use in the notebook. The pandas package is traditionally imported as pd
Step2: Access Object Storage
Because the surveys.csv file is located in Object Storage, you need to define a helper function to access the data file that you loaded. Run the following cell to define the method get_file_content()
Step3: Load data into pandas DataFrame
Run the next cell to load the data into a pandas DataFrame
Step4: <a id="explore_data"></a>
4. Explore data
Show the first five and the last five rows of the data by using the head() and tail() methods. Run each code cell
Step5: Each row in the table lists
Step6: Unique Survey Sections
Let's identify the unique survey categories of response.
Step7: Data Transformations
Now that we have a basic understanding of our data, we will want to move through the data and clean it up for easier processing in our sentiment and tonal analysis.
Lowercase
Convert the comments to lowercase.
Step10: Remove Punctuation and Whitespace and Stopwords
Let's remove punctuation and whitespace so we can omit them from our analysis. We should also remove common words from our analysis for term frequencies.
Step11: Indexes
Let's build an index by dept so we can summarize all surveys together easier.
Step12: Convert to Spark Data Frame from Pandas
So far we've explored the data and performed several manipulations in Pandas. Pandas is a great library, but it will not provide us with the distributed computing facilities we require for large scale analysis. Let's move this into Spark and see what we can work with.
Once in Spark we can visualize our dataset with the excellent Pixiedust library by IBM.
Step13: Counts of Surveys by Section
Let's graph some basic counts of surveys with Pixiedust. Note how you can change the graph type graphically.
Step14: Machine Learning
Now that we have an idea of our data, we can move into our actual modeling and analysis. An interesting thought exercise might be to determine what words are mostly associated with a good sentiment or a bad sentiment.
We will use the Spark MLlib library Word2Vec to find the cosine distance synonyms for terms 'good' and 'bad'.
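Under the hood these "synonyms" are just nearest neighbors by cosine similarity between word vectors; a toy version with invented 3-d vectors (real vectors come from the fitted Word2Vec model):

```python
import numpy as np

# Toy word vectors (made up; real ones are learned by Word2Vec)
vecs = {
    "good":  np.array([0.9, 0.1, 0.2]),
    "great": np.array([0.8, 0.2, 0.1]),
    "bad":   np.array([-0.7, 0.6, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs["good"], vecs["great"]))  # close to 1
print(cosine(vecs["good"], vecs["bad"]))    # much lower
```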
Step15: Most Associated Good Synonyms
Step16: Word Clouds
Word clouds are a nice human way to understand the relationship and weighting of sentiment in a free text field. They're not scientific, but they do help us to dig deeper.
Word Clouds for Good Terms
Step17: Most Associated Bad Synonyms
Step18: Watson Tonal Analysis
Sentiment analysis is relatively trivial at this stage, and most survey vendors provide a basic NLP implementation. I feel we can go deeper and perhaps find a larger insight with more sophisticated analysis. Let's leverage IBM's Watson ML platform via RESTful API calls and see what we can determine.
Tone Analyzer uses linguistic analysis to detect three types of tones in written text
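The response shape that the parsing code below relies on can be mocked with a minimal fake payload (the scores here are invented, purely to show the nesting):

```python
# Minimal fake Tone Analyzer payload mirroring only the fields parsed below
fake = {
    "document_tone": {
        "tone_categories": [
            {"category_name": "Emotion Tone",
             "tones": [{"tone_name": "Joy", "score": 0.62},
                       {"tone_name": "Anger", "score": 0.08}]}
        ]
    }
}

rows = []
for cat in fake["document_tone"]["tone_categories"]:
    for t in cat["tones"]:
        rows.append((t["tone_name"], round(t["score"] * 100, 1)))
print(rows)  # [('Joy', 62.0), ('Anger', 8.0)]
```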
Step19: Thank you for your time! | Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
Explanation: Analyze Satisfaction Survey Responses
An example demonstration of a typical analysis workflow in DSX leveraging Spark, Watson APIs, Pandas, and various visualization libraries.
<img src='https://www-935.ibm.com/services/image/watson_data_platform_banner_running.jpg' width="100%" height="50%"></img>
Code Toggle
Hide code cells for cleaner output with a selectable toggle.
End of explanation
import nltk, requests, StringIO, pandas as pd, pprint, json, re, sys
from pyspark import SparkContext
from pixiedust.display import *
!pip install wordcloud --user nltk
from wordcloud import WordCloud, STOPWORDS
Explanation: Import packages to the notebook
Import the requests, base64, StringIO, pandas, SparkContext, json, and re packages to use in the notebook. The pandas package is traditionally imported as pd:
End of explanation
# The code was removed by DSX for sharing.
# The code was removed by DSX for sharing.
Explanation: Access Object Storage
Because the surveys.csv file is located in Object Storage, you need to define a helper function to access the data file that you loaded. Run the following cell to define the method get_file_content():
Insert data source credentials
The credentials for accessing the surveys.csv file are added to the cell in a function. With these credentials, you can use the helper function to load the data file into a pandas.DataFrame.
Note: When you select the Insert to code function, a code cell with a dictionary is created for you. Adjust the credentials in the Python dictionary to correspond with the credentials inserted by the Insert to code function and run the dictionary code cell. The access credentials to the Object Storage instance in the dictionary are provided for convenience for later usage.
End of explanation
surveys_df = pd.read_csv(get_object_storage_file_with_credentials_bba657a78df141959b30542141270d03('WSUSymposium', 'surveys.csv'))
Explanation: Load data into pandas DataFrame
Run the next cell to load the data into a pandas DataFrame:
End of explanation
surveys_df.head()
surveys_df.tail()
Explanation: <a id="explore_data"></a>
4. Explore data
Show the first five and the last five rows of the data by using the head() and tail() methods. Run each code cell:
End of explanation
surveys_df.dept.unique()
Explanation: Each row in the table lists:
The facility where the survey was done.
The type of survey involved.
Unique Facilities
How many discrete facilities are captured in our dataset?
End of explanation
surveys_df.SectionName.unique()
Explanation: Unique Survey Sections
Let's identify the unique survey categories of response.
End of explanation
surveys_df['CommentText'] = surveys_df['CommentText'].str.lower()
surveys_df.head()
Explanation: Data Transformations
Now that we have a basic understanding of our data, we will want to move through the data and clean it up for easier processing in our sentiment and tonal analysis.
Lowercase
Convert the comments to lowercase.
End of explanation
nltk.download("stopwords")
def removePunctuation(text):
    """Remove punctuation, change to lowercase, and split into words.

    Note:
        Only spaces, letters, and numbers should be retained. Other characters should be
        eliminated. (e.g. it's becomes its)

    Args:
        text (str): A string.

    Returns:
        list: The cleaned-up words.
    """
    letters_only = re.sub("[^a-zA-Z]", " ", text)
    words = letters_only.lower().split()
    return words

def removeWords(text):
    """Remove common English stopwords from a list of words.

    Args:
        text (list): A list of words.

    Returns:
        str: The cleaned-up string, with stopwords dropped.
    """
    from nltk.corpus import stopwords  # Import the stop word list
    stops = set(stopwords.words("english"))
    meaningful_words = [w for w in text if not w in stops]
    return " ".join(meaningful_words)
surveys_df['CommentText'] = surveys_df['CommentText'].apply(removePunctuation)
surveys_df_trim = surveys_df
surveys_df_trim['CommentText'] = surveys_df['CommentText'].apply(removeWords)
Explanation: Remove Punctuation and Whitespace and Stopwords
Let's remove punctuation and whitespace so we can omit them from our analysis. We should also remove common words from our analysis for term frequencies.
End of explanation
surveys_df = surveys_df.set_index(surveys_df["dept"])
surveys_df.drop(['dept'], axis=1, inplace=True)
Explanation: Indexes
Let's build an index by dept so we can summarize all surveys together easier.
End of explanation
from pyspark.sql import SQLContext
print sc
sqlCtx = SQLContext(sc)
spark_df = sqlCtx.createDataFrame(surveys_df)
display(spark_df)
Explanation: Convert to Spark Data Frame from Pandas
So far we've explored the data and performed several manipulations in Pandas. Pandas is a great library, but it will not provide us with the distributed computing facilities we require for large scale analysis. Let's move this into Spark and see what we can work with.
Once in Spark we can visualize our dataset with the excellent Pixiedust library by IBM.
End of explanation
countsBySection = spark_df.groupby("SectionName").count()
display(countsBySection)
import brunel
words = spark_df.select("CommentText").rdd.map(lambda r: r[0])
counts = words.flatMap(lambda line: re.split('\W+', line.lower().strip())) \
.filter(lambda x: len(x) > 2) \
.map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b) \
.map(lambda x: (x[1], x[0])).sortByKey(False) \
.map(lambda x: (x[1], x[0]))
counts_df = pd.DataFrame(counts.take(10))
#counts_df = counts_df.transpose()
cols = ['Word', 'Count']
counts_df.columns = cols
%brunel data('counts_df') bar x(Word) y(Count) sort(Count) transpose :: width=800, height=640
Explanation: Counts of Surveys by Section
Let's graph some basic counts of surveys with Pixiedust. Note how you can change the graph type graphically.
End of explanation
from pyspark.mllib.feature import Word2Vec
inp = words.map(lambda row: row.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(inp)
synonyms = model.findSynonyms('good', 40)
for word, cosine_distance in synonyms:
print "{}: {}".format(word, cosine_distance)
Explanation: Machine Learning
Now that we have an idea of our data, we can move into our actual modeling and analysis. An interesting thought exercise might be to determine what words are mostly associated with a good sentiment or a bad sentiment.
We will use the Spark MLlib library Word2Vec to find the cosine distance synonyms for terms 'good' and 'bad'.
End of explanation
values = map(lambda x: x[1], synonyms)
labels = map(lambda x: x[0], synonyms)
syn_df = pd.DataFrame(synonyms)
cols = ['term', 'score']
syn_df.columns = cols
%brunel data('syn_df') bar x(term) y(score) sort(score) transpose :: width=800, height=640
Explanation: Most Associated Good Synonyms
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
!pip install wordcloud --user nltk
from wordcloud import WordCloud, STOPWORDS
wordList = " ".join([x[0] for x in synonyms for times in range(0, int(x[1]*10))])
wordcloud = WordCloud(stopwords=STOPWORDS,
background_color='white',
relative_scaling=.5,
width=2400,
height=2000,
).generate(wordList)
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
Explanation: Word Clouds
Word clouds are a nice human way to understand the relationship and weighting of sentiment in a free text field. They're not scientific, but they do help us to dig deeper.
Word Clouds for Good Terms
End of explanation
synonyms = model.findSynonyms('bad', 10)
for word, cosine_distance in synonyms:
print "{}: {}".format(word, cosine_distance)
values = map(lambda x: x[1], synonyms)
labels = map(lambda x: x[0], synonyms)
syn_df = pd.DataFrame(synonyms)
cols = ['term', 'score']
syn_df.columns = cols
%brunel data('syn_df') bar x(term) y(score) sort(score) transpose :: width=800, height=640
Explanation: Most Associated Bad Synonyms
End of explanation
!pip install watson_developer_cloud --user nltk
import json
from watson_developer_cloud import ToneAnalyzerV3
tone_analyzer = ToneAnalyzerV3(
username='e6eb62c2-cb6f-4036-9ad8-666afd8cd185',
password='wxoRyaPj1d4m',
version='2016-02-11')
toneRows = spark_df.select("CommentText").take(100)
tones = []
for r in toneRows:
tones.append(json.dumps(tone_analyzer.tone(text=str(r)), indent=2))
from collections import defaultdict
score_list = defaultdict(list)
# Build Dict of Tonal Scores
for i in tones:
data = json.loads(str(i))
for r in data['document_tone']['tone_categories']:
for score in r['tones']:
score_list[score['tone_name']].append(score['score'])
# Average Tonal Sentiment by Tone Category
avgDict = {}
for k,v in score_list.iteritems():
# v is the list of grades for student k
avgDict[k] = sum(v)/ float(len(v))
print("Tone Category Averages")
print(json.dumps(avgDict, indent=2))
# Display Tonal Scores Human Readable
for i in tones:
data = json.loads(str(i))
for r in data['document_tone']['tone_categories']:
print(r['category_name'])
print("-" * len(r['category_name']))
for j in r['tones']:
print(j['tone_name'].ljust(20),(str(round(j['score'] * 100,1)) + "%").rjust(10))
print()
Explanation: Watson Tonal Analysis
Sentiment analysis is relatively trivial at this stage, and most survey vendors provide a basic NLP implementation. I feel we can go deeper and perhaps find a larger insight with more sophisticated analysis. Let's leverage IBM's Watson ML platform via RESTful API calls and see what we can determine.
Tone Analyzer uses linguistic analysis to detect three types of tones in written text: emotions, social tendencies, and writing style. Use the Tone Analyzer service to understand emotional context of conversations and communications. Use this insight to respond in an appropriate manner.
End of explanation
import requests
url = 'https://kafka-rest-prod01.messagehub.services.us-south.bluemix.net:443/topics/sentiment'
headers = {'Content-type': 'application/vnd.kafka.json.v1+json',
'X-Auth-Token': '0voZA4gXyOWP4ORu0LxarghaHMZaDwI0eDErLGoYyAProwgB'}
response = requests.put(url, data=tones[0], headers=headers)
print(response)
response = requests.get('https://kafka-rest-prod01.messagehub.services.us-south.bluemix.net:443/topics/sentiment', headers)
print(response)
spark_df.saveAsParquetFile("swift://notebooks.spark/surveys.parquet")
Explanation: Thank you for your time!
End of explanation |
9,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/hanford_variables.png">
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: 3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
Step7: Now using statsmodels | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
import numpy as np
Explanation: <img src="images/hanford_variables.png">
1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv("data/hanford.csv")
Explanation: 2. Read in the hanford.csv file
End of explanation
df
df.describe()
df.hist()
Explanation: 3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
df.plot(kind='scatter',x='Exposure',y='Mortality')
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
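For reference, the Pearson r that df.corr() reports can be computed by hand; a toy pair of series (made up, not the Hanford columns):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.2, 1.9, 3.4, 3.8])

# Pearson r = cov(a, b) / (std(a) * std(b))
r = ((a - a.mean()) * (b - b.mean())).mean() / (a.std() * b.std())
print(r)  # close to 1 for a near-linear relationship
```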
End of explanation
lm = LinearRegression()
data = np.asarray(df[['Mortality','Exposure']])
x = data[:,1:]
y = data[:,0]
lm.fit(x,y)
lm.score(x,y)
m = lm.coef_[0]
m
b = lm.intercept_
b
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
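For a single predictor, the slope and intercept that the fit finds have a closed form worth knowing; a toy check (the numbers below are made up, not the Hanford data):

```python
import numpy as np

# Toy 1-D regression data (not the Hanford measurements)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.0, 6.2, 7.9])

# Closed-form OLS for one predictor: m = cov(x, y) / var(x), b = ybar - m * xbar
m = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b = y.mean() - m * x.mean()
print(m, b)
```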
End of explanation
df.plot(kind='scatter',x='Exposure',y='Mortality')
plt.plot(df['Exposure'],m*df['Exposure']+b,'-')
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation
lm.predict(10)
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation
import statsmodels.formula.api as smf
lm = smf.ols(formula='Mortality~Exposure',data=df).fit()
lm.params
intercept, slope = lm.params
df.plot(kind='scatter',x='Exposure',y='Mortality')
plt.plot(df['Exposure'],slope*df['Exposure']+intercept,'-')
plt.xkcd()
df.plot(kind='scatter',x='Exposure',y='Mortality')
plt.plot(df['Exposure'],slope*df['Exposure']+intercept,'-')
lm.summary()
lm.mse_model
lm.pvalues
Explanation: Now using statsmodels
End of explanation |
9,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continued
Step1: More Text Analysis
Step2: Let's analyze the frequency of words that show up in question texts that have __ans__ in the 75th or above percentile
Step3: Since it's an intensive task to create a correlation on each of the 7615 words, most of which only appear once anyway, let's instead just work with the most frequent 1,000 words.
Step4: To make our task even less intensive (running the analysis, which I will describe below, took longer than 15 mins on the 1000 words), let's do it in chunks. Details below.
Step5: We will run each word and look up the correlation.
Step6: Would be nice to add the length and frequency.
Step7: Now let's redo the plot.
Step8: We could use zooming into 0-50 freq as it is apparent that higher frequency words have higher correlation.
Step9: So the previous conclusion holds for words that are three or four letters long.
Step10: Nothing more is discernible at this point.
To check our work, let's go through the code. Say, we pick out[19][2], the second word in the 19th slice
Step11: The series has 9000 entries of zeros and ones--zero means the question text in that row doesn't contain the keyword out[19][2]=research. So, looking at the mean of the series, we can tell data points with a question text containing this word are 0.6% of the dataset.
On the other hand, we are calculating the correlation between __ans__ and the above series of 0's and 1's. This effectively tells us if we should weigh question texts with the particular word more.
Now, we will check the correlation in the remaining 19 chunks as we did with the 19th chunk. Simply run corrIt(index).
We also need to get rid of this to avoid multiple repeat errors.
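In miniature, correlating a numeric target with a 0/1 keyword indicator (the point-biserial correlation) looks like this, with made-up data:

```python
import pandas as pd

# Made-up mini dataset: target values and texts
demo = pd.DataFrame({
    "__ans__": [5.0, 1.0, 4.5, 0.5],
    "question_text": ["research grant", "lunch", "research lab", "weather"],
})

# 0/1 indicator: does the text contain the keyword?
flag = demo["question_text"].str.contains("research").astype(int)
print(flag.mean())                 # share of rows containing the word
print(flag.corr(demo["__ans__"]))  # point-biserial correlation
```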
Step12: Calculate the correlation of all the lists
Step13: Since the out list is randomized (see svf_list_ section) and because computing corr_big_list takes quite a while, I have saved a copy of a pair of out and corr_big_list as out_1 and corr_big_list_1 for convenience
Step14:
Step15: Ok university does really well compared to others, with correlation of 0.10. What if we look at the combination of the top 3 words.
Step16: Let's do even more combinations.
Step17: Notice that the second highest correlation coefficient could be in the list containing the first highest one; if that is the case, we have missed it when we picked the top 3 above. To avoid this let's pick the top 3 in each list in corr_big_list_1.
Step18: This is an improvement over the previous combination! What if we do only top 2?
Step19: Ok so not as great as top 3 word combinations. I will create a feature for the top 3 (and other features explored in this and the previous notebook) in the next notebook.
Number of Followers
Step20: We take the sum of the followers of each topic it appears under because, naturally, the chances of a question being viewed increases with the number of topics it appears under, garnering a wider audience. | Python Code:
import pandas as pd
import json
json_data = open('../views/sample/input00.in') # Edit this to where you have put the input00.in file
data = []
for line in json_data:
data.append(json.loads(line))
data.remove(9000)
data.remove(1000)
df = pd.DataFrame(data)
cleaned_df=pd.DataFrame(data[0:9000])
data_df = cleaned_df.copy()
# extra libraries
from plotnine import *
Explanation: Continued
End of explanation
# The 75th and 90th percentile of the `__ans__` column.
data_df['__ans__'].quantile([0.75, 0.9])
Explanation: More Text Analysis
End of explanation
# svf stands for seventy-fifth
svf_words = data_df[data_df.__ans__ >= 4.019578][['question_text']].question_text.values
svf_words = ' '.join(svf_words).split()
svf_unique_words = sorted(set(svf_words))
svf_list = []
for word in svf_unique_words:
if len(word) >= 3:
a = [int(svf_words.count(word)), str(word)]
svf_list.append(a)
svf_list_df = pd.DataFrame(svf_list, columns=['freq', 'word']).sort_values(by=['freq'], ascending=False)
svf_list_df.describe()
Explanation: Let's analyze the frequency of words that show up in question texts that have __ans__ in the 75th or above percentile
End of explanation
# first sort the whole thing
svf_list_df = pd.DataFrame(svf_list, columns=['freq', 'word']).sort_values(by=['freq'], ascending=False)
# pick out the most frequent 1000 words
svf_words_freq_sorted = svf_list_df['word'][:1000].sample(frac=1) # the `sample` method randomizes all the rows (frac=1) after picking the top 1000
# convert into an array
svf_list_ = svf_words_freq_sorted.values
svf_list_[0:30] # "head" of the array
Explanation: Since it's an intensive task to create a correlation on each of the 7615 words, most of which only appear once anyway, let's instead just work with the most frequent 1,000 words.
End of explanation
# divide svf_list_ into chunks
out = []
def chunkIt(seq, num):
    avg = len(seq) / float(num)
    last = 0.0
    while last < len(seq):
        out.append(seq[int(last):int(last + avg)])
        last += avg
# the array `out` has 20 subarrays containing 50 words
chunkIt(svf_list_, 20)
Explanation: To make our task even less intensive (running the analysis described below took longer than 15 minutes on the full 1,000 words), let's do it in chunks.
End of explanation
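As an aside, the same chunking can be done in one call with numpy's `array_split`; this is an alternative sketch, not the notebook's method:

```python
import numpy as np

words = [f"w{i}" for i in range(1000)]
chunks = np.array_split(words, 20)  # 20 roughly equal chunks
print(len(chunks), len(chunks[0]))  # → 20 50
```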
# give it an index and it will return the correlation array of the words in `out[index]`
def corrIt(idx):
    var = []
    for i in range(len(out[idx])):
        a = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(str(out[idx][i]))) else 0)
        var.append(a.corr(data_df['__ans__']))
    return var
# return a sorted dataframe with words and their correlation in appearance
def dfIt(idx):
    corr_list = corrIt(idx)
    mash_df = pd.concat([pd.DataFrame(corr_list, columns=['cor']),
                         pd.DataFrame(out[idx], columns=['word'])],
                        axis=1)
    return mash_df.sort_values(by='cor', ascending=False)
df_19=dfIt(19)
df_19.head()
Explanation: We will run through each word and compute the correlation between its presence in a question text and __ans__.
End of explanation
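To make the idea concrete, here is a minimal sketch (toy data, not the notebook's dataset) of correlating a 0/1 word-presence indicator with a score column:

```python
import pandas as pd

df = pd.DataFrame({
    "question_text": ["What is the best way to learn?", "Why is the sky blue?",
                      "Best books on startups?", "How do magnets work?"],
    "__ans__": [5.0, 1.0, 4.0, 1.5],
})
# 1 if the word appears in the text, 0 otherwise
indicator = df.question_text.str.contains("best", case=False).astype(int)
# Pearson correlation of the indicator with the score
print(round(indicator.corr(df["__ans__"]), 2))  # → 0.97
```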
def enrich(data_f):
    data_f['len'] = data_f['word'].apply(lambda x: len(x)) # adds length
    data_f['freq'] = data_f['word'].apply(lambda x: # grab frequency of the word from `svf_list_df`
                     svf_list_df[svf_list_df['word']==x].iloc[0].freq)
    return data_f
enrich(df_19).head()
(ggplot(df_19, aes('freq', 'cor', color='factor(len)'))+ geom_point())
df_19['len'].value_counts(bins=4)
# divide length into bins
bins = [2.9, 4.9, 7.9, 12] # bins will effectively be [3, 5) etc. for integer lengths
group_names = ['[3, 5)', '[5-8)', '[8-12]']
df_19['len_bins'] = pd.cut(df_19.len, bins, labels=group_names)
df_19.head()
Explanation: It would be nice to add each word's length and frequency as well.
End of explanation
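A quick sketch of how `pd.cut` maps integer lengths into these bins. Note the default `right=True`: the intervals are right-inclusive, which for integer lengths matches the half-open labels used above:

```python
import pandas as pd

lengths = pd.Series([3, 4, 5, 7, 8, 12])
bins = [2.9, 4.9, 7.9, 12]
labels = ['[3, 5)', '[5-8)', '[8-12]']
# With right=True (default) the intervals are (2.9, 4.9], (4.9, 7.9], (7.9, 12]
print(pd.cut(lengths, bins, labels=labels).tolist())
# → ['[3, 5)', '[3, 5)', '[5-8)', '[5-8)', '[8-12]', '[8-12]']
```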
(ggplot(df_19, aes('freq', 'cor'))
+ geom_point(aes(color='len_bins'))
+ geom_line(aes(group='len_bins'), size=0.1)
)
Explanation: Now let's redo the plot.
End of explanation
(ggplot(df_19[df_19['freq']<50], aes('freq', 'cor'))
+ geom_point(aes(color='len_bins'))
+ geom_line(aes(group='len_bins'), size=0.1)
)
Explanation: Let's zoom into the 0-50 frequency range; it is already apparent that higher-frequency words tend to have higher correlation.
End of explanation
(ggplot(df_19[df_19['freq']<15], aes('freq', 'cor'))
+ geom_point(aes(color='len_bins'))
+ geom_line(aes(group='len_bins'), size=0.1)
)
Explanation: So the previous conclusion holds for words that are three or four letters long.
End of explanation
print(out[19][2], str(out[19][2]))
data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(str(out[19][2]))) else 0).describe()
Explanation: Nothing more is discernible at this point.
To check our work, let's go through the code. Say we pick out[19][2], the word at index 2 of the 19th chunk.
End of explanation
# Searching for `\\` it only appears once as `\\sin` and `++` only appears in `C++`
# .index raises ValueError when the word is absent; the bare except swallows it,
# and the `!= 999` check is always true when the word is found
for i in range(20):
    try:
        if list(out[i]).index('C++') != 999:
            print('C++ was at ', i, list(out[i]).index('C++'))
            out[i][list(out[i]).index('C++')] = 'Cpp'
    except:
        pass
for i in range(20):
    try:
        if list(out[i]).index('\\sin') != 999:
            print('\\sin was at', i, list(out[i]).index('\\sin'))
            out[i][list(out[i]).index('\\sin')] = 'sin'
    except:
        pass
for i in range(20):
    try:
        if list(out[i]).index('(or') != 999:
            print('(or was at ', i, list(out[i]).index('(or'))
            out[i][list(out[i]).index('(or')] = 'or'
    except:
        pass
Explanation: The series has 9000 entries of zeros and ones; a zero means the question text in that row doesn't contain the keyword out[19][2] = 'research'. So, looking at the mean of the series, we can tell that data points whose question text contains this word make up 0.6% of the dataset.
On the other hand, we are calculating the correlation between __ans__ and the above series of 0's and 1's. This effectively tells us whether we should weigh question texts with that particular word more.
Now, we will check the correlation in the remaining 19 chunks as we did with the 19th chunk. Simply run corrIt(index).
We also need to rename a few words first: str.contains treats its pattern as a regular expression, so words like C++ and \sin raise "multiple repeat" and similar regex errors.
End of explanation
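A minimal sketch of the regex pitfall, plus two alternatives (`regex=False` or `re.escape`) that would avoid renaming the words:

```python
import re
import pandas as pd

texts = pd.Series(["Why learn C++?", "What does \\sin mean in LaTeX?"])

# 'C++' is an invalid regex ("multiple repeat"), so str.contains raises
try:
    texts.str.contains("C++")
    raised = False
except re.error:
    raised = True
print(raised)  # → True

# Safe alternatives: literal matching, or escaping the pattern
print(texts.str.contains("C++", regex=False).tolist())  # → [True, False]
print(texts.str.contains(re.escape("C++")).tolist())    # → [True, False]
```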
corr_big_list = []
for i in range(0,20):
    corr_big_list.append(corrIt(i))
Explanation: Calculate the correlation of all the lists
End of explanation
corr_big_list_1=[[0.00030816712579101571, 0.0044730016004795375, -0.018184254245652551, -0.011636498757889403, -0.0098418778581828206, 0.026097559924847694, 0.013042976799379082, -0.0024952856521519564, -0.0014278019418653462, -0.0066361332379731679, 0.034104606674625594, -0.0035300160142935299, -0.003388724944028706, -0.00080654880511365698, 0.0064072351728142116, -0.0066817381346244509, 0.00040979589589246629, -0.010358276711574618, 0.024729665673063398, -0.0048034598229090199, 0.1023288271726879, -0.0033370673570963859, 0.035198130429207504, -0.0023873121233479811, -0.0075365026756989712, -0.0043573283771661201, -0.00091066872737137863, 0.0085851641623233832, -0.0069155008482646016, -0.0033476240638101243, -0.00033489503766559448, 0.014024682957343831, -0.0066328075447330643, 0.0077050217122674606, 0.0083476497238622431, 0.00027866044671089101, -0.0046298488103451371, 0.0094317686368927591, 0.0031286448646075495, -0.007465595807550328, -0.0062456567924396708, -0.0051985229784726949, -0.0087520730389423484, 0.0014825103568157643, 0.025447137503987683, 0.0031569477232396693, -0.007618273825201566, 0.022717987737610412, -0.0065577550547866823, 0.0024386167711497784], [-0.0028881589497217167, 0.010970003941513328, 0.0012167271461354528, -0.0021697742804818623, 0.024265919636645681, -0.00024240895328244371, -0.0034342361979813633, -0.0032547763290757593, 0.0020840162391008521, 0.016400747472871784, 0.0037064189243741956, -0.0038322475167536837, -0.0058279101130724415, -0.003531779753513931, -0.0033060631530503915, -0.0047608726154802911, -0.0015200622602567389, -0.010092491967139818, -0.0026847949372279622, -0.014343373247904022, -0.0024851015775787261, 0.0072252372611917833, 0.012332709730471519, 0.02166493696952319, 0.0020093176341874639, 0.0022830828870203054, 0.0027891198322104837, -0.0025104065772374035, -0.00055134449132394057, -0.001007314989035814, -0.0037201198144300551, 0.006633708403083834, -0.005827203317157627, 0.0027196570088708887, 
0.013579865687406288, -0.0022465082543230513, 0.0017858139450578983, 0, 0.0013523228325291912, 0.0039347553975315414, 0.042502038772258076, -0.00056345496362009508, -0.0089123906202641118, -0.0029185246867027741, 0.0003741297966530131, -0.006718165630880242, -0.0038299141483705431, 0.00076828958569195448, -0.0073595881806147595, 0.0065152552923246213], [-0.0055662554304464693, 0.0021857985438673176, -0.006157833996541494, 0.020118287899894512, -0.0064322125267556015, -0.0033640023093176388, -0.0034487164022805384, 0.0010694360014762655, 0.010806860685386742, 0.019611096216255855, 0.06473026552187619, -0.007815658265600196, -0.0012049951012723396, 0.0096650532832685109, -0.0081848479110903649, 0.0024824638193696215, 0.045228364686901064, 0.020919625833423588, -0.0057262034628689897, -0.0019546889972717834, -0.0017681380449655051, 0.0019305929838144295, -0.0031441860996190675, -0.0091759626612213149, 0.0018272732681465899, -0.003419877793160774, 0.024590202172987598, -0.0071735609830342996, -0.0019238105997293742, -0.01032902061134779, 0.010247275627760207, -0.00056781109993696277, -0.0074412546495274809, 0.0022089926031274303, -0.0054321061719030551, -0.010937530508012645, 0.0050621539189750044, 0.011163850401181617, -0.00097233345353591328, -0.0067471022553516153, 0.0018617529033470284, -0.011395546776726858, 0.0036492808301568512, 0.026283788933778612, -0.0025309817485629493, -0.0082404999294812854, 0.011175686406548348, -0.00084449398642895326, -0.0038134751677193356, 0.0031286448646075495], [0.0039335300005773734, -0.0027872361334454962, 0.025109668718567041, -0.0024729319841923386, -0.0054221514177400405, -0.0046140021830209455, -0.0029644205787335356, 0.0058658412666491933, 0.0020648373671194958, -0.0030732652382889514, 0.0043183460398299178, -0.0057778457326998888, -0.0052229711454849287, -0.0087179001899529308, -0.0070100485247778727, 0.010058161838999092, -0.013467420834281653, -0.0076595558346031116, -0.006846817797872766, 0.0039897170246467667, 
-0.00225475456179865, 0.0013765911353118988, 0.057405017749717721, -0.01071715522122787, -0.0063368385283841403, 0.0063244299368342832, -0.0051365796481612006, 0.00025895772891952227, -0.0036898711970211047, -0.004416346605422782, -0.0073325950512096111, -0.0083767904154685175, -0.002655266274509108, 0.0040331504683418593, -0.0059432116691789738, -0.011600644473004301, -0.0027082071049287838, -0.0089337864054543428, -0.0033391570284709042, 0.0056973588788680057, -0.0041113398203742732, -0.0083833449232279429, -0.0019220847118080551, -0.0046328240228199523, 0.052146092864333041, 0.0094031766500344595, -0.0035331066727011832, 0.020810083453388445, -0.0076294707410923378, 0.01827868115925433], [-0.0040999983406437204, 0.0071064687418175591, -0.0041744351036019541, -0.0092409000028602969, 0.0083413418554632458, 0.0061532729187045278, 0.024004010174493714, 0.0082475317939110435, -0.0029356768647843911, -0.0052995819515106448, -0.0029477837556126695, -0.0059717367506664154, -0.0012952588264538371, 0.00021457496458008647, 0.028008637897122796, -0.003350012091366671, 0.0067106966916407112, -0.0017859386698853088, 0.0019353708646550583, 0.015645789981094838, 0.0067197670852938776, 0.0026522558154171249, -0.0003834322656298732, 0.0026174491573469076, 0.0018674784322043995, -0.012823927615131186, -0.0025581337564028828, 0.036261302754747878, 0.02945685709070631, 0.0017672842404350584, -0.026653736529446911, 0.021783953923978368, -0.0063488181493658296, 0.0093811436284397982, 0.0012313161467450836, -0.0027337978275053599, -0.00012914090885128123, -0.0061367746890713549, -0.0037267591462406286, 0.0026482795080635798, -0.0023140109871226957, -0.0039020252953801326, 0.00072875607763547845, 0.0015743850131852326, 0.0035940581761771916, 0.024590202172987598, 0.024655095186671205, 0.014183880200424995, -0.0020420290703361045, -0.014693842607312928], [0.0038503522695805346, -0.0023517091676619525, -0.010318305672876588, -0.00042244225200082811, -0.0039518917421703355, 
-0.014099966557010814, 0.0024579510388644419, 0.0030376607863483707, 0.00045128384453019477, 0.021650811452954503, 0.0013021710725960036, 0.0083341419580174169, 0.0022297621686237961, -0.0020111774561364479, -0.0071181590003503133, 0.0017916488125445846, 0.01113642310938299, 0.010138389559395696, 0.025112717036490935, -0.0090855374891545132, -0.0082730701654181223, 0.00675544616502422, -0.0058234912322820669, -0.011867624622636167, -0.00083783076087712207, -0.0031845608898710001, -0.0031842325860524097, -0.0069703493237041414, 0.0006256344722183468, 0.0033393274744845792, 0.00046985420465254477, -0.0027841821451274596, 0.0020286587868865386, -0.0021104300455953248, -0.0034687677950377364, -0.0045299810568424003, 0.0047083750935003957, -0.0063750052148376698, 0.0046934237754060384, -0.004856157897193833, 0.0054115145682798035, 0.0051035416338226867, -0.0042193702592235195, -0.0032130289521849786, 0.011696984645701548, -5.8902068559955218e-05, 0.012184247199221634, -0.0033517694188394372, 0.024330266219235487, 0.013076039980225058], [-0.0038451914011656519, -0.00021363649429256967, 5.9090333593991127e-05, -0.0073023692941335643, 0.015061958681791316, 0.009585116790732635, 0.016930010402586914, 0.022387589787546621, -0.0026631965047689875, -0.0051318945834591427, 0.032753509899166221, 0.00041031453723438393, -0.0065245314768517769, -0.0071417760259205295, 0.0027903080571627941, 0.00040036555401640753, 0.028385483975969798, -0.0015095560242410814, -0.007638712730518386, -0.0040741820181012392, -0.010000785517681023, -0.0087611653520964939, 0.011136944368517943, 0.0051680346193349706, 0.0055985008002052026, -0.00080947815204772118, -0.0081609889727837955, 0.0081961287857103184, -0.011227820874658572, -0.010238005881836728, -0.003419877793160774, -0.0021931925323416912, -0.00656274344782567, -0.0088650888517708606, 0.0094240204530220174, 0.0021069579342005264, -0.0027983374335761378, 0.0030171507778535872, -0.01097187825221722, 0.00074066758851370259, 
0.0050445268070984867, -0.007870968171528674, -0.010270834859436867, -0.0039036695585955416, -0.005993248340466906, -0.0029204620248914669, -0.009223794131203198, -0.0076433156190382915, -0.0012567290676569255, -0.0052400047482109554], [0.0038915939735842831, 0.010419525081562821, -0.0014618143848855159, 0.010263483727329549, -0.010147434195145164, -0.0016170715462519084, -0.0025132968406928499, -0.0031254701326580581, -0.0032887401182045213, -0.0034556989966380669, -0.012067012166186676, 0.0058702909130399997, 0.012847738210386711, 0.00013033764649521306, -0.0018411710863566961, 0.0039131486150130656, 0.002314308546943724, -0.0040535444224627697, 0.0050241272805715371, -0.001702124258244042, 0.0075564504686124068, 0.001550024429751934, -0.002887086432830136, 0.0072807186084017771, 0.0029205697320207231, 0.00081900061778488495, 0.0023003886197646818, 0.0032932866844910705, 0.009399055035279498, -0.0017197794980169281, -0.0012322012735645216, -0.0052786005677411627, -0.0088920062953825232, 0.03970866645777648, -0.0057586510024742726, 0.0025391228728078299, -0.00021039548638815764, -0.014662151053035876, -0.010048852910795273, -0.0040067115025220366, -0.0053217909670061174, 0.0029828715085204946, -0.0021048820455902063, 0.0035449319017630287, 0.00069485710144022265, 0.03596378835673901, -0.00057400669361574205, 0.018226979955979515, -0.0032560820862573042, 0.002008256246220017], [-0.015038653093675068, 0.01052483364975592, -0.0079863665852985873, -0.0023034061867802176, 0.0017043042379101528, -0.0061036426811022894, 0.003225738154128278, -0.0037952025903429682, 0.0021528158939119794, -0.002769857822606899, 0.0019854337272486576, 0.030264366126329077, 0.010970015145314302, 0.0082252611128555524, -0.014575735929070343, -0.0026135399341919627, 0.0067656588201241716, 0.0035870745225404232, -0.0020654670087214807, -0.00025227353405862633, -0.00066890175365683709, 0.0067113280622261521, -0.0063130420494131728, 0.0063184052216883734, 0.0037607464480548677, 
-0.0045919200550520282, -0.0053460344591380405, -0.0032186246376210274, -0.0025581337564028828, -0.0070792437043274712, 0.045281035540934676, -0.0019972461158679657, -0.0074370659997657967, 0.029401468531120169, -0.00095022065973632665, -0.0057784924425085806, -0.005935098965299446, 0.00027670428440446494, 0.0045496340505188334, -0.0038764324918879062, -0.0030744393824485073, -0.0045624099980782181, -0.0046489715113922402, -0.0059985173639422349, 0.011496458322218236, 0.008751826213773484, -0.010452830003218071, -0.0040717383453025766, -0.0090070215124744505, -0.0040278223122745998], [0.0083550610277740023, -0.029050900883528373, -0.0059106816833672231, 0.0025835362973095173, -0.00047749693317631898, 0.018436905510209354, -3.8100417005802405e-05, -0.0055563259956187333, 0.0086672026387371922, 0.00063187176136450995, 0.00037380900035735196, -0.0012005370222728615, -0.0049455635833718049, -0.0018319569495935028, -0.005709456317318686, -0.009105033441588176, -0.0023731954146389753, -0.0015316030560129272, -0.0060180999470846449, -0.0038337183122445278, -0.0050635482441812284, 0.011996320748298336, 0.0043345341132875279, -0.0060829850440349388, 0.00078186190151715142, -0.0065426159576585695, 0.0011657955259697095, 0.022621877650546317, 0.0054682153838136009, -0.0098564257470907576, -0.0040672990175107671, -0.0047926150452541382, 0.012519106443645582, 0.0037378607394021532, 0.0012509562605219327, -0.0071523724079871447, 0.0051035416338226867, -0.003997929253643724, -0.017743068830958764, -0.0020268299884030048, 0.00037163709034099696, -0.0081609889727837955, 0.0026093962558993906, -0.0059053378170682881, 0.0095691906133355611, -0.0012167806386807931, -0.008495618197349666, -0.0050955885526389631, 0.0071940750924432886, 0.012626858271881676], [-0.0062033595185335116, -0.0044518895663623517, 0.010122869733204186, 0.0052590591324923261, -0.0017133949744627213, -0.0052020701301213976, -0.0027011730028023064, -0.0035300160142935299, 0.012526541348959797, 
0.011307020773258479, 0.011127857534022165, -0.0021683129538185423, -0.0031086399060015427, 0.008751826213773484, 0.010192807934568253, -0.0058626654583513278, -0.0057782506601284471, -0.0026096042760198833, -0.0035230359831737402, -0.0070266762569692635, 0.017119928387593034, -0.0018287755740245776, -0.0052865659125757745, 0.0061532729187045278, -0.0060899997886011354, -0.0039920757611741553, 0.0010018136069110566, -0.0032277025135943717, 0.0023945481447076402, -0.0003937682063627097, -0.0059644708495630271, 0.0067197670852938776, -0.0062245675277347815, -0.00041636302583883882, -0.005404639563105028, -0.0054455124177861568, 0.0098012868366185585, -0.010238308303535339, -0.00079942227911325371, -0.0056243488165086866, 0.048650643880207463, 0.007901494859536189, 0.0096640776546497859, 0.021838105204411987, 0.018903671245551652, -0.0035459246645856732, 0.0010694360014762655, 0.0025867626300677495, 0.0035236311152237938, 0.026537827187008613], [0.0079532169686003466, 0.004982873762155819, 0.0082475317939110435, -0.0059805694251638588, -0.0071850066596413648, -0.001220919583676416, 0.012690952660123449, -0.0074786033472630476, -0.0025472465008443534, 0.00034019764663567419, -0.0078830509781961048, -0.0070709827639764038, -0.0034518435675685525, 0.00062669661274122922, -0.0067005333108912371, -0.00020198578267311394, -0.0056746449828797138, -0.014751886177321712, -0.00019718970001080682, -0.0011064547317243428, -0.0036295370177789514, -0.0061616445228561646, 0.00078559356232328734, -0.0059289095438923274, 0.0032335371246084831, 0.0023673408825878651, -0.0026761516318406994, -0.0089139720371155268, -0.0015100285871206538, -0.0041163601446803869, -0.011970782074637518, -0.0058268009433599874, -0.0011529303586283719, -0.0028351384834251069, 0.0085715687236584456, -0.0028711414578712438, -0.0035675113215551577, 0.0064486515228663532, 0.00066490359591693635, 0.0019569901055810487, -0.0029798744279565968, 0.0052335700839740654, -0.0050921547961718053, -0.0016731643899604532, 
0.0068279774697896575, -0.01097187825221722, -0.0049064981602198431, 0.0052552783055297781, -0.008542719756438227, 0.010249949199206944], [0.0067511968300894903, 0.01074805457971997, 0.0048257191735579345, 0.0512523800623211, 0.0070232356282575467, 0.0021657086982868148, -0.0036049896889845507, 0.01557247414537136, 0.012710019789090081, -0.0016467322913514856, -0.0025895005870781732, -0.001207033861826496, -0.0084358836143301249, 0.004073308020897521, -0.0054232749337083429, -0.011140710146534567, -0.0073726276234351339, -0.0024615635962582918, -0.0079223903990460892, 0.0025891567456240005, -0.0025938251481470997, -0.0094730761346421832, 0.0098950230562226838, -0.011570294939930082, -0.0037208554841798706, -0.0037664071327770544, -0.0054443717842090908, -0.010170767261880379, 0.021720509764465865, 0.027566755054238782, -0.0014204949086361678, -0.00067570561456134709, -0.0062955698076899895, -0.0070369881036013073, 0.0037570868752071647, -0.0038073323829991715, 0.00063709037595732982, -0.0099654500974158903, 0.0019289567514207307, -0.0030725575177697078, 0.013224318848091166, 0.0090585549607551438, 0.012125487227658941, -0.0041035459076260213, -0.0037184681746698835, -0.0058574906817098656, -0.0011012297309672618, 0.0065438455956716331, -0.0058987830850976654, 0.0098185565063758744], [-0.012654242717889164, -0.0025672486694806335, -0.0059432116691789738, -0.0024592655951068501, -0.010110302732871794, 0.0071920269311909583, -0.00027560280488812431, -0.0035401219539874791, 0.0041007750552731013, -0.0042252191122454409, -0.0023071670851554623, 0.0023041824897939052, -0.0056328047875145388, 0.012184247199221634, 0.0015414206672716782, -0.0056902740887640009, 0.0083012909377272633, -0.01122654036800845, -0.0073595881806147595, 0.0034273901043434438, -0.0030985902617879043, 0.011264638317512199, 0.002264724791756427, 0.083267258086572302, 0.002363614050036046, -0.015750555377159924, -0.0015402932161761242, -0.0043299906415134731, 0.0073033837659305005, 
0.0034956063277067628, 0.0017565046652500006, -0.0030831483356614348, 0.028146282369618601, 0.022387589787546621, -0.0025617014493173134, 0.023671991914385355, -0.00024240895328244371, 0.0038770481984026638, -0.010939882288173936, 0.020665870016532816, 0.0089047764907555938, -0.0085764897976901375, -0.0044136370500696956, 0.001812700921538562, -0.00011281897515511597, -0.0045931805225767376, 0.010419525081562821, -0.0048894707574018799, 0.02014725179242742, 0.0047104180633297018], [-0.0096014833278657095, -0.0084035894378828285, -0.007687104334526647, -0.0032698757357000007, -0.0002490757315802865, 0.0094831683873128451, -0.0060135313225925966, 0.0052814559136878077, -0.0082609161488176701, -0.0065815953517476293, 0.028911686398659542, -0.0062387841699766636, 0.018649412096706459, -0.0086387514022342529, 0.023835397088238858, -0.007200872443893544, -0.0039865446015948285, -0.0014114544661985649, 0.0089546843910136235, 0.0010229880009275955, 0.0048040834877872071, 0.012845813029980504, 0.0058840457340290754, 0.0026560074152205569, 0.063296508519526104, -0.014224800403898305, 0.03596378835673901, 0.018608853514283284, -0.01426558554787689, 0.0029227349201175552, -0.0051281340547188721, -0.0079807481750164885, -0.010372853199477264, -0.012906187365057579, 0.0019139898264629444, 0.0017090667576043609, 0.015070821571564853, 0.0042709081049846819, -0.0025869853028361884, 0.025918395780214397, 0.028273746629177299, -0.010419581778707108, -0.0075044081473940368, -0.0030751222228698177, -0.0069669090572028929, -0.010276761750972554, -0.0012421896632617297, 0.0072527721572508384, -0.0056635312636723429, 0.0058702909130399997], [-0.0087802015564753776, 0.0061344240131735407, -0.0040490001808132286, 0.0011034330032901444, -0.0069003137220430477, 0.00088576454887124548, -0.0014600426102404258, 0.015098060940807928, 0.024048530641768186, -0.0012386590124490863, 0.00084437557204774226, -0.0053624428469605679, -0.0071602930098101569, -0.0036063664453677667, -0.021591582007980128, 
0.0029803689648313786, 0.0051905326298402445, -0.0090061549353531358, 0.0082970732779835635, -0.0089889735369055521, 0.0038477795212605, -0.0052167134225994255, 0.013478903863842984, -0.0055043788146595263, -0.0061685904678544286, -0.0066423722460146727, 0.00015302192820175906, 0.0029769234256276102, 0.0022653130769389293, 0.0079105584421877029, 8.0279506318141298e-05, 0.0073033837659305005, 0.0037936640567481942, -0.0058206070188344819, -0.010046619303296013, 0.0017471093374766327, 0.017561320521540805, -0.021208327242835886, 0.014333644638912358, -0.0064693173896922286, -0.0030789136219182131, 0.052525470021713122, 0.021884857026661174, 0.0027737858726483047, 0.0090359417144925489, 0.0052910367946880947, 0.0088355701438497872, 0.0043544085196156115, 0.0069199902571643818, -0.0027207475289445565], [-0.0042112720095504154, 0.020962190480557689, 0.0082194129759970112, -0.0014268125649219297, -0.004344523384250956, 0.0094924454396460734, 0.0047083750935003957, 0.00027670428440446494, -0.0089440836654303083, -0.0090987944734081429, 0.0018513772554416652, 0.028273746629177299, 0.0023779281251868054, -0.0062296722666043847, -0.0085313780311072132, -0.0016545405254824481, -0.0058218981492385059, 0.0046241950845224439, 0.0041552759059361507, -0.00054828337850201803, -0.0045551229627827901, -0.0081722321955866421, -0.00854946548166843, 0.008751826213773484, 0.0018408717624664229, -0.0088920062953825232, 0.07906941671999583, -0.0011522942112940616, 0.0022941482735665433, 0.009399055035279498, -0.0023376183240288776, -0.0032074811530607909, -0.0071047299300351632, 0.01944013792475649, -0.0053471544470128614, 0.0018023280434369519, -0.0023979830208241014, 0.051545697844733063, -0.0015038968258339158, -0.0008623064155058775, -0.0010626848077561859, 0.008560792079496761, 0.0022219119173328074, -0.0081971187196068392, -0.0058424591442238849, 0.00066339016577460436, -0.0019308599849549438, 0.061596171542990123, -0.0018813873548246653, 0.0090658713789474026], 
[-0.017845640815911449, -0.0031020998236286458, 0.0074217967909285974, 0.0013815752082004041, 0.004554731374488967, -0.01032902061134779, 0.0029333064592933439, -0.0018943445250411608, -0.012346057208533035, 0.0040005905863377024, -0.0071149718237795536, -0.0056675424774434392, -0.0038179001426877179, -0.0087527770065764544, 0.0016277008144263154, -0.011441660383957795, -0.004207129077700063, 0.0031956332922284613, 0.0075273802096894032, -0.0058396240910732373, 0.0039726482495901156, 0.0072165595320878746, -0.0036138256007556984, 0.0024203640999713397, -0.0034887990135648726, -0.0025637511853339257, -0.0051918327709520786, -0.0019197293469865899, -0.010589441659839027, 0.02500362712556695, -0.0023196604677095055, -0.0020271855974564185, 0.0010563758249445066, 0.00033685524899888249, -0.0052017760590021885, -0.0023831774829495105, -0.00053559730589721583, -0.0041271314824054993, -0.0063736038491123172, -0.010937732581657691, 0.024243625733129215, -0.0070185398944262213, 0.0032185303999392365, 8.3203170942328092e-05, -0.0054188180326979751, 0.0012008628335259005, -0.0048832318789797996, 0.03092249831082082, -0.0037854325566301708, -0.0034375517588172075], [0.00015813515885737905, -0.0011657729106589493, -0.0033258425977705963, 0.0093934720462424577, 0.04532092713176402, -4.8400482391087285e-05, 0.0055160873885184494, 0.0027685958180729041, -0.0092795756891307021, -0.0029449826597074675, 0.021714147994139454, 0.003045933521284694, -0.007468941900957266, 0.01498902065157781, -0.0089933393108274483, -0.0037820678942625773, 0.01736522546247465, -0.0025581337564028828, -0.0060658737973498271, -0.0072219553960532964, 0.0394676196235344, -0.0031088576063752135, 0.0045836168511889956, -0.0089691832493192254, -0.0014407189110933116, -0.015159438994250398, 0.016402853375735934, -0.0098402215481742505, 0.0065192793324558426, 0.0037161046354960755, -0.0050913630481300648, 0.014809808503917271, -0.0039102388101285241, -0.005080855427060515, -0.0049834039055323756, 
0.014910440909068373, -0.00059868023197447802, 0.0032719072552370942, 0.010183852139492638, -0.0014403590397732864, -0.010937732581657691, 0.002570934732080235, 0.012689151605211675, 0.0066254286175235471, -0.0026220100540470187, -0.003590299609605033, -0.0010724707894585433, -0.0059423737247605657, -0.0012804402014583382, -0.0019350176014494075], [-0.0028014086225296806, 0.035482090202056657, 0.027856121186101635, 0.005974594875636864, -0.0013263464474456029, 0.0020602344846144473, 0.027423998401378422, 0.042781972805756657, -0.011644335566401787, -0.0021258793391432346, 0.0010452484601072153, 0.028302345251363388, 0.0058991575936129404, -0.0029211774171432688, 0.0018606874621000789, 0.00042462392202935689, -0.0046787216688437957, 0.0011657955259697095, 0.0012008628335259005, 0.005974594875636864, -0.0092852834952489305, -0.0015180298606563363, -0.0054108166068269651, 0.020741808957365951, 0.0052551477092289056, -0.0077721593921502461, -0.013007736290457726, 0.0096358087564050003, 0.0036732647690840181, -0.0040976931857197871, 0.001479998347060363, -0.0070615369329555854, 0.0050205856267242438, 0.0096650532832685109, -0.0049468678995842306, 0.0077092618353251574, 0.00064727155579378116, -0.0016234058822858059, -0.0046065190435684933, 0.0080884437426522945, 0.00071201967042957811, 0.0032932866844910705, 0.016524045060611937, 0.049485131558357411, -0.0086049260357222157, -0.0029833185113560348, 0.00037660084371365622, -0.0082577743215483303, -0.00039780810888356019, 0.0001964981046567112]]
out_1 = [['degree', 'experience?', 'way', 'buy', 'When', 'BITS', 'potential', 'scene', 'Larry', 'sleep', 'With', 'learning?', 'regular', 'importance', 'outside', 'Web', "don't", 'legal', 'like?', 'called', 'university', 'progress', 'list', 'development', 'store', 'help', 'bringing', "you've", 'sort', 'apply', 'according', 'were', 'likely', 'island', 'heavy', 'jobs', 'given', 'now', 'program?', 'sell', 'share', 'clear', 'money?', 'encryption', 'character', 'years', 'between', 'build', 'differences', 'points'], ['matter', 'times', 'planning', 'decide', 'shown', 'women?', 'suddenly', 'raised', 'technical', 'random', 'considering', 'classes', 'than', 'song', 'Korean', 'android', 'changing', 'popular', 'was', 'only', 'asked', 'IIT', 'case', 'have', 'entrepreneur?', 'famous', 'reliable', "someone's", 'kinds', 'law', 'deal', 'significant', 'evidence', 'work?', 'things', 'research', 'all', 'Cpp', 'since', 'advantages', 'physics?', 'custom', 'brand', 'Germany?', 'object', 'process', 'ideas', 'Internet', 'business?', 'society'], ['hard', 'killing', 'career', 'about', 'how', 'getting', 'consider', 'machine?', 'important', 'sin', 'single', 'look', 'his', 'school', 'average', 'who', 'finding', 'least', 'light', 'visiting', 'Japanese', 'why?', 'space', 'general', 'run', 'the', 'college', 'country', 'added', 'account', 'Marc', 'photo', 'names', 'community', "I'm", 'company', 'society?', 'watch', 'following', 'happened', 'really', 'online', 'mistakes', 'around', 'seen', 'live', 'starting', 'Has', 'James', 'program'], ['basic', 'parents', 'come', 'Bangalore?', 'both', 'Top', 'microsoft', 'greatest', 'leading', 'return', 'books', 'job', 'paid', 'design', 'option', 'behind', 'car', 'information', 'vs.', 'economy?', 'Japan', 'little', 'mind', 'past?', 'still', 'been', 'sports', 'U.S.', 'done', 'North', 'player', 'possible?', 'sending', 'creative', 'answer?', 'Where', 'needs', 'idea', 'continue', 'languages', 'trust', 'during', 'create', 'quality', 'Indian', 'analysis', 'Which', 
'prepare', 'possible', 'believe'], ['pictures', 'you?', 'MBA', 'own', 'economic', 'programming', 'would', 'people?', 'creating', 'kill', 'short', 'contact', 'any', 'escape', 'inspired', 'JavaScript', 'coding', 'sources', 'death?', 'win', 'campus?', 'strong', 'animal', 'global', 'animals', 'does', 'startup?', 'interesting', 'most', 'understand', 'for?', 'Why?', "can't", 'Facebook', 'employees', 'why', 'shows', 'else', 'moved', '2013', 'songs', 'summer', 'oil', 'win?', 'Wales', 'college?', 'like', 'photography', 'point', 'app'], ["isn't", 'digital', "What's", 'conference', 'cool', 'day?', 'last', 'Jimmy', 'Will', 'value', 'staying', 'its', 'picture', 'path', 'small', 'model?', 'you', 'University', 'guy', 'back', 'same', 'industry?', 'software', 'good', 'think', 'never', 'over', 'able', 'beginning', 'affect', 'series', 'steps', 'education', 'being', 'nuclear', 'exist?', 'Google', 'fast', 'knowledge', 'working', 'them', 'future', 'active', 'launched', 'near', 'write', 'algorithms?', 'US?', 'movies', 'math'], ['numbers', 'mathematics', 'Java?', 'science', 'know', 'well', 'left', 'movies?', 'fall', 'position', 'instead', 'energy', 'New', 'send', 'that', 'with?', 'most?', 'educational', 'serve', 'view', 'her', 'anyone', 'Who', 'placement', 'Islam', 'required', 'name', 'history', 'service', 'video', 'them?', 'rate', 'already', 'post', 'dollar', 'worst', 'analyst', 'The', 'others?', 'read', 'girls', 'need', 'could', 'while', 'through', 'adding', 'question', 'take', 'based', 'government'], ['can', 'decision?', 'businesses', 'correct', 'using', 'dark', 'social', 'considered', 'areas', 'enjoy', 'down?', 'Python?', "Isn't", 'cause', 'happen', 'And', 'better', 'War', 'IITians', 'explain', 'inspiring', 'stupid', 'kept', 'declared', 'period', 'finance', 'Americans', 'day', 'power?', 'price', 'How', 'having', 'website', 'indian', 'startups', 'relative', 'hiring', 'company?', 'compare', 'support', 'among', 'scientific', 'Michael', 'corporate', 'employee', 'girlfriend?', 'virtual', 
'advice', 'details', 'reduce'], ['ways', 'thing', 'she', 'spend', 'dangerous', 'City?', 'hacks', 'before', 'spread', 'end', 'him', 'physical', 'real', 'life?', 'effect', 'playing', 'because', 'high', 'hire', 'consumer', 'terms', 'safe', 'number', 'jokes', 'Kickstarter', 'daily', 'players', 'per', 'startups?', 'screen', 'across?', 'saved', 'pros', 'cross', 'hurt', 'computer', 'and/or', 'language', 'read?', 'success', 'bad', 'background', 'very', 'form', 'now?', 'year', 'search', 'Best', 'film', 'said'], ['police', 'for', 'model', 'engineer', 'Delhi', 'hate', 'secret', 'direct', 'prevent', 'student', 'medical', 'has', 'estate', 'against', 'standard', 'just', 'word', 'ability', 'actions', 'besides', 'Was', 'problems', 'study', 'coffee', 'embedded', 'group', 'marriage?', 'not', 'favorite', 'apps', 'professor', 'much', 'are', 'Did', 'electricity', 'private', 'future?', 'turn', 'Does', 'avoid', 'season', 'names?', 'current', 'hotel', 'once', 'products', 'get', 'tips', 'site?', 'Apple'], ['services', 'house', 'up?', 'history?', 'english', 'cons', 'Microsoft', 'learning', 'from', 'significance', 'wants', 'normal', 'water', 'year?', 'What', "one's", 'personal', 'great', 'kind', 'launch', 'predict', 'natural', 'USA', 'programming?', 'political', 'e-commerce', 'feel', 'recommendation', 'ever?', 'should', 'train', 'campus', 'family', 'Amazon', 'enough', 'longer', 'improve', 'play', 'highest', 'chance', 'actually', 'art', 'love', 'modern', 'teach', 'desktop', 'machine', 'email', 'often', 'found'], ['out', 'plan', 'people', 'scene?', 'control', 'rank', 'technology', 'child', 'learn', 'time', 'percentage', 'increase', 'yet', 'etc.?', 'full', 'something', 'data', 'and', 'laptop', 'known', 'months?', 'even', 'Kim', 'receive', 'anything', 'connections', 'type', 'credit', 'had', 'choose?', 'where', 'USA?', 'amount', 'late', 'biggest', 'Indians', 'taken', 'marketing', 'foreign', 'Computer', 'Windows', 'types', 'area', 'successful', 'amazing', 'other', 'minimum', 'capital', 'man', 
'there'], ['theory', 'Valley?', 'become', 'India?', 'new', 'state', 'detect', 'did', 'stories', 'application', 'increasing', 'you,', 'country?', 'worth', "doesn't", 'two', 'same?', 'Wikipedia', 'content', 'explanation', 'old', 'long', 'under', 'effective', 'alcohol', 'add', 'either', 'pay', 'but', 'each', 'Have', 'violent', 'early', 'next', 'project', 'financial', 'students', 'companies', 'show', 'memory', 'investor', 'beautiful', 'start', 'events', 'United', 'see', 'soon', 'tech', 'say', 'start?'], ['one', 'posts', 'answer', 'everyone', 'music', 'animated', 'might', 'smart', 'like,', 'laws', 'mean', 'consecutive', 'options', 'algorithm', 'lot', 'courses', 'made', 'person', 'business', 'relationship', 'words', 'convince', 'those', 'within', 'size', 'age', 'goes', 'sign', 'engineering?', 'make', 'necessary', 'features', 'physics', 'movie', 'needed', 'tell', 'women', 'life', 'mobile', 'widely', 'major', 'place', 'websites', 'engine', 'profile', 'camera', 'decision', 'solution', 'Jobs', 'master'], ['free', 'far', 'money', 'PhD', 'travel', 'low', 'rather', 'iPhone', 'web', 'without', 'changes', 'their', 'this?', 'recommend', 'close', 'international', 'military', 'result', 'they', 'scientists', 'such', 'Silicon', 'prove', 'learned', 'intelligence', 'or', 'girlfriend', 'Steve', 'off', 'find', 'yourself', 'allow', 'more?', 'more', 'quantum', 'join', 'media', 'eat', 'level', 'keep', 'world', 'food', 'alternative', 'human', 'places', 'used', 'survive', 'Masters', 'certain', 'Python'], ['available', 'opinion', 'consulting', 'grow', 'going', 'scope', 'into', 'sex', 'Glass?', 'powerful', 'seek', 'members', 'few', 'role', 'men', 'writing', 'challenging', 'put', 'industry', 'Are', 'Would', 'after', 'what', 'stock', 'page?', 'blogs', 'cost', 'Your', 'change', 'access', 'resources', 'engineering', 'makes', 'develop', 'coming', 'triangle', 'craziest', 'age?', 'of?', 'public', 'criteria', 'sound', 'iOS', 'crash', 'stop', 'Quora,', 'America', 'salary', 'lines', 'Area'], ['along', 
'me?', 'recent', 'opposite', 'field', 'Ubuntu', 'Google?', 'language?', 'source', 'this', 'attacks', 'world?', 'programmers', 'these', 'always', 'buying', 'questions', 'wanted', 'sense', 'gets', 'course', 'move', 'making', 'years?', 'humans', 'website?', 'mind-blowing', 'every', 'answers', 'power', 'leave', 'photos', 'San', 'heard', 'methods', 'brain', 'someone', 'India', 'market?', 'time?', 'originated', 'God', 'book', 'difference', 'another', 'main', 'B2B', 'universe?', 'universities', "aren't"], ['use', 'too', 'got', 'reality', 'plausible', 'account?', 'myself', 'attention', 'part', 'despite', 'looking', 'English', 'lead', 'less', 'our', 'down', 'google', 'telling', 'it?', 'phone?', 'anything,', 'started', 'Chinese', 'big', 'doing', 'pursue', 'cloud', 'President', 'city', 'right', 'product?', 'call', 'which', 'Java', 'professors', 'population', 'American', 'currently', 'game', 'Quora', 'Why', 'Bay', 'large', 'nothing', 'phones?', 'system?', 'download', 'cases', 'countries', 'NBA'], ['investment', 'national', 'civil', 'invest', 'boyfriend', 'give', 'lives', 'growth', 'tools', 'videos', 'efforts', 'wrong', 'different', 'cannot', 'home', 'reading', 'dollars', 'startup', 'Twitter', 'will', 'true', 'shoot', 'favourite', 'be?', 'product', 'Can', 'projects', 'then', 'living', 'do?', 'apart', 'know?', '2012?', 'difficult', 'link', 'out?', 'soccer', 'with', 'earth?', 'attractive', 'Quora?', 'want', 'simple', 'wearing', 'choose', 'similar', 'common', 'provide', 'stay', 'except'], ['restaurants', 'some', 'team', 'meaning?', 'reasons', 'happens', 'top', 'girl', 'best', 'World', 'many', 'sexual', 'Bill', 'regarding', 'morally', 'running', 'married', 'marriage', 'system', 'meaning', 'ask', 'cold', 'moment', 'key', 'examples', 'when', 'work', 'young', 'vacation', 'maximum', 'century?', 'sites', 'experience', 'school?', 'platform', 'text', 'mothers', 'night', 'order', 'ever', "didn't", 'days?', 'story', 'facts', 'Should', 'first', 'ten', 'facebook', 'IITs', 'your']]
Explanation: Since the out list is randomized (see svf_list_ section) and because computing corr_big_list takes quite a while, I have saved a copy of a pair of out and corr_big_list as out_1 and corr_big_list_1 for convenience:
End of explanation
print('\n')
print('Min')
for i in range(len(corr_big_list_1)):
print(i, min(corr_big_list_1[i]))
print('\n')
print('Max')
for i in range(len(corr_big_list_1)):
print( i, max(corr_big_list_1[i]))
corr_big_list_1[0].index(0.1023288271726879)
out_1[0][20]
Explanation:
End of explanation
corr_big_list_1[13].index(max(corr_big_list_1[13]))
out_1[13][23]
corr_big_list_1[16].index(max(corr_big_list_1[16]))
out_1[16][26]
# Gives the correlation of `__ans__` with (a) a boolean column flagging questions whose
# text contains any of the given comma-separated words, and (b) that column times num_answers.
def CorrOR(str_):
split = str_.split(', ')
joined= '|'.join(split)
# indicator column: 1 if the question text matches any of the words (regex OR), else 0
combined_df = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(str(joined))) else 0)
prod_df = combined_df * data_df['num_answers']
return combined_df.corr(data_df.__ans__), prod_df.corr(data_df.__ans__)
CorrOR('university, within, mind-blowing')
Explanation: Ok, university does really well compared to the others, with a correlation of 0.10. What if we look at the combination of the top 3 words?
End of explanation
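The `CorrOR` helper above joins the words with `'|'`, which `str.contains` treats as a regular-expression alternation, so a row is flagged when any of the words appears. The indicator construction can be sketched in plain Python on toy data (the questions below are hypothetical); note that `CorrOR` itself does not escape the words, so entries such as `'physics?'` are silently interpreted as regex syntax:

```python
import re

words = ['university', 'within', 'mind-blowing']
# Escape each word so regex metacharacters (e.g. '?') are matched literally.
pattern = re.compile('|'.join(re.escape(w) for w in words))

questions = [
    "Which university is best for an MBA?",
    "What is the meaning of life?",
    "Share a mind-blowing fact.",
]
indicator = [1 if pattern.search(q) else 0 for q in questions]
print(indicator)  # [1, 0, 1]
```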
# First let's order them by their coefficient of correlation
more_words = []
for i in range(20):
idx_1 = i
idx_2 = corr_big_list_1[idx_1].index(max(corr_big_list_1[idx_1]))
more_words.append(out_1[idx_1][idx_2])  # use the saved out_1, which matches corr_big_list_1
more_words
more_words_str = ', '.join(more_words)
more_words_str
CorrOR(more_words_str)
CorrOR('university, physics?, single, mind, interesting, guy, instead, indian, across?, not, actually, technology, India, within, intelligence, sound, mind-blowing, cases, boyfriend, facts')
Explanation: Let's do even more combinations.
End of explanation
top_3_list = []
for i in range(0, 20):
list_= corr_big_list_1[i]
a_1 = [x for x in list_ if not x == max(list_)]
a_2 = [x for x in a_1 if not x == max(a_1)]
l_nn = [max(list_), max(a_1), max(a_2)]
top_3_list.append(l_nn)
top_3_list
more_top_words = []
for idx_1 in range(20):
for idx_2 in range(3):
idx = corr_big_list_1[idx_1].index(top_3_list[idx_1][idx_2])
more_top_words.append(out_1[idx_1][idx])
len(more_top_words)
more_top_words_str = ', '.join(more_top_words)
more_top_words_str
CorrOR(more_top_words_str)
Explanation: Notice that the second highest coefficient of correlation could be in the same list as the first highest one; if that is the case, we missed it when we picked the top 3 above. To avoid this, let's pick the top 3 in each list in corr_big_list_1.
End of explanation
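One caveat with the filtering above: `[x for x in list_ if not x == max(list_)]` removes *every* occurrence of the maximum, so a list whose two best values are tied would lose both, and the "second best" would really be the third. A duplicate-safe top-3, sketched on a toy stand-in for `corr_big_list_1`:

```python
# Sort a copy of each list in descending order and slice the first three entries.
corr_lists = [[0.02, 0.10, 0.10, 0.05], [0.01, 0.03, 0.02, 0.07]]  # toy data

top_3 = [sorted(lst, reverse=True)[:3] for lst in corr_lists]
print(top_3)  # [[0.1, 0.1, 0.05], [0.07, 0.03, 0.02]]
```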
more_top_words_ = [] #ran out of ideas for list names!
for idx_1 in range(20):
for idx_2 in range(2):
idx = corr_big_list_1[idx_1].index(top_3_list[idx_1][idx_2])
more_top_words_.append(out_1[idx_1][idx])
len(more_top_words_)
more_top_words_str_ = ', '.join(more_top_words_)
more_top_words_str_
CorrOR(more_top_words_str_)
Explanation: This is an improvement over the previous combination! What if we do only top 2?
End of explanation
data_df['topics']
def funn(x): # x is one question's list of topic dicts (Series.apply passes each element)
return sum(x[i]['followers'] for i in range(len(x)))
data_df['topics_followers'] = data_df['topics'].apply(funn)
data_df.drop(['topics'], axis =1, inplace=True)
Explanation: Ok so not as great as top 3 word combinations. I will create a feature for the top 3 (and other features explored in this and the previous notebook) in the next notebook.
Number of Followers
End of explanation
data_df['topics_followers'].corr(data_df.__ans__)
temp = data_df['topics_followers'] * data_df['anonymous']
temp.corr(data_df.__ans__)
temp = data_df['topics_followers'] * data_df['context_topic'].apply(lambda x: 0 if x==None else 1)
temp.corr(data_df.__ans__)
temp = data_df['topics_followers'] * data_df['num_answers'].apply(lambda x: 1 if x>=29 else 0)
temp.corr(data_df.__ans__)
temp_0 = data_df.question_text.apply(lambda x: len(x)) # number of characters
temp = data_df['topics_followers'] * temp_0.apply(lambda x: 1 if x <= 182 else 0)
temp.corr(data_df.__ans__)
Explanation: We take the sum of the followers of each topic a question appears under because, naturally, the chances of a question being viewed increase with the number of topics it appears under, garnering a wider audience.
End of explanation |
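The `funn` helper sums the `'followers'` field over each question's list of topic dicts; the same idea stand-alone, on hypothetical toy rows:

```python
# Each row is the list of topic dicts attached to one question.
rows = [
    [{'name': 'Physics', 'followers': 1200}, {'name': 'Science', 'followers': 5000}],
    [{'name': 'Startups', 'followers': 800}],
]
topics_followers = [sum(topic['followers'] for topic in row) for row in rows]
print(topics_followers)  # [6200, 800]
```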
9,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="https
Step1: Étude d'une boucle
Très souvent, on désire faire une action pour chaque élément d'une liste. Par exemple, afficher les carrés des nombres de 1 à 10. Ceci peut être fait grâce une boucle for
Step2: Cette boucle peut se lire "pour chaque élément x dans la liste l
Step3: On utilise ici le formatage de chaîne de caractère dans le print (avec un f devant la str), que nous avons vu au TP 2.
La syntaxe générale de la boucle forest
Step4: <div class="alert alert-block alert-info bilan">
* le bloc d'instruction est **répété** autant de fois que ...
* la variable de boucle ...
* si on oublie les deux points, à la fin de la ligne du `for`, le python ...
* le bloc d'instruction doit être ...
</div>
Exercices d'application
<div class="alert alert-block alert-danger travail">
**Ex1.0 - Triple opposition**
Pour les entiers pairs de 2 à 10 inclus, affichez à l'aide d'une boucle le triple du nombre puis son opposé. Pour les premiers entiers cela doit ressembler à
Step5: <div class="alert alert-block alert-danger travail">
**Ex1.2 attention au BUG
Step6: <div class="alert alert-block alert-danger travail">
**Ex1.3 - Maximum for ever**
Reprendre le calcul du maximum d'une liste, mais cette fois-ci à l'aide d'une boucle `for` et il doit marcher quelle que soit la liste d'entiers non vide !!</div>
Step7: <div class="alert alert-block alert-danger travail">
**Ex1.4 - Donnez-nous un indice**
Même chose avec l'indice du maximum d'une liste (prendre le premier indice possible)
Step8: Enchaîner les boucles for
Les instructions sont effectuées à la suite, c'est aussi le cas pour les boucles for.
Il faut qu'une boucle soit terminée pour passer à la suite du code.
<div class="alert alert-block alert-danger travail">
**Ex1.5 - for puis for**
Essayez de prévoir ce qui va s'écrire avant d'exécuter le code.
Step9: Boucles imbriquées
Si on met une boucle dans le bloc d'instruction d'une autre boucle, c'est ce qu'on appelle des boucles imbriquées.
Dans ce cas la boucle "extérieure" va répéter la boucle "intérieure" en commençant à chaque fois depuis son début.
Step10: <div class="alert alert-block alert-danger travail">
**Ex1.6 - In its inner for**
Try to predict the difference with the following loop before running it
Step11: <div class="alert alert-block alert-danger travail">
**Ex1.7 - The b-a ba**
Use nested loops to write
aa
ab
ba
bb
ca
cb
## Inside the block or outside the block?
Putting an instruction inside the for's instruction block, or outside of it, changes everything! Compare the following two pieces of code
Step12: In the first version, you have to wait 20 minutes in total!...
Loops in sequence, nested loops, which instructions go inside and outside the loop? Follow the instructions for each piece of code.
<div class="alert alert-block alert-danger travail">
**Ex2.0 - Desserts one after another**
1. create a list of your favourite desserts
2. display, for each of these desserts, a sentence saying that you love it (with a loop!!)
3. then, for each of these desserts, a sentence saying that you want more.
</div>
Step13: <div class="alert alert-block alert-danger travail">
**Ex2.1 - Immediate desserts**
This time, write the two sentences for each dessert before moving on to the next dessert.
</div>
<div class="alert alert-block alert-danger travail">
**Ex2.2 - Nested greetings**
Write a piece of code using the lists l and m below which displays
Step14: The range function
The built-in range function is a bit peculiar but very useful. It returns a kind of list of the integers from 0 to n-1, i.e. n consecutive integers starting at 0.
Step15: Everything happens as if we had written "by hand"
Step16: Be careful though: range(10) is not exactly a list, it is an iterator; this goes beyond the scope of this TP for now, but you should know that it can be converted to a list.
Step17: As you can see, the type of range(10) is not list but range. That said, we don't really need to know what it is in order to use it. We can convert it to a list
Step18: <div class="alert alert-block alert-info bilan">
`range(n)` where n is an integer is "a kind of" list of the integers going from ...
We can | Python Code:
# Run this cell!
from IPython.core.display import HTML
styles = "<style>\n.travail {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2018/01/04/16/53/building-3061124_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n\n.bilan {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2016/10/18/19/40/anatomy-1751201_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n</style>"
HTML(styles)
Explanation: <img src="https://live.staticflickr.com/7726/17042914871_af2828b43c_c_d.jpg" align=center width="400">
SAÉ 03 - TP4 - The for loop
Welcome to this Jupyter notebook, used to prepare the SAÉ "Traitement numérique du signal" (digital signal processing) of the RT department of Vélizy. These notebooks are adapted from the ones offered by the CS department and its great CS teachers.
This TP is devoted to the for loop in Python, which we apply to lists, or use to repeat instructions when the number of repetitions needed is known in advance.
End of explanation
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for x in l:
print(x**2)
Explanation: Study of a loop
Very often, we want to perform an action for each element of a list. For example, displaying the squares of the numbers from 1 to 10. This can be done with a for loop:
End of explanation
animaux = ["chat", "chien", "pingouin", "lézard"]
for animal in animaux:
print(f"le {animal} donne la papate")
Explanation: This loop can be read as "for each element x in the list l: display x squared".
Another example:
End of explanation
for nom_variable in nom_liste:
bloc_d_instructions
Explanation: Here we use string formatting inside the print (with an f in front of the str), which we saw in TP 2.
The general syntax of the for loop is:
End of explanation
somme = 0
for x in ...
print(somme) # in principle this gives 55!
Explanation: <div class="alert alert-block alert-info bilan">
* the instruction block is **repeated** as many times as ...
* the loop variable ...
* if you forget the colon at the end of the `for` line, Python ...
* the instruction block must be ...
</div>
Application exercises
<div class="alert alert-block alert-danger travail">
**Ex1.0 - Triple opposition**
For the even integers from 2 to 10 inclusive, use a loop to display three times the number and then its opposite. For the first integers it should look like:
6 -2
12 -4
etc
</div>
<div class="alert alert-block alert-danger travail">
**Ex1.1 - Summing without getting tired**
Complete the code below, whose goal is to compute the sum of the integers from 1 to 10
S = 1 + 2 + 3 + .. + 10
</div>
End of explanation
l = [0]
for x in l:
l.append(x+1)
print(l)
Explanation: <div class="alert alert-block alert-danger travail">
**Ex1.2 - beware of the BUG:**
**SAVE YOUR WORK** before running the following code, because it will never stop... it is going to hang! You risk losing your work. To avoid that, you can click in the menu on "Kernel" > "Interrupt" ("Noyau" > "Interrompre") to stop code that does not terminate.
Try to understand why it never stops, and why you must **avoid** doing this kind of thing.</div>
End of explanation
l = [3, 5, 2, 8, 6]
maxi = l[0] # initialize the maximum with the first value of the list
# To complete
print("le maximum vaut", maxi)
Explanation: <div class="alert alert-block alert-danger travail">
**Ex1.3 - Maximum for ever**
Redo the computation of the maximum of a list, this time with a `for` loop, and it must work for any non-empty list of integers!!</div>
End of explanation
l = [3, 5, 2, 8, 6, 4, 8, 3, 4]
imaxi = 0 # initialize the maximum with the first index of the list
# To complete
print("l'indice du maximum vaut", imaxi)
Explanation: <div class="alert alert-block alert-danger travail">
**Ex1.4 - Give us an index**
Same thing, but with the index of the maximum of a list (take the first possible index)
End of explanation
for x in ['a', 'b', 'c']:
print("boucle 1")
print(x)
print("entre les deux boucles")
for y in ['x', 'y', 'z']:
print("boucle 2")
print(y)
Explanation: Chaining for loops
The instructions are executed one after the other, and this is also the case for for loops.
One loop must be finished before moving on to the rest of the code.
<div class="alert alert-block alert-danger travail">
**Ex1.5 - for then for**
Try to predict what will be printed before running the code.
End of explanation
for animal in ['koala', 'paresseux', 'maki', 'ouistiti']:
for adjectif in ['mignon', 'malin', 'malicieux']:
print(animal + " " + adjectif)
Explanation: Nested loops
If you put a loop inside the instruction block of another loop, this is what we call nested loops.
In that case the "outer" loop repeats the "inner" loop, starting it again from the beginning each time.
End of explanation
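One way to see what nesting does: the inner block runs once per (outer, inner) pair, i.e. len(outer) * len(inner) times in total. A small check:

```python
outer = ['koala', 'paresseux', 'maki', 'ouistiti']
inner = ['mignon', 'malin', 'malicieux']

count = 0
for animal in outer:
    for adjectif in inner:
        count += 1  # one line printed per (animal, adjectif) pair

print(count)  # 12, i.e. len(outer) * len(inner)
```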
for adjectif in ['mignon', 'malin', 'malicieux']:
for animal in ['koala', 'paresseux', 'maki', 'ouistiti']:
print(animal + " " + adjectif)
Explanation: <div class="alert alert-block alert-danger travail">
**Ex1.6 - In its inner for**
Try to predict the difference with the following loop before running it:</div>
End of explanation
for animal in ['koala', 'paresseux', 'maki', 'ouistiti']:
print("le " + animal +" fait la sieste")
print("attendez cinq minutes !")
for animal in ['koala', 'paresseux', 'maki', 'ouistiti']:
print("le " + animal +" fait la sieste")
print("attendez cinq minutes !")
Explanation: <div class="alert alert-block alert-danger travail">
**Ex1.7 - The b-a ba**
Use nested loops to write
aa
ab
ba
bb
ca
cb
## Inside the block or outside the block?
Putting an instruction inside the for's instruction block, or outside of it, changes everything! Compare the following two pieces of code:
End of explanation
desserts = ["sorbet fraise", "oréo", "pomme"] # to complete
Explanation: In the first version, you have to wait 20 minutes in total!...
Loops in sequence, nested loops, which instructions go inside and outside the loop? Follow the instructions for each piece of code.
<div class="alert alert-block alert-danger travail">
**Ex2.0 - Desserts one after another**
1. create a list of your favourite desserts
2. display, for each of these desserts, a sentence saying that you love it (with a loop!!)
3. then, for each of these desserts, a sentence saying that you want more.
</div>
End of explanation
l = ["a", "b", "c"]
m = [1, 2, 3]
Explanation: <div class="alert alert-block alert-danger travail">
**Ex2.1 - Immediate desserts**
This time, write the two sentences for each dessert before moving on to the next dessert.
</div>
<div class="alert alert-block alert-danger travail">
**Ex2.2 - Nested greetings**
Write a piece of code using the lists l and m below which displays:
`coucou a1 ciao`
`coucou a2 ciao`
`coucou a3 ciao`
`coucou b1 ciao`
`coucou b2 ciao`
...
`coucou c3 ciao`
</div>
End of explanation
for i in range(10):
print(i)
Explanation: The range function
The built-in range function is a bit peculiar but very useful. It returns a kind of list of the integers from 0 to n-1, i.e. n consecutive integers starting at 0.
End of explanation
for i in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]:
print(i)
Explanation: Everything happens as if we had written "by hand":
End of explanation
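The equivalence between range(n) and the hand-written list can be checked directly (a small sketch):

```python
squares = []
for i in range(5):
    squares.append(i ** 2)

print(squares)         # [0, 1, 4, 9, 16]
print(list(range(5)))  # [0, 1, 2, 3, 4]
```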
print(range(10))
print(type(range(10)))
Explanation: Be careful though: range(10) is not exactly a list, it is an iterator; this goes beyond the scope of this TP for now, but you should know that it can be converted to a list.
End of explanation
l = list(range(10))
print(l)
print(type(l))
Explanation: As you can see, the type of range(10) is not list but range. That said, we don't really need to know what it is in order to use it. We can convert it to a list:
End of explanation
list(range(12))
list(range(2, 12))
list(range(12, 2))
list(range(2, 12, 1))
list(range(2, 12, 2))
list(range(1, 12, 5))
list(range(2, 12, 5))
list(range(1, 12, -1))
list(range(10, 2, -1))
Explanation: <div class="alert alert-block alert-info bilan">
`range(n)` where n is an integer is "a kind of" list of the integers going from ...
We can:
* use it in a loop to count from ...
* convert it to ...
</div>
<div class="alert alert-block alert-danger travail">
**Ex3.0 - Range a little!**
Create a list of the integers from 0 to 12 (12 excluded)
</div>
<div class="alert alert-block alert-danger travail">
**Ex3.1 - Range again!**
Create a list of the integers from 0 to 20 (20 included)
</div>
<div class="alert alert-block alert-danger travail">
**Ex3.2 - Hello everyone!**
Using a for loop, display the sentences "hello0", "hello1", ... for the integers from 0 to 5 (excluded)
</div>
<div class="alert alert-block alert-danger travail">
**Ex3.3 - Parameters of variable geometry**
`range` can be called with 1, 2 or 3 parameters. Try to work out for yourself what they do:
End of explanation |
9,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Effect of cancelling a process zero
The following exercise is taken from Åström & Wittenmark (problem 5.3)
Consider the system with pulse-transfer function
$$ H(z) = \frac{z+0.7}{z^2 - 1.8z + 0.81}.$$
Use polynomial design to determine a controller such that the closed-loop system has the characteristic polynomial $$ A_c = z^2 -1.5z + 0.7. $$
Let the observer polynomial have as low order as possible, and place all observer poles in the origin (dead-beat observer). Consider the following two cases
(a) The process zero is cancelled
(b) The process zero is not cancelled.
Simulate the two cases and discuss the differences between the two controllers. Which one should be preferred?
(c) Design an incremental controller for the system
Step1: Checking the poles
Before solving the problem, let's look at the location of the poles of the plant and the desired closed-loop system.
Step2: So, the plant has a double pole in $z=0.9$, and the desired closed-loop system has complex-conjugated poles in $z=0.75 \pm i0.37$.
(a)
The feedback controller $F_b(z)$
The plant has numerator polynomial $B(z) = z+0.7$ and denominator polynomial $A(z) = z^2 - 1.8z + 0.81$. With the feedback controller $$F_b(z) = \frac{S(z)}{R(z)}$$ and feedforward $$F_f(z) = \frac{T(z)}{R(z)}$$ the closed-loop pulse-transfer function from the command signal to the output becomes
$$ H_{c}(z) = \frac{\frac{T(z)}{R(z)} \frac{B(z)}{A(z)}}{1 + \frac{B(z)}{A(z)}\frac{S(z)}{R(z)}} = \frac{T(z)(z+0.7)}{A(z)R(z) + S(z)(z+0.7)}.$$
To cancel the process zero, $z+0.7$ should be a factor of $R(z)$. Write $R(z)= \bar{R}(z)(z+0.7)$ to obtain the Diophantine equation
$$ A(z)\bar{R}(z) + S(z) = A_c(z)A_o(z).$$
Let's try to find a minimum-order controller that solves the Diophantine equation. The degree of the left hand side (and hence also of the right-hand side) is
$$ \deg (A\bar{R} + S) = \deg A + \deg \bar{R} = 2 + \deg\bar{R}.$$
The number of equations obtained when setting the coefficients of the left- and right-hand side equal is the same as the degree of the polynomials on each side (taking into account that the leading coefficient is 1, by convention).
The feedback controller can be written
$$ F_b(z) = \frac{S(z)}{R(z)} = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{(z+0.7)(z^{n-1} + r_1z^{n-2} + \cdots + r_{n-1}}, $$
which has $(n-1) + (n+1) = 2n$ unknown parameters, where $n = \deg\bar{R} + 1$.
So to obtain a Diophantine equation which gives exactly as many equations in the coefficients as unknowns, we must have
$$ 2 + \deg\bar{R} = 2\deg\bar{R} + 2 \quad \Rightarrow \quad \deg\bar{R} = 0.$$
Thus, the controller becomes
$$ F_b(z) = \frac{s_0z + s_1}{z+0.7}, $$
and the Diophantine equation
$$ z^2 - 1.8z + 0.81 + (s_0z + s_1) = z^2 - 1.5z + 0.7$$
$$ z^2 - (1.8-s_0)z + (0.81 + s_1) = z^2 - 1.5z + 0.7, $$
with solution
$$ s_0 = 1.8 - 1.5 = 0.3, \qquad s_1 = 0.7-0.81 = -0.11. $$
The right hand side of the Diophantine equation consists only of the desired characteristic polynomial $A_c(z)$, and the observer polynomial is $A_o(z) = 1$, in order for the degrees of the left- and right hand side to be the same.
Let's verify by calculation using SymPy.
Step3: The feedforward controller $F_f(z)$
Part of the methodology of the polynomial design, is that the forward controller $F_f(z) = \frac{T(z)}{R(z)}$ should cancel the observer poles, so we set $T(z) = t_0A_o(z)$. In case (a) the observer poynomial is simply $A_o(z)=1$. However, since $R(z)=z+0.7$, we can choose $T(z) = t_0z$ and still have a causal controller $F_f(z)$.
The scalar factor $t_0$ is chosen to obtain unit DC-gain of $H_c(z)$, hence
$$ H_c(1) = \frac{t_0}{A_c(1)} = 1 \quad \Rightarrow \quad t_0 = A_c(1) = 1-1.5+0.7 = 0.2$$
Simulate
Let's simulate a step-responses from the command signal, and plot both the output and the control signal. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import control
import sympy as sy
Explanation: Effect of cancelling a process zero
The following exercise is taken from Åström & Wittenmark (problem 5.3)
Consider the system with pulse-transfer function
$$ H(z) = \frac{z+0.7}{z^2 - 1.8z + 0.81}.$$
Use polynomial design to determine a controller such that the closed-loop system has the characteristic polynomial $$ A_c = z^2 -1.5z + 0.7. $$
Let the observer polynomial have as low order as possible, and place all observer poles in the origin (dead-beat observer). Consider the following two cases
(a) The process zero is cancelled
(b) The process zero is not cancelled.
Simulate the two cases and discuss the differences between the two controllers. Which one should be preferred?
(c) Design an incremental controller for the system
End of explanation
H = control.tf([1, 0.7], [1, -1.8, 0.81], 1)
control.pzmap(H)
z = sy.symbols("z", real=False)
Ac = sy.Poly(z**2 - 1.5*z + 0.7,z)
sy.roots(Ac)
Explanation: Checking the poles
Before solving the problem, let's look at the location of the poles of the plant and the desired closed-loop system.
End of explanation
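The desired closed-loop poles can also be double-checked by hand with the quadratic formula (a minimal sketch, independent of SymPy):

```python
import cmath

# Roots of the desired characteristic polynomial z^2 - 1.5 z + 0.7
a, b, c = 1.0, -1.5, 0.7
disc = cmath.sqrt(b * b - 4 * a * c)
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # approximately 0.75 +/- 0.37j
```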
s0,s1 = sy.symbols("s0, s1")
A = sy.Poly(z**2 -1.8*z + 0.81, z)
B = sy.Poly(z + 0.7, z)
S = sy.Poly(s0*z + s1, z)
Ac = sy.Poly(z**2 - 1.5*z + 0.7, z)
Ao = sy.Poly(1, z)
# Diophantine equation
Dioph = A + S - Ac*Ao
# Extract the coefficients
Dioph_coeffs = Dioph.all_coeffs()
# Solve for s0 and s1
sol = sy.solve(Dioph_coeffs, (s0,s1))
print('s_0 = %f' % sol[s0])
print('s_1 = %f' % sol[s1])
Explanation: So, the plant has a double pole in $z=0.9$, and the desired closed-loop system has complex-conjugated poles in $z=0.75 \pm i0.37$.
(a)
The feedback controller $F_b(z)$
The plant has numerator polynomial $B(z) = z+0.7$ and denominator polynomial $A(z) = z^2 - 1.8z + 0.81$. With the feedback controller $$F_b(z) = \frac{S(z)}{R(z)}$$ and feedforward $$F_f(z) = \frac{T(z)}{R(z)}$$ the closed-loop pulse-transfer function from the command signal to the output becomes
$$ H_{c}(z) = \frac{\frac{T(z)}{R(z)} \frac{B(z)}{A(z)}}{1 + \frac{B(z)}{A(z)}\frac{S(z)}{R(z)}} = \frac{T(z)(z+0.7)}{A(z)R(z) + S(z)(z+0.7)}.$$
To cancel the process zero, $z+0.7$ should be a factor of $R(z)$. Write $R(z)= \bar{R}(z)(z+0.7)$ to obtain the Diophantine equation
$$ A(z)\bar{R}(z) + S(z) = A_c(z)A_o(z).$$
Let's try to find a minimum-order controller that solves the Diophantine equation. The degree of the left hand side (and hence also of the right-hand side) is
$$ \deg (A\bar{R} + S) = \deg A + \deg \bar{R} = 2 + \deg\bar{R}.$$
The number of equations obtained when setting the coefficients of the left- and right-hand side equal is the same as the degree of the polynomials on each side (taking into account that the leading coefficient is 1, by convention).
The feedback controller can be written
$$ F_b(z) = \frac{S(z)}{R(z)} = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{(z+0.7)(z^{n-1} + r_1z^{n-2} + \cdots + r_{n-1}}, $$
which has $(n-1) + (n+1) = 2n$ unknown parameters, where $n = \deg\bar{R} + 1$.
So to obtain a Diophantine equation which gives exactly as many equations in the coefficients as unknowns, we must have
$$ 2 + \deg\bar{R} = 2\deg\bar{R} + 2 \quad \Rightarrow \quad \deg\bar{R} = 0.$$
Thus, the controller becomes
$$ F_b(z) = \frac{s_0z + s_1}{z+0.7}, $$
and the Diophantine equation
$$ z^2 - 1.8z + 0.81 + (s_0z + s_1) = z^2 - 1.5z + 0.7$$
$$ z^2 - (1.8-s_0)z + (0.81 + s_1) = z^2 - 1.5z + 0.7, $$
with solution
$$ s_0 = 1.8 - 1.5 = 0.3, \qquad s_1 = 0.7-0.81 = -0.11. $$
The right hand side of the Diophantine equation consists only of the desired characteristic polynomial $A_c(z)$, and the observer polynomial is $A_o(z) = 1$, in order for the degrees of the left- and right hand side to be the same.
Let's verify by calculation using SymPy.
End of explanation
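As a quick sanity check independent of SymPy, the solved coefficients can be verified by matching the polynomial coefficients of $A(z) + S(z)$ against $A_c(z)$ directly:

```python
# A(z) + S(z) = z^2 + (s0 - 1.8) z + (0.81 + s1) must equal z^2 - 1.5 z + 0.7
s0, s1 = 0.3, -0.11

lhs = [1.0, s0 - 1.8, 0.81 + s1]
rhs = [1.0, -1.5, 0.7]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("Diophantine equation satisfied")
```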
t0 = float(Ac.eval(1))
Scoeffs = [float(sol[s0]), float(sol[s1])]
Rcoeffs = [1, 0.7]
Fb = control.tf(Scoeffs, Rcoeffs, 1)
Ff = control.tf([t0], Rcoeffs, 1)
Hc = Ff * control.feedback(H, Fb) # From command-signal to output
Hcu = Ff * control.feedback(1, Fb*H)
tvec = np.arange(40)
(t1, y1) = control.step_response(Hc,tvec)
plt.figure(figsize=(14,4))
plt.step(t1, y1[0])
plt.xlabel('k')
plt.ylabel('y')
plt.title('Output')
(t1, y1) = control.step_response(Hcu,tvec)
plt.figure(figsize=(14,4))
plt.step(t1, y1[0])
plt.xlabel('k')
plt.ylabel('y')
plt.title('Control signal')
Explanation: The feedforward controller $F_f(z)$
Part of the methodology of the polynomial design, is that the forward controller $F_f(z) = \frac{T(z)}{R(z)}$ should cancel the observer poles, so we set $T(z) = t_0A_o(z)$. In case (a) the observer poynomial is simply $A_o(z)=1$. However, since $R(z)=z+0.7$, we can choose $T(z) = t_0z$ and still have a causal controller $F_f(z)$.
The scalar factor $t_0$ is chosen to obtain unit DC-gain of $H_c(z)$, hence
$$ H_c(1) = \frac{t_0}{A_c(1)} = 1 \quad \Rightarrow \quad t_0 = A_c(1) = 1-1.5+0.7 = 0.2$$
Simulate
Let's simulate step responses from the command signal, and plot both the output and the control signal.
End of explanation |
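The choice $t_0 = A_c(1)$ can likewise be checked numerically: with the process zero and the observer dynamics cancelled, $H_c(z) = t_0 z / A_c(z)$, so the DC gain is $t_0 / A_c(1)$. A minimal sketch:

```python
def Ac(z):
    return z**2 - 1.5*z + 0.7

t0 = Ac(1)                     # = 0.2
dc_gain = t0 * 1 / Ac(1)       # H_c evaluated at z = 1
print(round(t0, 10), dc_gain)  # 0.2 1.0
```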
9,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Random Forests
About this course
Teaching approach
This course is being taught by Jeremy Howard, and was developed by Jeremy along with Rachel Thomas. Rachel has been dealing with a life-threatening illness so will not be teaching as originally planned this year.
Jeremy has worked in a number of different areas - feel free to ask about anything that he might be able to help you with at any time, even if not directly related to the current topic
Step1: Introduction to Blue Book for Bulldozers
About...
...our teaching
At fast.ai we have a distinctive teaching philosophy of "the whole game". This is different from how most traditional math & technical courses are taught, where you have to learn all the individual elements before you can combine them (Harvard professor David Perkins call this elementitis), but it is similar to how topics like driving and baseball are taught. That is, you can start driving without knowing how an internal combustion engine works, and children begin playing baseball before they learn all the formal rules.
...our approach to machine learning
Most machine learning courses will throw at you dozens of different algorithms, with a brief technical description of the math behind them, and maybe a toy example. You're left confused by the enormous range of techniques shown and have little practical understanding of how to apply them.
The good news is that modern machine learning can be distilled down to a couple of key techniques that are of very wide applicability. Recent studies have shown that the vast majority of datasets can be best modeled with just two methods
Step2: In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. Even if you've read descriptions about your data, the actual data may not be what you expect.
Step3: It's important to note what metric is being used for a project. Generally, selecting the metric(s) is an important part of the project setup. However, in this case Kaggle tells us what metric to use
Step4: Initial processing
Step5: This dataset contains a mix of continuous and categorical variables.
The following method extracts particular date fields from a complete datetime for the purpose of constructing categoricals. You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities.
Step6: The categorical variables are currently stored as strings, which is inefficient, and doesn't provide the numeric coding required for a random forest. Therefore we call train_cats to convert strings to pandas categories.
Step7: We can specify the order to use for categorical variables if we wish
Step8: We're still not quite done - for instance we have lots of missing values, which we can't pass directly to a random forest.
Step9: But let's save this file for now, since it's already in a format that can be stored and accessed efficiently.
Step10: Pre-processing
In the future we can simply read it from this fast format.
Step11: We'll replace categories with their numeric codes, handle missing continuous values, and split the dependent variable into a separate variable.
Step12: We now have something we can pass to a random forest!
Step13: todo define r^2
Wow, an r^2 of 0.98 - that's great, right? Well, perhaps not...
Possibly the most important idea in machine learning is that of having separate training & validation data sets. As motivation, suppose you don't divide up your data, but instead use all of it. And suppose you have lots of parameters
Step14: Random Forests
Base model
Let's try our model again, this time with separate training and validation sets.
Step15: An r^2 in the high-80's isn't bad at all (and the RMSLE puts us around rank 100 of 470 on the Kaggle leaderboard), but we can see from the validation set score that we're over-fitting badly. To understand this issue, let's simplify things down to a single small tree.
Speeding things up
Step16: Single tree
Step17: Let's see what happens if we create a bigger tree.
Step18: The training set result looks great! But the validation set is worse than our original model. This is why we need to use bagging of multiple trees to get more generalizable results.
Bagging
Intro to bagging
To learn about bagging in random forests, let's start with our basic model again.
Step19: We'll grab the predictions for each individual tree, and look at one example.
Step20: The shape of this curve suggests that adding more trees isn't going to help us much. Let's check. (Compare this to our original model on a sample)
Step21: Out-of-bag (OOB) score
Is our validation set worse than our training set because we're over-fitting, or because the validation set is for a different time period, or a bit of both? With the existing information we've shown, we can't tell. However, random forests have a very clever trick called out-of-bag (OOB) error which can handle this (and more!)
The idea is to calculate error on the training set, but only include the trees in the calculation of a row's error where that row was not included in training that tree. This allows us to see whether the model is over-fitting, without needing a separate validation set.
This also has the benefit of allowing us to see whether our model generalizes, even if we only have a small amount of data so want to avoid separating some out to create a validation set.
This is as simple as adding one more parameter to our model constructor. We print the OOB error last in our print_score function below.
Step22: This shows that our validation set time difference is making an impact, as is model over-fitting.
Reducing over-fitting
Subsampling
It turns out that one of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis
Step23: The basic idea is this
Step24: Since each additional tree allows the model to see more data, this approach can make additional trees more useful.
Step25: Tree building parameters
We revert to using a full bootstrap sample in order to show the impact of other over-fitting avoidance methods.
Step26: Let's get a baseline for this full set to compare to.
Step27: Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with min_samples_leaf) that we require some minimum number of rows in every leaf node. This has two benefits
Step28: We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of columns for each split. We do this by specifying max_features, which is the proportion of features to randomly select from at each split.
None
0.5
'sqrt'
1, 3, 5, 10, 25, 100 | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
PATH = "data/bulldozers/"
!ls {PATH}
Explanation: Intro to Random Forests
About this course
Teaching approach
This course is being taught by Jeremy Howard, and was developed by Jeremy along with Rachel Thomas. Rachel has been dealing with a life-threatening illness so will not be teaching as originally planned this year.
Jeremy has worked in a number of different areas - feel free to ask about anything that he might be able to help you with at any time, even if not directly related to the current topic:
Management consultant (McKinsey; AT Kearney)
Self-funded startup entrepreneur (Fastmail: first consumer synchronized email; Optimal Decisions: first optimized insurance pricing)
VC-funded startup entrepreneur: (Kaggle; Enlitic: first deep-learning medical company)
I'll be using a top-down teaching method, which is different from how most math courses operate. Typically, in a bottom-up approach, you first learn all the separate components you will be using, and then you gradually build them up into more complex structures. The problems with this are that students often lose motivation, don't have a sense of the "big picture", and don't know what they'll need.
If you took the fast.ai deep learning course, that is what we used. You can hear more about my teaching philosophy in this blog post or in this talk.
Harvard Professor David Perkins has a book, Making Learning Whole in which he uses baseball as an analogy. We don't require kids to memorize all the rules of baseball and understand all the technical details before we let them play the game. Rather, they start playing with a just general sense of it, and then gradually learn more rules/details as time goes on.
All that to say, don't worry if you don't understand everything at first! You're not supposed to. We will start using some "black boxes" such as random forests that haven't yet been explained in detail, and then we'll dig into the lower level details later.
To start, focus on what things DO, not what they ARE.
Your practice
People learn by:
1. doing (coding and building)
2. explaining what they've learned (by writing or helping others)
Therefore, we suggest that you practice these skills on Kaggle by:
1. Entering competitions (doing)
2. Creating Kaggle kernels (explaining)
It's OK if you don't get good competition ranks or any kernel votes at first - that's totally normal! Just try to keep improving every day, and you'll see the results over time.
To get better at technical writing, study the top ranked Kaggle kernels from past competitions, and read posts from well-regarded technical bloggers. Some good role models include:
Peter Norvig (more here)
Stephen Merity
Julia Evans (more here)
Julia Ferraioli
Edwin Chen
Slav Ivanov (fast.ai student)
Brad Kenstler (fast.ai and USF MSAN student)
Books
The more familiarity you have with numeric programming in Python, the better. If you're looking to improve in this area, we strongly suggest Wes McKinney's Python for Data Analysis, 2nd ed.
For machine learning with Python, we recommend:
Introduction to Machine Learning with Python: From one of the scikit-learn authors, which is the main library we'll be using
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition: New version of a very successful book. A lot of the new material however covers deep learning in Tensorflow, which isn't relevant to this course
Hands-On Machine Learning with Scikit-Learn and TensorFlow
Syllabus in brief
Depending on time and class interests, we'll cover something like (not necessarily in this order):
Train vs test
Effective validation set construction
Trees and ensembles
Creating random forests
Interpreting random forests
What is ML? Why do we use it?
What makes a good ML project?
Structured vs unstructured data
Examples of failures/mistakes
Feature engineering
Domain specific - dates, URLs, text
Embeddings / latent factors
Regularized models trained with SGD
GLMs, Elasticnet, etc (NB: see what James covered)
Basic neural nets
PyTorch
Broadcasting, Matrix Multiplication
Training loop, backpropagation
KNN
CV / bootstrap (Diabetes data set?)
Ethical considerations
Skip:
Dimensionality reduction
Interactions
Monitoring training
Collaborative filtering
Momentum and LR annealing
Imports
End of explanation
df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False,
parse_dates=["saledate"])
Explanation: Introduction to Blue Book for Bulldozers
About...
...our teaching
At fast.ai we have a distinctive teaching philosophy of "the whole game". This is different from how most traditional math & technical courses are taught, where you have to learn all the individual elements before you can combine them (Harvard professor David Perkins calls this elementitis), but it is similar to how topics like driving and baseball are taught. That is, you can start driving without knowing how an internal combustion engine works, and children begin playing baseball before they learn all the formal rules.
...our approach to machine learning
Most machine learning courses will throw at you dozens of different algorithms, with a brief technical description of the math behind them, and maybe a toy example. You're left confused by the enormous range of techniques shown and have little practical understanding of how to apply them.
The good news is that modern machine learning can be distilled down to a couple of key techniques that are of very wide applicability. Recent studies have shown that the vast majority of datasets can be best modeled with just two methods:
Ensembles of decision trees (i.e. Random Forests and Gradient Boosting Machines), mainly for structured data (such as you might find in a database table at most companies)
Multi-layered neural networks learnt with SGD (i.e. shallow and/or deep learning), mainly for unstructured data (such as audio, vision, and natural language)
In this course we'll be doing a deep dive into random forests, and simple models learnt with SGD. You'll be learning about gradient boosting and deep learning in part 2.
...this dataset
We will be looking at the Blue Book for Bulldozers Kaggle Competition: "The goal of the contest is to predict the sale price of a particular piece of heavy equipment at auction based on its usage, equipment type, and configuration. The data is sourced from auction result postings and includes information on usage and equipment configurations."
This is a very common type of dataset and prediction problem, and similar to what you may see in your project or workplace.
...Kaggle Competitions
Kaggle is an awesome resource for aspiring data scientists or anyone looking to improve their machine learning skills. There is nothing like being able to get hands-on practice and receiving real-time feedback to help you improve your skills.
Kaggle provides:
Interesting data sets
Feedback on how you're doing
A leader board to see what's good, what's possible, and what's state-of-art.
Blog posts by winning contestants share useful tips and techniques.
The data
Look at the data
Kaggle provides info about some of the fields of our dataset; on the Kaggle Data info page they say the following:
For this competition, you are predicting the sale price of bulldozers sold at auctions. The data for this competition is split into three parts:
Train.csv is the training set, which contains data through the end of 2011.
Valid.csv is the validation set, which contains data from January 1, 2012 - April 30, 2012 You make predictions on this set throughout the majority of the competition. Your score on this set is used to create the public leaderboard.
Test.csv is the test set, which won't be released until the last week of the competition. It contains data from May 1, 2012 - November 2012. Your score on the test set determines your final rank for the competition.
The key fields are in train.csv are:
SalesID: the unique identifier of the sale
MachineID: the unique identifier of a machine. A machine can be sold multiple times
saleprice: what the machine sold for at auction (only provided in train.csv)
saledate: the date of the sale
Question
What stands out to you from the above description? What needs to be true of our training and validation sets?
End of explanation
def display_all(df):
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
display_all(df_raw.tail().transpose())
display_all(df_raw.describe(include='all').transpose())
Explanation: In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. Even if you've read descriptions about your data, the actual data may not be what you expect.
End of explanation
df_raw.SalePrice = np.log(df_raw.SalePrice)
Explanation: It's important to note what metric is being used for a project. Generally, selecting the metric(s) is an important part of the project setup. However, in this case Kaggle tells us what metric to use: RMSLE (root mean squared log error) between the actual and predicted auction prices. Therefore we take the log of the prices, so that RMSE will give us what we need.
End of explanation
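To see why logging the prices works, here is a small stdlib-only sketch (made-up numbers, not competition data) showing that RMSLE is just RMSE computed on the logs, which measures relative rather than absolute error:

```python
import math

def rmse(preds, targs):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targs)) / len(preds))

def rmsle(preds, targs):
    # RMSLE is literally RMSE on the logs, which is why the notebook
    # logs SalePrice once up front and then uses plain RMSE.
    return rmse([math.log(p) for p in preds], [math.log(t) for t in targs])

prices = [10000.0, 20000.0, 40000.0]   # made-up sale prices
preds = [12000.0, 18000.0, 41000.0]

# Relative errors are what matter: rescaling every price changes nothing.
assert abs(rmsle(preds, prices) - rmsle([2 * p for p in preds], [2 * t for t in prices])) < 1e-9
```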
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice)
Explanation: Initial processing
End of explanation
add_datepart(df_raw, 'saledate')
df_raw.saleYear.head()
Explanation: This dataset contains a mix of continuous and categorical variables.
The following method extracts particular date fields from a complete datetime for the purpose of constructing categoricals. You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities.
End of explanation
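For intuition, here is a rough, library-free sketch of the kind of fields derived from a date (the helper name is illustrative; the real add_datepart adds more, such as elapsed time and month-boundary flags):

```python
from datetime import date

def date_features(d):
    # Illustrative helper, not fastai's implementation.
    return {
        'Year': d.year,
        'Month': d.month,
        'Day': d.day,
        'Dayofweek': d.weekday(),            # Monday == 0
        'Dayofyear': d.timetuple().tm_yday,
    }

feats = date_features(date(2011, 12, 31))    # a Saturday, day 365 of the year
```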
train_cats(df_raw)
Explanation: The categorical variables are currently stored as strings, which is inefficient, and doesn't provide the numeric coding required for a random forest. Therefore we call train_cats to convert strings to pandas categories.
End of explanation
df_raw.UsageBand.cat.categories
df_raw.UsageBand.cat.set_categories(['High', 'Medium', 'Low'], ordered=True, inplace=True)
df_raw.UsageBand = df_raw.UsageBand.cat.codes
Explanation: We can specify the order to use for categorical variables if we wish:
End of explanation
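Roughly what .cat.codes yields once the ordered categories are set, sketched with a plain dict (real pandas also uses -1 for missing values):

```python
order = ['High', 'Medium', 'Low']                  # the ordered categories set above
code_for = {cat: i for i, cat in enumerate(order)}

values = ['Low', 'High', 'Medium', 'Low']
encoded = [code_for[v] for v in values]            # [2, 0, 1, 2]
```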
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
Explanation: We're still not quite done - for instance we have lots of missing values, which we can't pass directly to a random forest.
End of explanation
os.makedirs('tmp', exist_ok=True)
df_raw.to_feather('tmp/bulldozers-raw')
Explanation: But let's save this file for now, since it's already in a format that can be stored and accessed efficiently.
End of explanation
df_raw = pd.read_feather('tmp/bulldozers-raw')
Explanation: Pre-processing
In the future we can simply read it from this fast format.
End of explanation
df, y, nas = proc_df(df_raw, 'SalePrice')
Explanation: We'll replace categories with their numeric codes, handle missing continuous values, and split the dependent variable into a separate variable.
End of explanation
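Conceptually, the missing-value handling amounts to median imputation plus a was-missing flag per column. A hand-rolled sketch (hypothetical helper, not fastai's implementation):

```python
def fix_missing(values):
    # Hypothetical helper illustrating the idea, not fastai's code.
    present = sorted(v for v in values if v is not None)
    med = present[len(present) // 2]              # crude median for the sketch
    filled = [med if v is None else v for v in values]
    was_missing = [v is None for v in values]
    return filled, was_missing, med

col = [3.0, None, 5.0, 1.0, None]                 # a continuous column with gaps
filled, flags, med = fix_missing(col)             # med == 3.0, gaps flagged
```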
m = RandomForestRegressor(n_jobs=-1)
m.fit(df, y)
m.score(df,y)
Explanation: We now have something we can pass to a random forest!
End of explanation
def split_vals(a,n): return a[:n].copy(), a[n:].copy()
n_valid = 12000 # same as Kaggle's test set size
n_trn = len(df)-n_valid
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape
Explanation: todo define r^2
Wow, an r^2 of 0.98 - that's great, right? Well, perhaps not...
Possibly the most important idea in machine learning is that of having separate training & validation data sets. As motivation, suppose you don't divide up your data, but instead use all of it. And suppose you have lots of parameters:
<img src="images/overfitting2.png" alt="" style="width: 70%"/>
<center>
Underfitting and Overfitting
</center>
The error for the pictured data points is lowest for the model on the far right (the blue curve passes through the red points almost perfectly), yet it's not the best choice. Why is that? If you were to gather some new data points, they most likely would not be on that curve in the graph on the right, but would be closer to the curve in the middle graph.
This illustrates how using all our data can lead to overfitting. A validation set helps diagnose this problem.
End of explanation
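Since the text flags "define r^2" as a todo, here is the definition that .score() reports, sketched in plain Python: $R^2 = 1 - SS_{res}/SS_{tot}$, so 1.0 means perfect predictions and 0.0 means no better than always predicting the mean.

```python
def r2_score(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
assert r2_score(y, y) == 1.0                     # perfect predictions
assert r2_score(y, [2.5, 2.5, 2.5, 2.5]) == 0.0  # just predicting the mean
```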
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
m = RandomForestRegressor(n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: Random Forests
Base model
Let's try our model again, this time with separate training and validation sets.
End of explanation
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice', subset=30000, na_dict=nas)
X_train, _ = split_vals(df_trn, 20000)
y_train, _ = split_vals(y_trn, 20000)
m = RandomForestRegressor(n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: An r^2 in the high-80's isn't bad at all (and the RMSLE puts us around rank 100 of 470 on the Kaggle leaderboard), but we can see from the validation set score that we're over-fitting badly. To understand this issue, let's simplify things down to a single small tree.
Speeding things up
End of explanation
m = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
draw_tree(m.estimators_[0], df_trn, precision=3)
Explanation: Single tree
End of explanation
m = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: Let's see what happens if we create a bigger tree.
End of explanation
m = RandomForestRegressor(n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: The training set result looks great! But the validation set is worse than our original model. This is why we need to use bagging of multiple trees to get more generalizable results.
Bagging
Intro to bagging
To learn about bagging in random forests, let's start with our basic model again.
End of explanation
preds = np.stack([t.predict(X_valid) for t in m.estimators_])
preds[:,0], np.mean(preds[:,0]), y_valid[0]
preds.shape
plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);
Explanation: We'll grab the predictions for each individual tree, and look at one example.
End of explanation
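The essence of bagging in one toy example (hand-picked numbers, not the bulldozer data): estimators that are individually wrong but err in different directions average out to a much better prediction.

```python
truth = 10.0
tree_preds = [8.0, 12.0, 9.0, 11.0]         # four "trees", each off by 1-2
bagged = sum(tree_preds) / len(tree_preds)  # the forest averages them

per_tree_err = [abs(p - truth) for p in tree_preds]
print(per_tree_err, abs(bagged - truth))    # individual errors vs. 0.0 for the average
```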
m = RandomForestRegressor(n_estimators=20, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=40, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=80, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: The shape of this curve suggests that adding more trees isn't going to help us much. Let's check. (Compare this to our original model on a sample)
End of explanation
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Out-of-bag (OOB) score
Is our validation set worse than our training set because we're over-fitting, or because the validation set is for a different time period, or a bit of both? With the existing information we've shown, we can't tell. However, random forests have a very clever trick called out-of-bag (OOB) error which can handle this (and more!)
The idea is to calculate error on the training set, but only include the trees in the calculation of a row's error where that row was not included in training that tree. This allows us to see whether the model is over-fitting, without needing a separate validation set.
This also has the benefit of allowing us to see whether our model generalizes, even if we only have a small amount of data so want to avoid separating some out to create a validation set.
This is as simple as adding one more parameter to our model constructor. We print the OOB error last in our print_score function below.
End of explanation
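A tiny deterministic illustration of the OOB bookkeeping (toy "trees" that just predict their bootstrap sample's mean, not real decision trees): a row's OOB prediction only averages trees whose bootstrap sample excluded that row.

```python
y = [1.0, 2.0, 3.0, 4.0]
bootstraps = [[0, 0, 1, 2],   # saw row 0 -> cannot vote on row 0's OOB error
              [1, 2, 3, 3],   # never saw row 0 -> counts toward row 0's estimate
              [1, 1, 2, 3]]   # never saw row 0

tree_means = [sum(y[i] for i in b) / len(b) for b in bootstraps]
oob_trees = [m for b, m in zip(bootstraps, tree_means) if 0 not in b]
oob_pred_row0 = sum(oob_trees) / len(oob_trees)   # averages the last two trees only
```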
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice')
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
Explanation: This shows that our validation set time difference is making an impact, as is model over-fitting.
Reducing over-fitting
Subsampling
It turns out that one of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis: subsampling. Let's return to using our full dataset, so that we can demonstrate the impact of this technique.
End of explanation
set_rf_samples(20000)
m = RandomForestRegressor(n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: The basic idea is this: rather than limit the total amount of data that our model can access, let's instead limit it to a different random subset per tree. That way, given enough trees, the model can still see all the data, but for each individual tree it'll be just as fast as if we had cut down our dataset as before.
End of explanation
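The per-tree subsampling idea can be sketched in plain Python (toy sizes; set_rf_samples arranges something like this inside sklearn's bootstrap machinery): each tree draws a fresh random subset, so collectively the forest still sees nearly all rows.

```python
import random

# Toy sizes; the real call above is set_rf_samples(20000) on ~400k rows.
random.seed(1)
n_rows, sample_size, n_trees = 1000, 100, 50

seen = set()
for _ in range(n_trees):
    # Every tree draws its own fresh random subset of the rows.
    seen.update(random.sample(range(n_rows), sample_size))

coverage = len(seen) / n_rows   # climbs toward 1.0 as trees are added
```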
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Since each additional tree allows the model to see more data, this approach can make additional trees more useful.
End of explanation
reset_rf_samples()
Explanation: Tree building parameters
We revert to using a full bootstrap sample in order to show the impact of other over-fitting avoidance methods.
End of explanation
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Let's get a baseline for this full set to compare to.
End of explanation
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with min_samples_leaf) that we require some minimum number of rows in every leaf node. This has two benefits:
There are fewer decision rules for each leaf node; simpler models should generalize better
The predictions are made by averaging more rows in the leaf node, resulting in less volatility
End of explanation
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of columns for each split. We do this by specifying max_features, which is the proportion of features to randomly select from at each split.
None
0.5
'sqrt'
1, 3, 5, 10, 25, 100
End of explanation |
9,658 | Given the following text description, write Python code to implement the functionality described.
Description:
"Given an array representing a branch of a tree that has non-negative integer nodes
your task is to pluck one of the nodes and return it.
The plucked node should be the node with the smallest even value.
If multiple nodes with the same smallest even value are found, return the node that has the smallest index.
The plucked node should be returned in a list, [ smallest_value, its index ],
If there are no even values or the given array is empty, return [].
Example 1:
Input: [4,2,3]
Output: [2, 1]
Explanation: 2 has the smallest even value, and 2 has the smallest index.
Example 2:
Input: [1,2,3]
Output: [2, 1]
Explanation: 2 has the smallest even value, and 2 has the smallest index.
Example 3:
Input: []
Output: []
Example 4:
Input: [5, 0, 3, 0, 4, 2]
Output: [0, 1]
Explanation: 0 is the smallest value, but there are two zeros,
so we will choose the first zero, which has the smallest index.
Constraints:
* 1 <= nodes.length <= 10000
* 0 <= node.value
| Python Code:
def pluck(arr):
if(len(arr) == 0): return []
evens = list(filter(lambda x: x%2 == 0, arr))
if(evens == []): return []
return [min(evens), arr.index(min(evens))] |
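Exercising the solution on the examples from the problem statement (the function is repeated here so the snippet is self-contained):

```python
def pluck(arr):
    if len(arr) == 0:
        return []
    evens = [x for x in arr if x % 2 == 0]
    if not evens:
        return []
    return [min(evens), arr.index(min(evens))]

assert pluck([4, 2, 3]) == [2, 1]            # Example 1
assert pluck([1, 2, 3]) == [2, 1]            # Example 2
assert pluck([]) == []                       # Example 3
assert pluck([5, 0, 3, 0, 4, 2]) == [0, 1]   # Example 4: first of the two zeros
assert pluck([7, 9, 13]) == []               # no even values at all
```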
9,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trace Analysis
This notebook performs some analysis on the trace files for the initial consensus experiments.
Step1: Trace Visualization
Step2: Experiment Analysis
Some details on the experimental design from the JSON files.
Step3: Tiered Trace Analysis
Visualizations of the trace access patterns for confirmation. | Python Code:
%matplotlib inline
import os
import re
import csv
import glob
import json
import numpy as np
import pandas as pd
import seaborn as sns
## Load Data
PROPRE = re.compile(r'^trace-(\d+)ms-(\d+)user.tsv$')
TRACES = os.path.join("..", "fixtures", "traces", "trace-*")
def load_trace_data(traces=TRACES, pattern=PROPRE):
for idx, name in enumerate(glob.glob(traces)):
path = os.path.abspath(name)
name = os.path.basename(name)
prop = dict(zip(('access mean (ms)', 'users'), map(int, pattern.match(name).groups())))
prop['id'] = idx + 1
with open(path, 'r') as f:
tstep = 0
header = ('time', 'replica', 'object', 'access')
reader = csv.DictReader(f, delimiter='\t', fieldnames=header)
for row in reader:
row.update(prop)
row['time'] = int(row['time'])
row['delay since last access (ms)'] = row['time'] - tstep
tstep = row['time']
yield row
traces = pd.DataFrame(load_trace_data())
Explanation: Trace Analysis
This notebook performs some analysis on the trace files for the initial consensus experiments.
End of explanation
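The loader's "delay since last access (ms)" field is just the difference between consecutive timestamps; a minimal sketch of that computation:

```python
def delays(timestamps):
    # Matches the loader: delay = current time - previous access time,
    # with the first access measured from t = 0.
    out, prev = [], 0
    for t in timestamps:
        out.append(t - prev)
        prev = t
    return out

print(delays([10, 25, 27, 100]))   # [10, 15, 2, 73]
```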
sns.factorplot(
data=traces, x="replica", y="delay since last access (ms)", hue="access",
row='access mean (ms)', col='users', kind='bar'
)
sns.factorplot(
data=traces, x="object", y="delay since last access (ms)", hue="replica",
row='access mean (ms)', col='users', kind='bar'
)
def count(x):
return sum(1 for i in x)
repobj = traces.groupby(['replica', 'object', 'access mean (ms)', 'users', 'access'])
repobj = repobj.agg({'delay since last access (ms)': np.mean, 'id': count, 'time': np.sum})
del repobj['time']
repobj = repobj.rename(columns = {'id':'count'})
sns.clustermap(repobj)
sns.factorplot(
data=traces, x="time", y="object",
row='access mean (ms)', col='users', kind='violin'
)
Explanation: Trace Visualization
End of explanation
## Load Data
TOPORE = re.compile(r'^(\w+)\-(\d+).json$')
TOPOLOGIES = os.path.join("..", "fixtures", "experiments", "*.json")
def load_experiment_data(topos=TOPOLOGIES, pattern=TOPORE):
for name in glob.glob(topos):
path = os.path.abspath(name)
name = os.path.basename(name)
prop = dict(zip(('type', 'id'), pattern.match(name).groups()))
prop['eid'] = int(prop['id'])
with open(path, 'r') as f:
data = json.load(f)
conns = {
link['source']: link['latency']
for link in data['links']
}
conns.update({
link['target']: link['latency']
for link in data['links']
})
for idx, node in enumerate(data['nodes']):
node.update(prop)
node['users'] = data['meta']['users']
latency = conns[idx]
electto = node.pop('election_timeout', [0,0])
del node['consistency']
node['eto_low'] = electto[0]
node['eto_high'] = electto[1]
node['eto_mean'] = sum(electto) / 2.0
node['latency_low'] = latency[0]
node['latency_high'] = latency[1]
node['latency_mean'] = sum(latency) / 2.0
yield node
experiments = pd.DataFrame(load_experiment_data())
experiments.describe()
sns.factorplot(x='eid', y='latency_mean', kind='bar', row='type', hue='users', data=experiments)
Explanation: Experiment Analysis
Some details on the experimental design from the JSON files.
End of explanation
data = pd.read_csv('../fixtures/traces/tiered.tsv', sep='\t', names=['time', 'replica', 'object', 'access'])
aggregation = {
'time': {
'from': 'min',
'to': 'max'
},
'access': {
'count'
}
}
replicas = data.groupby(['replica', 'object']).agg({'access': 'count'})
from collections import defaultdict
records = defaultdict(dict)
for replica, obj, count in replicas.to_records():
records[replica][obj] = count
records = pd.DataFrame(records)
# records = records.fillna(0)
sns.set_style('whitegrid')
sns.set_context('talk')
sns.heatmap(records, annot=True, fmt="0.0f", linewidths=.5, cmap="Reds")
series = defaultdict(list)
for idx, ts, rep, obj, acc in data.to_records():
series[obj].append((ts, acc))
durations = defaultdict(list)
for obj, times in series.items():
prev = 0
for time, acc in times:
durations[obj].append((time - prev, acc))
prev = time
data = []
for key, duration in durations.items():
for time, acc in duration:
data.append((key, time, acc))
df = pd.DataFrame(data)
df.columns = ['object', 'since', 'access']
sns.violinplot(x="object", y="since", hue="access", data=df[df.since < 5000], palette="muted", split=True, scale="count", inner="quartile")
Explanation: Tiered Trace Analysis
Visualizations of the trace access patterns for confirmation.
End of explanation |
9,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Restore the whole energysystem with results
Step1: Convert keys to strings and print all keys
Step2: Use the outputlib to collect all the flows into and out of the electricity bus
Collect all flows into and out of the electricity bus by using outputlib.views.node()
Step3: What we will be working with now is a pandas dataframe. Have a look at these links to learn about pandas, especially the last one (pandas in 10min)
Step4: Use pandas functionality to create a plot of all the columns of the dataframe
Step5: oemof visio provides a function that collects the column names for in and outflows as lists in a dictionary
Step6: This allows us to get the all the columns that are outflows
Step7: Plot only outflows
Step8: Use the functions of oemof_visio to create plots
See also | Python Code:
energysystem = solph.EnergySystem()
energysystem.restore(dpath=None, filename=None)
Explanation: Restore the whole energysystem with results
End of explanation
string_results = outputlib.views.convert_keys_to_strings(energysystem.results['main'])
print(string_results.keys())
Explanation: Convert keys to strings and print all keys
End of explanation
node_results_bel = outputlib.views.node(energysystem.results['main'], 'bel')
Explanation: Use the outputlib to collect all the flows into and out of the electricity bus
Collect all flows into and out of the electricity bus by using outputlib.views.node()
End of explanation
df = node_results_bel['sequences']
df.head(2)
Explanation: What we will be working with now is a pandas dataframe. Have a look at these links to learn about pandas, especially the last one (pandas in 10min):
https://pandas.pydata.org/
http://pandas.pydata.org/pandas-docs/stable/
http://pandas.pydata.org/pandas-docs/stable/10min.html
End of explanation
ax = df.plot(kind='line', drawstyle='steps-post')
ax.set_xlabel('Time [h]')
ax.set_ylabel('Energy [MWh]')
ax.set_title('Flows into and out of bel')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) # place legend outside of plot
plt.show()
Explanation: Use pandas functionality to create a plot of all the columns of the dataframe
End of explanation
in_out_dictionary = oev.plot.divide_bus_columns('bel', df.columns)
in_cols = in_out_dictionary['in_cols']
out_cols = in_out_dictionary['out_cols']
Explanation: oemof visio provides a function that collects the column names for in and outflows as lists in a dictionary
End of explanation
bel_to_demand_el = [(('bel', 'demand_el'), 'flow')] # this is a list with one element
df[bel_to_demand_el].head(2)
Explanation: This allows us to get the all the columns that are outflows:
We can get any column of the dataframe by providing its label as a list
End of explanation
ax = df[out_cols].plot(kind='line', drawstyle='steps-post')
ax.set_xlabel('Time [h]')
ax.set_ylabel('Energy [MWh]')
ax.set_title('Flows into or out of bel')
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) # place legend outside of plot
plt.show()
Explanation: Plot only outflows
End of explanation
inorder = [(('pp_chp', 'bel'), 'flow'),
(('pp_coal', 'bel'), 'flow'),
(('pp_gas', 'bel'), 'flow'),
(('pp_lig', 'bel'), 'flow'),
(('pp_oil', 'bel'), 'flow'),
(('pv', 'bel'), 'flow'),
(('wind', 'bel'), 'flow')]
outorder = [(('bel', 'demand_el'), 'flow'),
(('bel', 'excess_el'), 'flow'),
(('bel', 'heat_pump'), 'flow')]
cdict = {(('pp_chp', 'bel'), 'flow'): '#eeac7e',
(('pp_coal', 'bel'), 'flow'): '#0f2e2e',
(('pp_gas', 'bel'), 'flow'): '#c76c56',
(('pp_lig', 'bel'), 'flow'): '#56201d',
(('pp_oil', 'bel'), 'flow'): '#494a19',
(('pv', 'bel'), 'flow'): '#ffde32',
(('wind', 'bel'), 'flow'): '#4ca7c3',
(('bel', 'demand_el'), 'flow'): '#ce4aff',
(('bel', 'excess_el'), 'flow'): '#555555',
(('bel', 'heat_pump'), 'flow'): '#42c77a'}
fig = plt.figure(figsize=(13, 5))
my_plot = oev.plot.io_plot('bel', df,
inorder=inorder,
outorder=outorder,
cdict=cdict,
ax=fig.add_subplot(1, 1, 1),
smooth=False)
ax = my_plot['ax']
oev.plot.set_datetime_ticks(ax, df.index, tick_distance=32,
date_format='%d-%m-%H', offset=12)
my_plot['ax'].set_ylabel('Power in MW')
my_plot['ax'].set_xlabel('2012')
my_plot['ax'].set_title("Electricity bus")
legend = ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) # place legend outside of plot
# save figure
fig = ax.get_figure()
fig.savefig('myplot.png', bbox_inches='tight')
Explanation: Use the functions of oemof_visio to create plots
See also: oemof_examples/examples/oemof_0.2/plotting_examples/storage_investment_plot.py
Use color palette generators to generate a suitable color list, e.g.:
http://javier.xyz/cohesive-colors/
https://colourco.de/
http://seaborn.pydata.org/tutorial/color_palettes.html
End of explanation |
9,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Procedure
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Visualize | Python Code:
import numpy as np
A_det = np.matrix('10 0; -2 100') #A-matrix
B_det = np.matrix('1 10') #B-matrix
f = np.matrix('1000; 0') #Functional unit vector f
g_LCA = B_det * A_det.I * f
print("The deterministic result is:", g_LCA[0,0])
Explanation: Procedure: Global sensitivity analysis for matrix-based LCA
Method: Squared standardized regression coefficients (SSRC) & MCS: Monte Carlo simulation (normal random)
Author: Evelyne Groen {evelyne [dot] groen [at] gmail [dot] com}
Last update: 25/10/2016
End of explanation
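The deterministic result above is the standard matrix-based LCA calculation, which in formula form reads:

```latex
g = \mathbf{B}\,\mathbf{A}^{-1}\,\mathbf{f}
```

where $\mathbf{A}$ is the technology matrix, $\mathbf{B}$ the intervention matrix, and $\mathbf{f}$ the final demand (functional unit) vector.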
N = 1000 #Sample size
CV = 0.05 #Coefficient of variation (CV = sigma/mu)
import random
A1 = [random.gauss(A_det[0,0], CV*A_det[0,0]) for i in range(N)]
A3 = [random.gauss(A_det[1,0], CV*A_det[1,0]) for i in range(N)]
A4 = [random.gauss(A_det[1,1], CV*A_det[1,1]) for i in range(N)]
B1 = [random.gauss(B_det[0,0], CV*B_det[0,0]) for i in range(N)]
B2 = [random.gauss(B_det[0,1], CV*B_det[0,1]) for i in range(N)]
As = [np.matrix([[A1[i], 0],[A3[i], A4[i]]]) for i in range(N)]
Bs = [np.matrix([[B1[i], B2[i]]]) for i in range(N)]
f = np.matrix('1000; 0')
gs = [B * A.I * f for A, B in zip(As, Bs)]
g_list =[g[0,0] for g in gs]
import statistics as stats
var_g = stats.variance(g_list)
print("The output variance equals:", var_g)
Explanation: Step 1: Uncertainty propagation
Monte Carlo simulation using normal distribution functions for all input parameters
The mean values are equal to the initial values of A and B.
The standard deviation equals 5% of the mean of A and B.
End of explanation
#Reshape the data
g_list = np.reshape([g[0,0] for g in gs], (N,1))
As_list = np.reshape(As, (N,4))
Bs_list = np.reshape(Bs, (N,2))
Ps_list = np.concatenate((np.ones((N,1)), As_list[:,:1], As_list[:,2:], Bs_list), axis=1)
from numpy.linalg import inv
RC = np.dot( np.dot( inv( (np.dot(Ps_list.T, Ps_list)) ), Ps_list.T), g_list)
print("Regression coefficients:", RC)
Explanation: Step 2: Calculate the regression coefficients
End of explanation
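The regression coefficients computed above are the ordinary least-squares solution of the normal equations (with the column of ones in $\mathbf{P}$ providing the intercept term):

```latex
\hat{\boldsymbol{\beta}} \;=\; \left(\mathbf{P}^{\mathsf{T}}\mathbf{P}\right)^{-1}\mathbf{P}^{\mathsf{T}}\mathbf{g}
```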
import statistics as stats
var_g = stats.variance(g_list[:,0])
var_x = [stats.variance(Ps_list[:,k]) for k in range(1,6)]
SSRC = (var_x/var_g) * (RC[1:6,0]**2)
print("squared standardized regression coefficients:", SSRC)
Explanation: Step 3: calculate the squared standardized regression coefficients
End of explanation
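In formula form, the squared standardized regression coefficient of input $x_k$ computed above is:

```latex
\mathrm{SSRC}_k \;=\; \hat{\beta}_k^{\,2}\,\frac{\operatorname{Var}(x_k)}{\operatorname{Var}(g)}
```

For a near-linear model the SSRCs sum to approximately one, so each value can be read as the share of output variance explained by that input.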
import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
x_label=[ 'A(1,1)', 'A(2,1)', 'A(2,2)', 'B(1,1)', 'B(1,2)']
x_pos = range(5)
plt.bar(x_pos, SSRC_procent, align='center')
plt.xticks(x_pos, x_label)
plt.title('Global sensitivity analysis: squared standardized regression coefficients')
plt.ylabel('SSRC (%)')
plt.xlabel('Parameter')
plt.show()
Explanation: Visualize
End of explanation |
9,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step6: Text Classification Using a Convolutional Neural Network on MXNet
This tutorial is based on Yoon Kim's paper on using convolutional neural networks for sentence sentiment classification.
For this tutorial we will train a convolutional deep network model on Rotten Tomatoes movie review sentences labeled with their sentiment. The result will be a model that can classify a sentence based on its sentiment (with 1 being a purely positive sentiment, 0 being a purely negative sentiment and 0.5 being neutral).
Our first step will be to fetch the labeled training data of positive and negative sentiment sentences and process it into sets of vectors that are then randomly split into train and test sets.
Step7: Now that we prepared the training and test data by loading, vectorizing and shuffling it we can go on to defining the network architecture we want to train with the data.
We will first set up some placeholders for the input and output of the network then define the first layer, an embedding layer, which learns to map word vectors into a lower dimensional vector space where distances between words correspond to how related they are (with respect to sentiment they convey).
Step8: The next layer in the network performs convolutions over the ordered embedded word vectors in a sentence using multiple filter sizes, sliding over 3, 4 or 5 words at a time. This is the equivalent of looking at all 3-grams, 4-grams and 5-grams in a sentence and will allow us to understand how words contribute to sentiment in the context of those around them.
After each convolution we add a max-pool layer to extract the most significant elements in each convolution and turn them into a feature vector.
Because each convolution+pool filter produces tensors of different shapes we need to create a layer for each of them, and then concatenate the results of these layers into one big feature vector.
Step9: Next, we add dropout regularization, which will randomly disable a fraction of neruons in the layer (set to 50% here) to ensure that that model does not overfit. This works by preventing neurons from co-adapting and forcing them to learn individually useful features.
This is nessecary in our model becasuse the dataset has a vocabulary of size around 20k and only around 10k examples so since this data set is pretty small we’re likely to overfit with a powerful model (like this neural net).
Step10: Finally we add a fully connected layer to add non-linearity to the model. We then classify the resulting output of this layer using a softmax function, yeilding a result between 0 (negative sentimet) and 1 (positive).
Step11: Now that we have defined our CNN model we will define the device on our machine that we will train and execute this model on, as well as the datasets to train and test this model with.
If you are running this code, be sure that you have a GPU on your machine if your ctx is set to mx.gpu(0); otherwise, you can set your ctx to mx.cpu(0), which will run the training much more slowly.
Step12: We can now execute the training and testing of our network, which in-part mxnet automatically does for us with its forward and backwards propogation methods, along with its automatic gradient calculations. | Python Code:
import urllib2
import numpy as np
import re
import itertools
from collections import Counter
def clean_str(string):
"""Tokenization/string cleaning for all datasets except for SST.
Original taken from https://github.com/yoonkim/CNN_sentence/blob/master/process_data.py
"""
string = re.sub(r"[^A-Za-z0-9(),!?\'\`]", " ", string)
string = re.sub(r"\'s", " \'s", string)
string = re.sub(r"\'ve", " \'ve", string)
string = re.sub(r"n\'t", " n\'t", string)
string = re.sub(r"\'re", " \'re", string)
string = re.sub(r"\'d", " \'d", string)
string = re.sub(r"\'ll", " \'ll", string)
string = re.sub(r",", " , ", string)
string = re.sub(r"!", " ! ", string)
string = re.sub(r"\(", " \( ", string)
string = re.sub(r"\)", " \) ", string)
string = re.sub(r"\?", " \? ", string)
string = re.sub(r"\s{2,}", " ", string)
return string.strip().lower()
def load_data_and_labels():
"""Loads MR polarity data from files, splits the data into words and generates labels.
Returns split sentences and labels.
"""
# Pull scentences with positive sentiment
pos_file = urllib2.urlopen('https://raw.githubusercontent.com/yoonkim/CNN_sentence/master/rt-polarity.pos')
# Pull scentences with negative sentiment
neg_file = urllib2.urlopen('https://raw.githubusercontent.com/yoonkim/CNN_sentence/master/rt-polarity.neg')
# Load data from files
positive_examples = list(pos_file.readlines())
positive_examples = [s.strip() for s in positive_examples]
negative_examples = list(neg_file.readlines())
negative_examples = [s.strip() for s in negative_examples]
# Split by words
x_text = positive_examples + negative_examples
x_text = [clean_str(sent) for sent in x_text]
x_text = [s.split(" ") for s in x_text]
# Generate labels
positive_labels = [1 for _ in positive_examples]
negative_labels = [0 for _ in negative_examples]
y = np.concatenate([positive_labels, negative_labels], 0)
return [x_text, y]
def pad_sentences(sentences, padding_word="</s>"):
"""Pads all sentences to the same length. The length is defined by the longest sentence.
Returns padded sentences.
"""
sequence_length = max(len(x) for x in sentences)
padded_sentences = []
for i in range(len(sentences)):
sentence = sentences[i]
num_padding = sequence_length - len(sentence)
new_sentence = sentence + [padding_word] * num_padding
padded_sentences.append(new_sentence)
return padded_sentences
def build_vocab(sentences):
"""Builds a vocabulary mapping from word to index based on the sentences.
Returns vocabulary mapping and inverse vocabulary mapping.
"""
# Build vocabulary
word_counts = Counter(itertools.chain(*sentences))
# Mapping from index to word
vocabulary_inv = [x[0] for x in word_counts.most_common()]
# Mapping from word to index
vocabulary = {x: i for i, x in enumerate(vocabulary_inv)}
return [vocabulary, vocabulary_inv]
def build_input_data(sentences, labels, vocabulary):
"""Maps sentences and labels to vectors based on a vocabulary."""
x = np.array([[vocabulary[word] for word in sentence] for sentence in sentences])
y = np.array(labels)
return [x, y]
"""Loads and preprocesses data for the MR dataset.
Returns input vectors, labels, vocabulary, and inverse vocabulary.
"""
# Load and preprocess data
sentences, labels = load_data_and_labels()
sentences_padded = pad_sentences(sentences)
vocabulary, vocabulary_inv = build_vocab(sentences_padded)
x, y = build_input_data(sentences_padded, labels, vocabulary)
vocab_size = len(vocabulary)
# randomly shuffle data
np.random.seed(10)
shuffle_indices = np.random.permutation(np.arange(len(y)))
x_shuffled = x[shuffle_indices]
y_shuffled = y[shuffle_indices]
# split train/dev set
# there are a total of 10662 labled examples to train on
x_train, x_dev = x_shuffled[:-1000], x_shuffled[-1000:]
y_train, y_dev = y_shuffled[:-1000], y_shuffled[-1000:]
sentence_size = x_train.shape[1]
print 'Train/Dev split: %d/%d' % (len(y_train), len(y_dev))
print 'train shape:', x_train.shape
print 'dev shape:', x_dev.shape
print 'vocab_size', vocab_size
print 'sentence max words', sentence_size
Explanation: Text Classification Using a Convolutional Neural Network on MXNet
This tutorial is based on Yoon Kim's paper on using convolutional neural networks for sentence sentiment classification.
For this tutorial we will train a convolutional deep network model on Rotten Tomatoes movie review sentences labeled with their sentiment. The result will be a model that can classify a sentence based on its sentiment (with 1 being a purely positive sentiment, 0 being a purely negative sentiment and 0.5 being neutral).
Our first step will be to fetch the labeled training data of positive and negative sentiment sentences and process it into sets of vectors that are then randomly split into train and test sets.
End of explanation
import mxnet as mx
import sys,os
'''
Define batch size and the place holders for network inputs and outputs
'''
batch_size = 50 # the size of batches to train network with
print 'batch size', batch_size
input_x = mx.sym.Variable('data') # placeholder for input data
input_y = mx.sym.Variable('softmax_label') # placeholder for output label
'''
Define the first network layer (embedding)
'''
# create embedding layer to learn representation of words in a lower dimensional subspace (much like word2vec)
num_embed = 300 # dimensions to embed words into
print 'embedding dimensions', num_embed
embed_layer = mx.sym.Embedding(data=input_x, input_dim=vocab_size, output_dim=num_embed, name='vocab_embed')
# reshape embedded data for next layer
conv_input = mx.sym.Reshape(data=embed_layer, target_shape=(batch_size, 1, sentence_size, num_embed))
Explanation: Now that we prepared the training and test data by loading, vectorizing and shuffling it we can go on to defining the network architecture we want to train with the data.
We will first set up some placeholders for the input and output of the network then define the first layer, an embedding layer, which learns to map word vectors into a lower dimensional vector space where distances between words correspond to how related they are (with respect to sentiment they convey).
End of explanation
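Under the hood an embedding layer is just a trainable lookup table: each word index selects one row of a weight matrix. A minimal numpy sketch (the sizes here are made up for illustration):

```python
import numpy as np

vocab_size, num_embed = 5, 3
rng = np.random.RandomState(0)
weights = rng.randn(vocab_size, num_embed)  # learned during training

sentence = np.array([2, 0, 4])    # word indices for one sentence
embedded = weights[sentence]      # shape: (sentence_length, num_embed)
```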
# create convolution + (max) pooling layer for each filter operation
filter_list=[3, 4, 5] # the size of filters to use
print 'convolution filters', filter_list
num_filter=100
pooled_outputs = []
for i, filter_size in enumerate(filter_list):
convi = mx.sym.Convolution(data=conv_input, kernel=(filter_size, num_embed), num_filter=num_filter)
relui = mx.sym.Activation(data=convi, act_type='relu')
pooli = mx.sym.Pooling(data=relui, pool_type='max', kernel=(sentence_size - filter_size + 1, 1), stride=(1,1))
pooled_outputs.append(pooli)
# combine all pooled outputs
total_filters = num_filter * len(filter_list)
concat = mx.sym.Concat(*pooled_outputs, dim=1)
# reshape for next layer
h_pool = mx.sym.Reshape(data=concat, target_shape=(batch_size, total_filters))
Explanation: The next layer in the network performs convolutions over the ordered embedded word vectors in a sentence using multiple filter sizes, sliding over 3, 4 or 5 words at a time. This is the equivalent of looking at all 3-grams, 4-grams and 5-grams in a sentence and will allow us to understand how words contribute to sentiment in the context of those around them.
After each convolution we add a max-pool layer to extract the most significant elements in each convolution and turn them into a feature vector.
Because each convolution+pool filter produces tensors of different shapes we need to create a layer for each of them, and then concatenate the results of these layers into one big feature vector.
End of explanation
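A single filter's convolution + max-pool over the embedded sentence can be sketched in plain numpy (shapes assumed for illustration; the real layer also learns a bias and runs many filters in parallel):

```python
import numpy as np

T, d, k = 6, 4, 3                  # sentence length, embedding dim, filter size
rng = np.random.RandomState(1)
x = rng.randn(T, d)                # embedded sentence
W = rng.randn(k, d)                # one convolution filter

# Slide the filter over every k-gram, apply ReLU, then max-pool over time
conv = np.array([np.sum(x[t:t + k] * W) for t in range(T - k + 1)])
relu = np.maximum(conv, 0.0)
feature = relu.max()               # one scalar feature per filter
```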
# dropout layer
dropout=0.5
print 'dropout probability', dropout
if dropout > 0.0:
h_drop = mx.sym.Dropout(data=h_pool, p=dropout)
else:
h_drop = h_pool
Explanation: Next, we add dropout regularization, which will randomly disable a fraction of neurons in the layer (set to 50% here) to ensure that the model does not overfit. This works by preventing neurons from co-adapting and forcing them to learn individually useful features.
This is necessary in our model because the dataset has a vocabulary of size around 20k and only around 10k examples, so since this data set is pretty small we're likely to overfit with a powerful model (like this neural net).
End of explanation
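Conceptually, dropout multiplies the activations by a random binary mask at training time. A minimal numpy sketch of the inverted-dropout variant, which rescales the survivors so the expected activation is unchanged (the variant MXNet implements internally may differ):

```python
import numpy as np

def dropout(h, p, rng):
    # Keep each unit with probability (1 - p), rescale the survivors
    mask = (rng.rand(*h.shape) >= p).astype(h.dtype)
    return h * mask / (1.0 - p)

rng = np.random.RandomState(42)
h = np.ones((4, 5))
h_drop = dropout(h, 0.5, rng)      # roughly half the entries become zero
```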
# fully connected layer
num_label=2
cls_weight = mx.sym.Variable('cls_weight')
cls_bias = mx.sym.Variable('cls_bias')
fc = mx.sym.FullyConnected(data=h_drop, weight=cls_weight, bias=cls_bias, num_hidden=num_label)
# softmax output
sm = mx.sym.SoftmaxOutput(data=fc, label=input_y, name='softmax')
# set CNN pointer to the "back" of the network
cnn = sm
Explanation: Finally we add a fully connected layer to add non-linearity to the model. We then classify the resulting output of this layer using a softmax function, yielding a result between 0 (negative sentiment) and 1 (positive).
End of explanation
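The softmax at the output turns the two fully connected scores into class probabilities. A numerically stable numpy sketch:

```python
import numpy as np

def softmax(z):
    z = z - z.max()        # shift for numerical stability; result is unchanged
    e = np.exp(z)
    return e / e.sum()

probs = softmax(np.array([2.0, 0.5]))   # hypothetical [positive, negative] scores
```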
from collections import namedtuple
import time
import math
# Define the structure of our CNN Model (as a named tuple)
CNNModel = namedtuple("CNNModel", ['cnn_exec', 'symbol', 'data', 'label', 'param_blocks'])
# Define what device to train/test on
ctx=mx.gpu(0)
# If you have no GPU on your machine change this to
# ctx=mx.cpu(0)
arg_names = cnn.list_arguments()
input_shapes = {}
input_shapes['data'] = (batch_size, sentence_size)
arg_shape, out_shape, aux_shape = cnn.infer_shape(**input_shapes)
arg_arrays = [mx.nd.zeros(s, ctx) for s in arg_shape]
args_grad = {}
for shape, name in zip(arg_shape, arg_names):
if name in ['softmax_label', 'data']: # input, output
continue
args_grad[name] = mx.nd.zeros(shape, ctx)
cnn_exec = cnn.bind(ctx=ctx, args=arg_arrays, args_grad=args_grad, grad_req='add')
param_blocks = []
arg_dict = dict(zip(arg_names, cnn_exec.arg_arrays))
initializer=mx.initializer.Uniform(0.1)
for i, name in enumerate(arg_names):
if name in ['softmax_label', 'data']: # input, output
continue
initializer(name, arg_dict[name])
param_blocks.append( (i, arg_dict[name], args_grad[name], name) )
out_dict = dict(zip(cnn.list_outputs(), cnn_exec.outputs))
data = cnn_exec.arg_dict['data']
label = cnn_exec.arg_dict['softmax_label']
cnn_model= CNNModel(cnn_exec=cnn_exec, symbol=cnn, data=data, label=label, param_blocks=param_blocks)
Explanation: Now that we have defined our CNN model we will define the device on our machine that we will train and execute this model on, as well as the datasets to train and test this model with.
If you are running this code, be sure that you have a GPU on your machine if your ctx is set to mx.gpu(0); otherwise, you can set your ctx to mx.cpu(0), which will run the training much more slowly.
End of explanation
'''
Train the cnn_model using back prop
'''
optimizer='rmsprop'
max_grad_norm=5.0
learning_rate=0.0005
epoch=50
print 'optimizer', optimizer
print 'maximum gradient', max_grad_norm
print 'learning rate (step size)', learning_rate
print 'epochs to train for', epoch
# create optimizer
opt = mx.optimizer.create(optimizer)
opt.lr = learning_rate
updater = mx.optimizer.get_updater(opt)
# create logging output
logs = sys.stderr
# For each training epoch
for iteration in range(epoch):
tic = time.time()
num_correct = 0
num_total = 0
# Over each batch of training data
for begin in range(0, x_train.shape[0], batch_size):
batchX = x_train[begin:begin+batch_size]
batchY = y_train[begin:begin+batch_size]
if batchX.shape[0] != batch_size:
continue
cnn_model.data[:] = batchX
cnn_model.label[:] = batchY
# forward
cnn_model.cnn_exec.forward(is_train=True)
# backward
cnn_model.cnn_exec.backward()
# eval on training data
num_correct += sum(batchY == np.argmax(cnn_model.cnn_exec.outputs[0].asnumpy(), axis=1))
num_total += len(batchY)
# update weights
norm = 0
for idx, weight, grad, name in cnn_model.param_blocks:
grad /= batch_size
l2_norm = mx.nd.norm(grad).asscalar()
norm += l2_norm * l2_norm
norm = math.sqrt(norm)
for idx, weight, grad, name in cnn_model.param_blocks:
if norm > max_grad_norm:
grad *= (max_grad_norm / norm)
updater(idx, grad, weight)
# reset gradient to zero
grad[:] = 0.0
# Decay learning rate for this epoch to ensure we are not "overshooting" optima
if iteration % 50 == 0 and iteration > 0:
opt.lr *= 0.5
print >> logs, 'reset learning rate to %g' % opt.lr
# End of training loop for this epoch
toc = time.time()
train_time = toc - tic
train_acc = num_correct * 100 / float(num_total)
# Saving checkpoint to disk
if (iteration + 1) % 10 == 0:
prefix = 'cnn'
cnn_model.symbol.save('./%s-symbol.json' % prefix)
save_dict = {('arg:%s' % k) :v for k, v in cnn_model.cnn_exec.arg_dict.items()}
save_dict.update({('aux:%s' % k) : v for k, v in cnn_model.cnn_exec.aux_dict.items()})
param_name = './%s-%04d.params' % (prefix, iteration)
mx.nd.save(param_name, save_dict)
print >> logs, 'Saved checkpoint to %s' % param_name
# Evaluate model after this epoch on dev (test) set
num_correct = 0
num_total = 0
# For each test batch
for begin in range(0, x_dev.shape[0], batch_size):
batchX = x_dev[begin:begin+batch_size]
batchY = y_dev[begin:begin+batch_size]
if batchX.shape[0] != batch_size:
continue
cnn_model.data[:] = batchX
cnn_model.cnn_exec.forward(is_train=False)
num_correct += sum(batchY == np.argmax(cnn_model.cnn_exec.outputs[0].asnumpy(), axis=1))
num_total += len(batchY)
dev_acc = num_correct * 100 / float(num_total)
print >> logs, 'Iter [%d] Train: Time: %.3fs, Training Accuracy: %.3f \
--- Dev Accuracy thus far: %.3f' % (iteration, train_time, train_acc, dev_acc)
Explanation: We can now execute the training and testing of our network, much of which MXNet automatically does for us with its forward and backward propagation methods, along with its automatic gradient calculations.
End of explanation |
9,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bias on Wikipedia
Todd Schultz
Due
Step1: Import data of politicians by country
Import the data of politicians by country provided by Oliver Keyes and found at https
Step2: Import population by country
Import the population by country provided by PRB and found at http
Step3: Combined data
Combine the data frames into a single data frame with the following variables.
Column, country, article_name, revision_id, article_quality, population
Make a placeholder, empty variable for article_quality to be filled in in the next section using the Wikipedia ORES API for predicting article quality. Merging the data sets here also eliminates any entries for politicians whose country's population is unavailable and removes any countries that have no English Wikipedia articles about their politicians.
Step4: ORES article quality data
Retrieve the predicted article quality using the ORES service. ORES ("Objective Revision Evaluation Service") is a machine learning system trained on pre-graded Wikipedia articles for the purpose of predicting article quality. The service is found at https
Step5: Analysis
The data set is now processed to accumulate counts of the number of articles for each country and to consider the percentage of articles from each country that are predicted to be 'high-quality'. For the purpose of this analysis, high-quality articles are defined to be articles with a predicted ORES quality grade of either 'FA', a featured article, or 'GA', a good article. The total number of articles for each country is also normalized by the country's population.
Visualizations
Along with generating the numeric analysis results, four visualizations are created to help better understand the data. The four visualizations are plots of the numeric results for one of the processed parameters, number of articles for each country normalized by population, and the percentage of high-quality articles for each country, each for the top 10 and bottom 10 ranked countries. The results are then reviewed for any observed trends.
Step6: Create bar graphs for the top 10 and bottom 10 countries with respect to the number of politician articles normalized by population.
Step7: Create bar graphs for the top 10 and bottom 10 countries with respect to the percentage of high-quality articles.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests
import json
import copy
%matplotlib notebook
Explanation: Bias on Wikipedia
Todd Schultz
Due: November 2, 2017
Bias is an increasingly important topic with today's reliance on data and algorithms. Here, bias in political articles on the English Wikipedia will be investigated in terms of the number of articles about politicians for each country normalized by population and the percentage of the total number of articles about politicians that are considered high-quality as predicted by a machine learning model. The results can then be reviewed to observe any biases or trends present.
Imports
The Python libraries used in the analysis throughout this notebook are imported here.
End of explanation
politicianFile = 'PolbyCountry_data.csv'
politicianNames = pd.read_csv(politicianFile)
# rename variables
politicianNames.rename(columns = {'page':'article_name'}, inplace = True)
politicianNames.rename(columns = {'rev_id':'revision_id'}, inplace = True)
politicianNames[0:4]
politicianNames.shape
Explanation: Import data of politicians by country
Import the data of politicians by country provided by Oliver Keyes and found at https://figshare.com/articles/Untitled_Item/5513449. This data set contains the name of the country, the name of the politician as represented by the name of the English Wikipedia article about them, and the revision or article identification number in the English Wikipedia.
End of explanation
countryFile = 'Population Mid-2015.csv'
tempDF = pd.read_csv(countryFile, header=1)
# change population to a numeric value
a = np.zeros(tempDF.shape[0])
for idata in range(0,tempDF.shape[0]):
b = tempDF['Data'][idata]
a[idata] = float(b.replace(',', ''))
#countryPop = pd.DataFrame(data={'country': tempDF['Location'], 'population': tempDF['Data']})
countryPop = pd.DataFrame(data={'country': tempDF['Location'], 'population': a})
countryPop[0:5]
Explanation: Import population by country
Import the population by country provided by PRB and found at http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14. The data is from mid-2015 and includes the name of the country and the population estimate.
End of explanation
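The comma-stripping loop above can also be done in one vectorized step with pandas string methods; a small sketch on made-up values:

```python
import pandas as pd

raw = pd.Series(['1,234', '56,789,012', '7'])
population = raw.str.replace(',', '', regex=False).astype(float)
```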
# First add placeholder to politicianNames dataframe for article quality
politicianNames = politicianNames.assign(article_quality = "")
# Next, join politicianNames with countryPop
politicData = politicianNames.merge(countryPop,how = 'inner')
#politicianNames[0:5]
politicData[0:5]
politicData.shape
Explanation: Combined data
Combine the data frames into a single data frame with the following variables.
Column, country, article_name, revision_id, article_quality, population
Make a placeholder, empty variable for article_quality to be filled in in the next section using the Wikipedia ORES API for predicting article quality. Merging the data sets here also eliminates any entries for politicians whose country's population is unavailable and removes any countries that have no English Wikipedia articles about their politicians.
End of explanation
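An inner merge keeps only the rows whose key appears in both frames, which is what silently drops politicians from countries with no population data. A small sketch with hypothetical frames:

```python
import pandas as pd

articles = pd.DataFrame({'country': ['A', 'A', 'B', 'C'],
                         'article_name': ['p1', 'p2', 'p3', 'p4']})
pops = pd.DataFrame({'country': ['A', 'B'],
                     'population': [10.0, 20.0]})

merged = articles.merge(pops, how='inner')  # country 'C' disappears
```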
# ORES
# Construct API call
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/{revid}/{model}'
headers = {'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'}
# loop over all articles to retrieve predicted quality grades
for irevid in range(0, politicData.shape[0]):
revidstr = str(politicData['revision_id'][irevid])
#print(revidstr)
params = {'project' : 'enwiki',
'model' : 'wp10',
'revid' : revidstr
}
try:
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
#print(json.dumps(response, indent=4, sort_keys=True))
# Store article quality in the dataframe
politicData.loc[irevid,'article_quality'] = response['enwiki']['scores'][revidstr]['wp10']['score']['prediction']
except:
print('Error at ' + str(irevid))
if irevid % 500 == 0:
print(irevid)
# Write out csv file
politicData.to_csv('en-wikipedia_bias_2015.csv', index=False)
politicData[0:4]
# Drop the row without article quality scores
# politicData.drop(politicData.index[[14258,14259]])
#politicData['article_quality'][14258,14259]
print(politicData.shape)
politicData = politicData.loc[~(politicData['article_quality'] == '')]
print(politicData.shape)
# Read in csv file if needed
# The ORES calls to retrieve all the predicted article quality grades can be long, thus storing the
# results locally as a file can save time reloading if needed.
#politicData = pd.read_csv('en-wikipedia_bias_2015.csv')
#politicData[0:4]
Explanation: ORES article quality data
Retrieve the predicted article quality using the ORES service. ORES ("Objective Revision Evaluation Service") is a machine learning system trained on pre-graded Wikipedia articles for the purpose of predicting article quality. The service is found at https://www.mediawiki.org/wiki/ORES and documentation is found at https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model. The output of the API service is a prediction of the probability of the article quality being assigned to one of six different classes listed below from best to worst:
FA - Featured article
GA - Good article
B - B-class article
C - C-class article
Start - Start-class article
Stub - Stub-class article
The category with the highest probability is selected as the predicted quality grade.
End of explanation
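Picking the predicted grade amounts to an argmax over the per-class probabilities the service returns. A sketch on a hypothetical probability fragment (the values here are invented):

```python
# Hypothetical per-class probabilities from an ORES response
probabilities = {'FA': 0.02, 'GA': 0.05, 'B': 0.10,
                 'C': 0.18, 'Start': 0.45, 'Stub': 0.20}

prediction = max(probabilities, key=probabilities.get)  # highest-probability class
```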
# Create dataframe variables
# Find all unique countries with politician articles
uniquecountries = copy.deepcopy(politicData.country.unique())
# Initialize dataframe for the results
countryData = pd.DataFrame(data={'country': uniquecountries})
countryData = countryData.assign(**{'article_per_pop_percent': np.zeros(uniquecountries.shape[0])})
countryData = countryData.assign(**{'highqual_art_percent': np.zeros(uniquecountries.shape[0])})
countryData = copy.deepcopy(countryData)
print(countryData.shape)
countryData[0:4]
# Compute the processed results
# disable warning about sliced variable assignment in the dataframe, found on stackoverflow.com
pd.options.mode.chained_assignment = None # default='warn'
# Compute articles-per-population for each country, and percent high-quality articles for each country
for icountry in range(0,countryData.shape[0]):
loopcountry = countryData['country'][icountry]
looppop = countryPop['population'][countryPop['country'] == loopcountry]
# find articles for politicians from loopcountry
Idxarts = politicData['country'] == loopcountry
looparticles = copy.copy(politicData['article_quality'][Idxarts])
IdxGA = looparticles == 'GA'
IdxFA = looparticles == 'FA'
nHQarts = sum(IdxGA) + sum(IdxFA)
#countryData.loc[icountry,'article_per_pop_percent'] = 100*sum(Idxarts)/looppop
#countryData.loc[icountry,'highqual_art_percent'] = 100*nHQarts/sum(Idxarts)
countryData['article_per_pop_percent'][icountry] = 100*sum(Idxarts)/looppop
countryData['highqual_art_percent'][icountry] = 100*nHQarts/sum(Idxarts)
countryData[0:4]
Explanation: Analysis
The data set is now processed to accumulate counts of the number of articles for each country and to consider the percentage of articles from each country that are predicted to be 'high-quality'. For the purpose of this analysis, high-quality articles are defined to be articles with a predicted ORES quality grade of either 'FA', a featured article, or 'GA', a good article. The total number of articles for each country is also normalized by the country's population.
Visualizations
Along with generating the numeric analysis results, four visualizations are created to help better understand the data. The four visualizations plot the two processed parameters, the number of articles for each country normalized by population and the percentage of high-quality articles for each country, for the top 10 and bottom 10 ranked countries. The results are then reviewed for any observed trends.
End of explanation
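The two per-country metrics described above reduce to simple ratios; a minimal sketch with invented counts (the real computation lives in the loop over countryData earlier):

```python
# Illustrative only: toy numbers, not values from the dataset.
def articles_per_pop_percent(n_articles, population):
    return 100.0 * n_articles / population

def high_quality_percent(grades):
    # "High quality" means a predicted ORES grade of FA or GA.
    n_hq = sum(1 for g in grades if g in ('FA', 'GA'))
    return 100.0 * n_hq / len(grades)

print(articles_per_pop_percent(40, 200000))              # -> 0.02
print(high_quality_percent(['FA', 'Stub', 'GA', 'C']))   # -> 50.0
```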
# sort countryData by article_per_pop_percent
cdsorted = countryData.sort_values(by='article_per_pop_percent', ascending=0)
cdsorted[0:4]
# 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['article_per_pop_percent'][0:10])
plt.title('Top 10 Countries for Articles per Population')
plt.ylabel('Politician Articles per Population (%)')
plt.xticks(range(0,10), cdsorted['country'][0:10], rotation=90)
plt.ylim((0,0.5))
plt.tight_layout()
plt.savefig('Top10ArticlesperPopulation.jpg')
# 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['article_per_pop_percent'][-10:])
plt.title('Bottom 10 Countries for Articles per Population')
plt.ylabel('Politician Articles per Population (%)')
plt.xticks(range(0,10), cdsorted['country'][-10:], rotation=90)
plt.ylim((0,0.0005))
plt.tight_layout()
plt.savefig('Bottom10ArticlesperPopulation.jpg')
Explanation: Create bar graphs for the top 10 and bottom 10 countries with respect number of politician articles normalized by popoluations.
End of explanation
# sort countryData by article_per_pop_percent
cdsorted = countryData.sort_values(by='highqual_art_percent', ascending=0)
cdsorted[0:4]
# 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['highqual_art_percent'][0:10])
plt.title('Top 10 Countries for Percentage of High-quality Articles')
plt.ylabel('Percent of high-quality articles (%)')
plt.xticks(range(0,10), cdsorted['country'][0:10], rotation=90)
plt.ylim((0,15))
plt.tight_layout()
plt.savefig('Top10HQArticlespercent.jpg')
# 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
plt.figure(figsize=(6,5))
plt.bar(range(0,10), cdsorted['highqual_art_percent'][-10:])
plt.title('Bottom 10 Countries for Percentage of High-quality Articles')
plt.ylabel('Percent of high-quality articles (%)')
plt.xticks(range(0,10), cdsorted['country'][-10:], rotation=90)
plt.ylim((0,0.0005))
plt.tight_layout()
plt.savefig('Bottom10HQArticlespercent.jpg')
# Investigate bottom 10 for percentage of high-quality articles
cdsorted['highqual_art_percent'][-10:]
# Okay, they are all zero. So, let's find all the countries that have no high-quality articles.
InoHQ = countryData['highqual_art_percent']==0
print('Total number of countries without high-quality articles: ' + str(sum(InoHQ)))
countryData['country'][InoHQ]
Explanation: Create bar graphs for the top 10 and bottom 10 countries with respect percentage of high-quality articles.
End of explanation |
9,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
============================================
4D Neuroimaging/BTi phantom dataset tutorial
============================================
Here we read 4DBTi epochs data obtained with a spherical phantom
using four different dipole locations. For each condition we
compute evoked data and compute dipole fits.
Data are provided by Jean-Michel Badier from MEG center in Marseille, France.
Step1: Read data and compute a dipole fit at the peak of the evoked response
Step2: Compute localisation errors
Step3: Plot the dipoles in 3D | Python Code:
# Authors: Alex Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from mayavi import mlab
from mne.datasets import phantom_4dbti
import mne
Explanation: ============================================
4D Neuroimaging/BTi phantom dataset tutorial
============================================
Here we read 4DBTi epochs data obtained with a spherical phantom
using four different dipole locations. For each condition we
compute evoked data and compute dipole fits.
Data are provided by Jean-Michel Badier from MEG center in Marseille, France.
End of explanation
data_path = phantom_4dbti.data_path()
raw_fname = op.join(data_path, '%d/e,rfhp1.0Hz')
dipoles = list()
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.080)
t0 = 0.07 # peak of the response
pos = np.empty((4, 3))
for ii in range(4):
raw = mne.io.read_raw_bti(raw_fname % (ii + 1,),
rename_channels=False, preload=True)
raw.info['bads'] = ['A173', 'A213', 'A232']
events = mne.find_events(raw, 'TRIGGER', mask=4350, mask_type='not_and')
epochs = mne.Epochs(raw, events=events, event_id=8192, tmin=-0.2, tmax=0.4,
preload=True)
evoked = epochs.average()
evoked.plot(time_unit='s')
cov = mne.compute_covariance(epochs, tmax=0.)
dip = mne.fit_dipole(evoked.copy().crop(t0, t0), cov, sphere)[0]
pos[ii] = dip.pos[0]
Explanation: Read data and compute a dipole fit at the peak of the evoked response
End of explanation
actual_pos = 0.01 * np.array([[0.16, 1.61, 5.13],
[0.17, 1.35, 4.15],
[0.16, 1.05, 3.19],
[0.13, 0.80, 2.26]])
actual_pos = np.dot(actual_pos, [[0, 1, 0], [-1, 0, 0], [0, 0, 1]])
errors = 1e3 * np.linalg.norm(actual_pos - pos, axis=1)
print("errors (mm) : %s" % errors)
Explanation: Compute localisation errors
End of explanation
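The error metric used here is a per-dipole Euclidean distance scaled from metres to millimetres; a standalone toy example (positions invented, not phantom data):

```python
import numpy as np

# Hypothetical estimated vs. actual dipole positions, in metres.
estimated = np.array([[0.010, 0.000, 0.050]])
actual = np.array([[0.012, 0.000, 0.051]])

# Same formula as above: row-wise Euclidean norm, scaled to mm.
errors_mm = 1e3 * np.linalg.norm(actual - estimated, axis=1)
print(errors_mm)  # approximately [2.236]
```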
def plot_pos(pos, color=(0., 0., 0.)):
mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.005,
color=color)
mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces=[])
# Plot the position of the actual dipole
plot_pos(actual_pos, color=(1., 0., 0.))
# Plot the position of the estimated dipole
plot_pos(pos, color=(1., 1., 0.))
Explanation: Plot the dipoles in 3D
End of explanation |
9,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Methods
Data obtained from the Citizens Police Data Project.
This data includes only the FOIA dataset from 2011 to present (i.e. the Bond and Moore datasets have been removed).
This was accomplished by entering FOIA in the search bar.
The resulting table was saved to GitHub as a .xslx.
The Allegations, Complaining Witnesses, and Officer Profile tabs were then saved as allegations.csv, citizens.csv, and officers.csv respectively.
Disclaimer
The following disclaimer is included with the data by the Invisible Institute.
This dataset is compiled from three lists of allegations against Chicago Police Department officers,
spanning approximately 2002 - 2008 and 2010 - 2014, produced by the City of Chicago in response
to litigation and to FOIA requests.
The City of Chicago's production of this information is accompanied by a disclaimer that
not all information contained in the City's database may be correct.
No independent verification of the City's records has taken place and this dataset does not
purport to be an accurate reflection of either the City's database or its veracity.
Step1: What data do we have?
We can see the column names for the three tables below.
The Allegations table includes data on each allegation, including an ID for the complaint witness, the officer, and the outcome of the allegation.
The Citizens table includes additional information for each complaint witness.
The Officers table includes additional information for each officer.
Step2: For this analysis, we will be removing several columns for the following reasons
Step3: For ease of use, let's join our tables.
Step4: There are some allegations where no officer ID was provided. For this analysis, we will discard those allegations.
Step5: Now, let's encode our data numerically
Step6: For convenience, we'll build every possible categorical directive
Step7: Now, we can build intuitive masks as combinations of our human-readable directives
Step8: Let's generate a potentially interesting new feature from our existing data, and pull out all non-numeric data
Step9: We understand what data we have, and we have some tools to easily slice and dice. Let's dive in and learn something. | Python Code:
#Record arrays
allegations = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Allegations.csv',parse_datetimes=['IncidentDate','StartDate','EndDate'])
citizens = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Citizens.csv')
officers = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Officers.csv')
Explanation: Methods
Data obtained from the Citizens Police Data Project.
This data includes only the FOIA dataset from 2011 to present (i.e. the Bond and Moore datasets have been removed).
This was accomplished by entering FOIA in the search bar.
The resulting table was saved to GitHub as a .xslx.
The Allegations, Complaining Witnesses, and Officer Profile tabs were then saved as allegations.csv, citizens.csv, and officers.csv respectively.
Disclaimer
The following disclaimer is included with the data by the Invisible Institute.
This dataset is compiled from three lists of allegations against Chicago Police Department officers,
spanning approximately 2002 - 2008 and 2010 - 2014, produced by the City of Chicago in response
to litigation and to FOIA requests.
The City of Chicago's production of this information is accompanied by a disclaimer that
not all information contained in the City's database may be correct.
No independent verification of the City's records has taken place and this dataset does not
purport to be an accurate reflection of either the City's database or its veracity.
End of explanation
#I shouldn't have to nest function calls just to get a summary of my data. This needs to be a single call.
#Most of the data isn't numeric, so we should find a way to be more helpful than this.
#Also, what is the "None" printing at the end of this?
print display.pprint_sa(display.describe_cols(allegations))
print display.pprint_sa(display.describe_cols(citizens))
print display.pprint_sa(display.describe_cols(officers))
Explanation: What data do we have?
We can see the column names for the three tables below.
The Allegations table includes data on each allegation, including an ID for the complaint witness, the officer, and the outcome of the allegation.
The Citizens table includes additional information for each complaint witness.
The Officers table includes additional information for each officer.
End of explanation
import datetime
#TODO: there is a typo in the "OfficerFirst" column in allegations.
#Should pass this on to Kalven at Invisible Institute along with questions about data.
allegations = utils.remove_cols(allegations,['OfficeFirst','OfficerLast','Investigator','AllegationCode','RecommendedFinding','RecommendedOutcome','FinalFinding','FinalOutcome','Beat','Add1','Add2','City'])
officers = utils.remove_cols(officers,['OfficerFirst','OfficerLast','Star'])
#Convert appointment date days since 1900-1-1 to years prior to today
def tenure(vector):
today = datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%d')
started = np.add(np.datetime64('1900-01-01'),map(lambda x: np.timedelta64(int(x), 'D'),vector))
tenure = np.subtract(np.datetime64(today),started)
return np.divide(tenure,np.timedelta64(1,'D')) / 365
#Impute median date for missing values
officers['ApptDate'] = modify.replace_missing_vals(officers['ApptDate'], strategy='median')
tenure_years = modify.combine_cols(officers,tenure,['ApptDate'])  # values are in years (see tenure() above)
officers = utils.append_cols(officers,[tenure_years],['Tenure'])
Explanation: For this analysis, we will be removing several columns for the following reasons:
To anonymize our data, names of officers and investiagtors have been removed.
Many of the columns in Allegations are redundant as they code for other columns. We will preserve only the human readable columns.
The Beat column has no data, so it will be removed.
We will only focus on final outcomes, so the "recommended" columns have been removed from Allegations.
We will be limiting our geographic analysis to Location, so the address information has been removed.
We will also translate ApptDate, which specifies the number of days between the hire date and 1900-1-1, to the number of years working.
End of explanation
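The ApptDate translation amounts to: days since 1900-01-01, to a calendar date, to years elapsed. A self-contained standard-library sketch (the offset below is invented for illustration):

```python
import datetime

def days_since_1900_to_years(days, today=None):
    """Convert a days-since-1900-01-01 offset into years before `today`."""
    today = today or datetime.date.today()
    started = datetime.date(1900, 1, 1) + datetime.timedelta(days=int(days))
    return (today - started).days / 365.0

# 36890 days after 1900-01-01 is 2001-01-01, so roughly 14 years before 2015-01-01.
print(days_since_1900_to_years(36890, today=datetime.date(2015, 1, 1)))
```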
master = utils.join(allegations,citizens,'left',['CRID'],['CRID'])
#Rename Race and Gender, since citizens and officers have these columns
temp_col_names = list(master.dtype.names)
gender_index = temp_col_names.index("Gender")
race_index = temp_col_names.index("Race")
temp_col_names[gender_index] = "CitizenGender"
temp_col_names[race_index] = "CitizenRace"
master.dtype.names = tuple(temp_col_names)
master = utils.join(master,officers,'left',['OfficerID'],['OfficerID'])
temp_col_names = list(master.dtype.names)
gender_index = temp_col_names.index("Gender")
race_index = temp_col_names.index("Race")
temp_col_names[gender_index] = "OfficerGender"
temp_col_names[race_index] = "OfficerRace"
master.dtype.names = tuple(temp_col_names)
Explanation: For ease of use, let's join our tables.
End of explanation
#This is a pretty awkward way to remove nan, is there a better way I missed?
master = modify.choose_rows_where(master,[{'func': modify.row_val_between, 'col_name': 'OfficerID', 'vals': [-np.inf,np.inf]}])
Explanation: There are some allegations where no officer ID was provided. For this analysis, we will discard those allegations.
End of explanation
#Unit is interpreted as numeric, but we really want to analyze it categorically
#There should be an easier way to treat a numeric column as categorical data
master = utils.append_cols(master,master['Unit'].astype('|S10'),['UnitCat'])
master = utils.remove_cols(master,['Unit'])
master_data, master_classes = modify.label_encode(master)
Explanation: Now, let's encode our data numerically
End of explanation
#Directives
def cat_directives(array,classes):
cat_directives = {}
for column in classes:
cat_directives[column] = {v:[{'func': modify.row_val_eq, 'col_name': column, 'vals': i}] for i,v in enumerate(classes[column])}
return cat_directives
where = cat_directives(master_data,master_classes)
Explanation: For convenience, we'll build every possible categorical directive
End of explanation
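The directive pattern, a dict naming a predicate function, a column, and target values, can be illustrated standalone with a NumPy structured array; the helpers below are simplified stand-ins for the library's, not its actual implementation:

```python
import numpy as np

# Simplified stand-ins mirroring the {'func', 'col_name', 'vals'} directive shape.
def row_val_eq(M, col_name, val):
    return M[col_name] == val

def where_all_are_true(M, directives):
    # AND together one boolean mask per directive.
    mask = np.ones(M.shape[0], dtype=bool)
    for d in directives:
        mask &= d['func'](M, d['col_name'], d['vals'])
    return mask

M = np.array([(0, 1), (1, 1), (0, 0)],
             dtype=[('OfficerGender', 'i4'), ('CitizenGender', 'i4')])
both_zero = where_all_are_true(
    M, [{'func': row_val_eq, 'col_name': 'OfficerGender', 'vals': 0},
        {'func': row_val_eq, 'col_name': 'CitizenGender', 'vals': 0}])
print(both_zero)  # -> [False False  True]
```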
#Masks
#Gender
female_officers = modify.where_all_are_true(master_data,where['OfficerGender']['F'])
male_officers = modify.where_all_are_true(master_data,where['OfficerGender']['M'])
female_citizens = modify.where_all_are_true(master_data,where['CitizenGender']['F'])
male_citizens = modify.where_all_are_true(master_data,where['CitizenGender']['M'])
#Race
white_officers = modify.where_all_are_true(master_data,where['OfficerRace']['White'])
black_officers = modify.where_all_are_true(master_data,where['OfficerRace']['Black'])
hispanic_officers = modify.where_all_are_true(master_data,where['OfficerRace']['Hispanic'])
white_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['White'])
black_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['Black'])
hispanic_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['Hispanic'])
#Cross-sections
white_M_officers_black_F_citizens = modify.where_all_are_true(master_data,where['OfficerRace']['White']+
where['OfficerGender']['M']+
where['CitizenRace']['Black']+
where['CitizenGender']['F'])
Explanation: Now, we can build intuitive masks as combinations of our human-readable directives
End of explanation
duration = modify.combine_cols(master_data,np.subtract,['EndDate','StartDate'])
durationDays = duration / np.timedelta64(1, 'D')
duration_data = utils.append_cols(master_data,[durationDays],['InvestigationDuration'])
numeric_data = utils.remove_cols(master_data,['StartDate','EndDate','IncidentDate'])
Explanation: Let's generate a potentially interesting new feature from our existing data, and pull out all non-numeric data
End of explanation
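The duration feature is a datetime64 subtraction reduced to float days; a minimal standalone version (dates invented):

```python
import numpy as np

start = np.array(['2012-03-01', '2012-06-15'], dtype='datetime64[D]')
end = np.array(['2012-03-31', '2012-07-15'], dtype='datetime64[D]')

# Same idea as above: a timedelta64 divided by one day yields float days.
duration_days = (end - start) / np.timedelta64(1, 'D')
print(duration_days)  # -> [30. 30.]
```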
#Ex 1: What percentage of allegations have a black female citizen and a white male officer?
print np.sum(white_M_officers_black_F_citizens.astype(np.float))/np.size(white_M_officers_black_F_citizens.astype(np.float))
#Ex 2: What is the breakdown of officers with complaints by race?
#This seems a little clunky to me
#Would be nice if plot_simple_histogram could handle categorical labels for me
display.plot_simple_histogram(master_data['OfficerRace'],verbose=False)
display.plt.xticks(range(len(master_classes['OfficerRace'])), master_classes['OfficerRace'])
#Ex 3: What does the distribution of complaints look like?
complaint_counter = display.Counter(numeric_data['OfficerID'])
officer_list, complaint_counts = zip(*complaint_counter.items())
display.plot_simple_histogram(complaint_counts)
#Ex 4: What can we learn from the 100 officers who receive the most complaints?
#FYI: Wikipedia says 12,244 officers total, so this is roughly the top 1% of all Chicago officers.
#Obviously, all officers do not have the same quantity and quality of interactions with citizens.
#Need to account for this fact for any real analysis.
#Median imputation makes histogram look unnatural
#Top 100 Officers
top_100 = complaint_counter.most_common(100)
top_100_officers = map(lambda x: x[0],top_100)
#We should add this to modify.py for categorical data
def row_val_in(M,col_name,boundary):
return [x in boundary for x in M[col_name]]
top_100_profile = modify.choose_rows_where(officers,[{'func': row_val_in, 'col_name': 'OfficerID', 'vals': top_100_officers}])
#Can't check this against CPDB, their allegation counts are for the whole time period
#Not just 2011 - present.
display.plot_simple_histogram(master_data['Tenure'],verbose=False)
display.plot_simple_histogram(top_100_profile['Tenure'],verbose=False)
#Ex 5: What does the distribution of outcomes look like?
#Hastily written, possibly not useful. Just curious.
#Almost everything is unknown or no action taken
def sortedFrequencies(array,classes,col_name):
if col_name not in classes:
raise ValueError('col_name must be categorical')
counts = display.Counter(array[col_name])
total = float(sum(counts.values()))
for key in counts:
counts[key] /= total
count_dict = {}
for value in counts:
count_dict[classes[col_name][value]] = counts[value]
return sorted(count_dict.items(), key=lambda x: x[1],reverse=True)
print sortedFrequencies(numeric_data,master_classes,'Outcome')
#Ex 6: What has the number of complaints over time been like?
#Looks seasonal (peaking in summer), and declining over time (could the decline just be a collection issue?)
def numpy_to_month(dt64):
ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
dt = datetime.datetime.utcfromtimestamp(ts)
d = datetime.date(dt.year, dt.month, 1) #round to month
return d
months, counts = zip(*display.Counter(map(numpy_to_month,duration_data['IncidentDate'])).items())
display.plt.plot_date(months,counts)
#How does it look to split complaints by location?
#Very disproportionate. Locations 17,19,3,4 have almost all complaints.
display.plot_simple_histogram(numeric_data['Location'],verbose=False)
display.plt.xticks(range(len(master_classes['Location'])), master_classes['Location'])
#Unit?
#Still uneven, but more even than location.
display.plot_simple_histogram(numeric_data['UnitCat'],verbose=False)
display.plt.xticks(range(len(master_classes['UnitCat'])), master_classes['UnitCat'])
#Are there officers getting a lot of complaints not from the high yield locations?
#What does the social network of concomitant officers look like?
Explanation: We understand what data we have, and we have some tools to easily slice and dice. Let's dive in and learn something.
End of explanation |
9,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-0
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
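As a sketch of how these template cells are typically completed: the cell above sets the property id and then one or more values. The `_DocStub` class below is a hypothetical stand-in for the pyesdoc `DOC` helper (which the real notebook initializes elsewhere), added only so the call pattern is runnable on its own; the description string is illustrative.

```python
# Hypothetical stand-in for the pyesdoc DOC helper, so the pattern runs standalone.
class _DocStub:
    def __init__(self):
        self.values = {}          # property id -> list of entered values
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        self.values.setdefault(self._current_id, []).append(value)

DOC = _DocStub()

# Same call pattern as the template cell above.
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
DOC.set_value("Coupled AOGCM with interactive carbon cycle (description text).")
```

Free-text STRING properties with cardinality 1.1 take exactly one `set_value` call per `set_id`.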
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
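For ENUM properties like the coupler above, the value must be one of the listed choices. A minimal sketch of guarding against typos before handing the value to `DOC.set_value` (the `checked_value` helper is illustrative; the real `DOC` helper may enforce the choice list itself):

```python
# Valid choices copied from the template cell above.
COUPLER_CHOICES = {"OASIS", "OASIS3-MCT", "ESMF", "NUOPC",
                   "Bespoke", "Unknown", "None", "Other: [Please specify]"}

def checked_value(value, choices):
    # Raise early on a misspelled choice instead of storing an invalid entry.
    if value not in choices:
        raise ValueError(f"{value!r} is not a valid choice")
    return value

coupler = checked_value("OASIS3-MCT", COUPLER_CHOICES)
```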
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e., do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
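Cardinality 1.N means at least one value is required and several may be given; in these notebooks that is done by calling `DOC.set_value` once per value after a single `set_id`. A runnable sketch, again with a hypothetical stub standing in for the real `DOC` helper (the chosen provision codes are illustrative):

```python
# Hypothetical stand-in for the pyesdoc DOC helper.
class _DocStub:
    def __init__(self):
        self.values = {}          # property id -> list of entered values
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        self.values.setdefault(self._current_id, []).append(value)

DOC = _DocStub()
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("E")   # one call per value for 1.N cardinality
DOC.set_value("C")
```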
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is radiative forcing from aerosol-cloud interactions computed from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
Description:
Lambda Function and More
<font color='red'>Reference Documents</font>
<OL>
<LI> <A HREF="http
Step1: Lambda as macro
Step2: Map Function
map() is a function with two arguments
Step3: The first argument func is the name of a function and
the second a sequence (e.g. a list) seq. map() applies
the function func to all the elements of the sequence seq.
It returns a new list with the elements changed by func
Step4: <u>Problem 1</u>
Step5: Use list comprehension and lambda/map function to define <b>stuff</b>.
Filter Function
The function <B>filter(func, iterableType)</B> offers an elegant way to
filter out all the elements of any iterable type (list, tuple, string, etc.), for which the function func returns True.
Step6: <u>Problem 2</u>
Use the filter function to remove all the vowels from the sentence
Step7: Reduce Function
The function <B>reduce(func, seq)</B> continually applies the function func() to the sequence seq. It returns a single value.
Syntax
Step8: Examples
Step9: <u>Problem 3</u>
Use the reduce function to find the product of all the entries in the list
[47,11,42,102,13]
Step10: <font color='red'>Exception Handling</font>
Step11: <UL>
<LI> An exception is an error that happens during the execution of a program.
<LI> Exceptions are known to non-programmers as instances that do not conform to a general rule.
<LI> Exception handling is a construct to handle or deal with errors automatically.
<LI> The code, which harbours the risk of an exception, is embedded in a try block.
</UL>
Simple example
Step12: Some Exception Errors
<UL>
<LI> <B>IOError</B>
Step13: Clean-up Actions (try ... finally)
Step14: Raising Exceptions | Python Code:
lambda argument_list: expression
# The argument list consists of a comma separated list of arguments and
# the expression is an arithmetic expression using these arguments.
f = lambda x, y : x + y
f(2,1)
Explanation: Lambda Function and More
<font color='red'>Reference Documents</font>
<OL>
<LI> <A HREF="http://www.u.arizona.edu/~erdmann/mse350/topics/list_comprehensions.html">Map, Filter, Lambda, and List Comprehensions in Python</A>
</OL>
<font color='red'>Lambda, Filter, Reduce and Map</font>
<UL>
<LI> The lambda operator or lambda function is a way to create small anonymous functions.
<LI> These functions are throw-away functions, i.e. they are just needed where they have been created.
<LI> Lambda functions are mainly used in combination with the functions filter(), map() and reduce().
</UL>
Basic Syntax of a Lambda Function
End of explanation
line1 = "A cat, a dog "
line2 = " a bird, a mountain"
# Use X as an alias for two methods.
x = lambda s: s.strip().upper()
# Call the lambda to shorten the program's source.
line1b = x(line1)
line2b = x(line2)
print(line1b)
print(line2b)
Explanation: Lambda as macro
End of explanation
r = map(func, seq)
Explanation: Map Function
map() is a function with two arguments:
End of explanation
def fahrenheit(T):
return ((float(9)/5)*T + 32)
def celsius(T):
return (float(5)/9)*(T-32)
temp = (36.5, 37, 37.5,39)
F = map(fahrenheit, temp)
print F
C = map(celsius, F)
print C
# map() can be applied to more than one list.
# The lists have to have the same length.
a = [1,2,3,4]
b = [17,12,11,10]
c = [-1,-4,5,9]
map(lambda x,y:x+y, a,b)
map(lambda x,y,z:x+y+z, a,b,c)
map(lambda x,y,z:x+y-z, a,b,c)
Explanation: The first argument func is the name of a function and
the second a sequence (e.g. a list) seq. map() applies
the function func to all the elements of the sequence seq.
It returns a new list with the elements changed by func
End of explanation
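Note that this notebook targets Python 2, where `map()` eagerly returns a list. If you run the examples under Python 3 (an assumption on my part, not part of the original notebook), `map()` returns a lazy iterator instead, so wrap it in `list()` to get the same behaviour:

```python
temps = (36.5, 37, 37.5, 39)

# In Python 3, map() is lazy; list() materializes the results.
fahrenheit = list(map(lambda t: (9.0 / 5) * t + 32, temps))
print(fahrenheit)
```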
words = 'The quick brown fox jumps over the lazy dog'.split()
print words
stuff = []
for w in words:
stuff.append([w.upper(), w.lower(), len(w)])
for i in stuff:
print i
Explanation: <u>Problem 1</u>
End of explanation
fib = [0,1,1,2,3,5,8,13,21,34,55]
result = filter(lambda x: x % 2, fib)
print result
result = filter(lambda x: x % 2 == 0, fib)
print result
Explanation: Use list comprehension and lambda/map function to define <b>stuff</b>.
Filter Function
The function <B>filter(func, iterableType)</B> offers an elegant way to
filter out all the elements of any iterable type (list, tuple, string, etc.), for which the function func returns True.
End of explanation
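One possible answer to Problem 1 (a sketch, not the only valid one): the explicit loop above collapses either into a lambda/map combination or into a list comprehension, both producing the same `stuff` list. The `list()` wrapper keeps it Python-3 compatible.

```python
words = 'The quick brown fox jumps over the lazy dog'.split()

# lambda + map version
stuff_map = list(map(lambda w: [w.upper(), w.lower(), len(w)], words))

# equivalent list comprehension
stuff_lc = [[w.upper(), w.lower(), len(w)] for w in words]

print(stuff_map == stuff_lc)
```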
sentence = "It's a myth that there are no words in English without vowels."
vowels = 'aeiou'
Explanation: <u>Problem 2</u>
Use the filter function to remove all the vowels from the sentence
End of explanation
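A possible solution to Problem 2 (one of several): `filter()` keeps the characters for which the predicate returns True, and `''.join()` glues the survivors back into a string.

```python
sentence = "It's a myth that there are no words in English without vowels."
vowels = 'aeiou'

# keep only characters that are NOT vowels (case-insensitive)
no_vowels = ''.join(filter(lambda ch: ch.lower() not in vowels, sentence))
print(no_vowels)
```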
def reduce( aFunction, aSequence, init= 0 ):
r= init
for s in aSequence:
r= aFunction( r, s )
return r
Explanation: Reduce Function
The function <B>reduce(func, seq)</B> continually applies the function func() to the sequence seq. It returns a single value.
Syntax
End of explanation
A = reduce(lambda x,y: x+y, [47,11,42,13])
print A
# Determining the maximum of a list of numerical values by using reduce
f = lambda a,b: a if (a > b) else b
B = reduce(f, [47,11,42,102,13])
print B
# Calculating the sum of the numbers from 1 to n:
n = 300
C = reduce(lambda x, y: x+y, range(1,n+1))
print C
def x100y(x,y):
return 100*x+y
reduce(x100y, [13])
reduce(x100y, [2, 5, 9])
reduce(x100y, [2, 5, 9], 7)
Explanation: Examples
End of explanation
print reduce(lambda x,y: x*y, [47,11,42,102,13])
# note that you can improve the speed of the calculation using built-in functions
# or better still: using the numpy module
from operator import mul
import numpy as np
a = range(1, 101)
print "reduce(lambda x, y: x * y, a)"
%timeit reduce(lambda x, y: x * y, a) # (1)
print "reduce(mul, a)"
%timeit reduce(mul, a) # (2)
print "np.prod(a)"
a = np.array(a)
%timeit np.prod(a) # (3)
Explanation: <u>Problem 3</u>
Use the reduce function to find the product of all the entries in the list
[47,11,42,102,13]
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("hrR0WrQMhSs")
Explanation: <font color='red'>Exception Handling</font>
End of explanation
def enter_number0():
n = int(raw_input("Please enter a number: "))
enter_number0()
def enter_number1():
while True:
try:
n = raw_input("Please enter an integer: ")
n = int(n)
break
except ValueError:
print "No valid integer! Please try again ..."
print "Great, you successfully entered an integer!"
enter_number1()
Explanation: <UL>
<LI> An exception is an error that happens during the execution of a program.
<LI> Exceptions are known to non-programmers as instances that do not conform to a general rule.
<LI> Exception handling is a construct to handle or deal with errors automatically.
<LI> The code, which harbours the risk of an exception, is embedded in a try block.
</UL>
Simple example
End of explanation
def inverse_number0():
try:
x = float(raw_input("Your number: "))
inverse = 1.0 / x
except ValueError:
print "You should have given either an int or a float"
except ZeroDivisionError:
print "Infinity"
else:
print "OK"
inverse_number0()
# import module sys to get the type of exception
import sys
def inverse_number1():
while True:
try:
x = int(raw_input("Enter an integer: "))
r = 1/x
break
except:
print "Oops!",sys.exc_info()[0],"occured."
print "Please try again."
print
print "The reciprocal of",x,"is",r
inverse_number1()
Explanation: Some Exception Errors
<UL>
<LI> <B>IOError</B>: The file cannot be opened
<LI> <B>ImportError</B>: Python cannot find the module
<LI> <B>ValueError</B>: A built-in operation or function receives an argument that has the
right type but an inappropriate value.
<LI> <B>KeyboardInterrupt</B>: The user hits the interrupt key (normally Control-C or Delete)
<LI> <B>EOFError</B>: One of the built-in functions (input() or raw_input()) hits an
end-of-file condition (EOF) without reading any data.
<LI> <B> OverflowError, ZeroDivisionError, FloatingPointError</B>:
</UL>
An exhaustive list of built-in exceptions can be found here:
https://docs.python.org/2/library/exceptions.html
else...
End of explanation
def inverse_number2():
try:
x = float(raw_input("Your number: "))
inverse = 1.0 / x
finally:
print "There may or may not have been an exception."
print "The inverse: ", inverse
inverse_number2()
def inverse_number3():
try:
x = float(raw_input("Your number: "))
inverse = 1.0 / x
except ValueError:
print "You should have given either an int or a float"
except ZeroDivisionError:
print "Infinity"
finally:
print "There may or may not have been an exception."
inverse_number3()
Explanation: Clean-up Actions (try ... finally)
End of explanation
def achilles_arrow(x):
if abs(x - 1) < 1e-3:
raise StopIteration
x = 1 - (1-x)/2.
return x
x=0.0
while True:
try:
x = achilles_arrow(x)
except StopIteration:
break
print "x = ", x
Explanation: Raising Exceptions
A raise statement triggers an exception deliberately; here StopIteration is used to break out of the loop once Achilles' arrow gets close enough to 1.
End of explanation
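Besides raising built-ins like `StopIteration`, you can raise exceptions yourself with an explanatory message and catch them as usual. A small sketch (the function name `check_positive` is mine, not from the notebook):

```python
def check_positive(x):
    # reject non-positive input with an explicit error message
    if x <= 0:
        raise ValueError("expected a positive number, got %r" % x)
    return x

try:
    check_positive(-1)
except ValueError as err:
    message = str(err)
    print(message)
```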
Description:
Group sizes
Get all unique size labels from the database.
Step1: Sizes per distributor
Step2: Print joint table with first 60 sizes.
Step3: Calculate entropy
Step4: Create new collection from data only with '_id', 'source' and 'size' fields
Step5: Sizes list per distributor
Step8: Tagging according to size
Since the number of sizes is low (1117 uniq sizes), the task could be resolved using tivial brute force, i.e. map sizes using mapping table.
During the observation of data i noticed that sizes are defined for adult, youth, toddler and baby
Step9: Let's calculate data entropy for results | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import entropy
from tabulate import tabulate
from pymongo import MongoClient
import matplotlib.pyplot as plt
plt.style.use('seaborn')
plt.rcParams["figure.figsize"] = (20,8)
db = MongoClient()['stores']
TOTAL_NUMBER_OF_PRODUCTS = db.data.count()
results = db.data.aggregate(
[
{
"$group": {
"_id": "$size",
"count": {"$sum": 1},
}
},
{
"$sort": {
"count": -1,
}
}
]
)
ALL_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print('Number of uniq. sizes: {}'.format(len(ALL_SIZES)))
Explanation: Group sizes
Get all unique size labels from the database.
End of explanation
DISTRIBUTORS = list(db.data.distinct("source"))
results = db.data.aggregate(
[
{
"$group": {
"_id": "$source",
"sizes": {"$addToSet": "$size"},
}
},
{
"$project": {
"_id": 1,
"count": {"$size": "$sizes"}
}
},
{
"$sort": {
"count": -1,
}
}
]
)
SIZES_PER_DISTRIBUTOR = [
(str(x['_id']), x['count'])
for x in list(results)
]
print(tabulate(SIZES_PER_DISTRIBUTOR,
headers=['Distributor', 'Number of uniq. Sizes'],
tablefmt="simple"))
df_values_by_key = pd.DataFrame(SIZES_PER_DISTRIBUTOR,
index=[x[0] for x in SIZES_PER_DISTRIBUTOR],
columns=['Distributor', 'Sizes'])
df_values_by_key.iloc[::-1].plot.barh()
Explanation: Sizes per distributor
End of explanation
import operator
all_sizes_table = []
number_of_sizes = 180
for sizes in zip(ALL_SIZES[0:number_of_sizes:3],
ALL_SIZES[1:number_of_sizes:3],
ALL_SIZES[2:number_of_sizes:3]):
all_sizes_table.append(list(reduce(operator.add, sizes)))
print(
tabulate(
all_sizes_table[:60],
headers=3*['Size', 'Number of Products'],
tablefmt="simple"))
Explanation: Print joint table with first 60 sizes.
End of explanation
# calculate probability vector
p = [x[1] for x in ALL_SIZES]
size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS
# calculate entropy
first_entropy = entropy(size_prob_vector)
print("Data entropy:", first_entropy)
Explanation: Calculate entropy
End of explanation
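As a sanity check, Shannon entropy can be computed by hand from the raw counts; `scipy.stats.entropy` with its default natural-log base should agree. A pure-Python sketch, independent of the database:

```python
import math

def shannon_entropy(counts):
    # normalize counts to probabilities, then H = -sum(p * ln p)
    total = float(sum(counts))
    probs = [c / total for c in counts]
    return -sum(p * math.log(p) for p in probs if p > 0)

# two equally likely outcomes -> ln(2) nats
print(shannon_entropy([10, 10]))
```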
# create new collection
db.data.aggregate(
[
{
"$project": {
"_id": 1,
"source": 1,
"size": 1,
},
},
{
"$out": "size_mapping"
}
]
)
print('Db "size_mapping" created')
# create indexes
db.size_mapping.create_index([("size", 1)])
db.size_mapping.create_index([("source", 1)])
print('Indexes "size", "source" for "size_mapping" created.')
print(list(db.size_mapping.find().limit(5)))
Explanation: Create new collection from data only with '_id', 'source' and 'size' fields
End of explanation
SIZES_LIST_PER_DISTRIBUTOR = db.size_mapping.aggregate(
[
{
"$group": {
"_id": "$source",
"sizes": {"$addToSet": "$size"},
},
},
{
"$project": {
"_id": 1,
"sizes": 1,
"number_of_sizes": {"$size": "$sizes"},
}
},
{
"$sort": {
"number_of_sizes": -1
}
}
]
)
TABLE_SIZES_LIST_PER_DISTRIBUTOR = [
(str(x['_id']), x['sizes'], x['number_of_sizes'])
for x in SIZES_LIST_PER_DISTRIBUTOR
]
for distr, sizes, num in TABLE_SIZES_LIST_PER_DISTRIBUTOR:
print('Sizes for: "{}"'.format(distr))
print(", ".join(sizes))
print(80*"-")
Explanation: Sizes list per distributor
End of explanation
SIZES_MAPPING = {
'ALL': [],
'NO SIZE': ['PLAIN', 'CONE', 'BLANKET'],
'ONE': ['OS', 'ONE SIZE', '1 SIZ', 'O/S'],
'XS': ['XXS', 'XX-SMALL', '2XS'],
'S': ['SMALL', 'S/M'],
'M': ['MEDIUM', 'S/M', 'M/L'],
'L': ['LARGE', 'L/XL', 'M/L'],
'XL': ['EXTRA', 'XLT', 'XT', 'L/XL'],
'2XL': ['2X', 'XXL', '2XT', '2XLL', '2X/', '2XLT'],
'3XL': ['3X', '3XT', '3XLL', '3XLT'],
'4XL': ['4X', '4XT', '4XLT'],
'5XL': ['5X', '5XT', '5XLT'],
'6XL': ['6X'],
}
def build_matching_table(matching_rules):
    """Build matching table from matching rules

    :param matching_rules: matching rules used to build matching table
    :type matching_rules: dict
    :return: matching table `{'S/M': ['S', 'M'], '2X': ['2XL'], ...}`
    :rtype: dict
    """
matching_table = {}
# transform matching rules to the "shortcut": "group_key" table
for key, values in matching_rules.items():
if not values: # skip undefined rules i.e. "[]"
continue
# add rule for key
if key not in matching_table:
# NOTE: set('ab') would be {'a', 'b'}
# so it's impossible to matching_table[key] = set(key)
matching_table[key] = set()
matching_table[key].add(key)
for value in values:
if value not in matching_table:
matching_table[value] = set()
matching_table[value].add(key)
else:
matching_table[value].add(key)
return matching_table
MATCHING_TABLE = build_matching_table(SIZES_MAPPING)
print(tabulate(MATCHING_TABLE.items(), headers=['From', 'To'], tablefmt="simple"))
# process data into the new table
# def get_groups(mtable, size):
# Get size groups for the given `size` according to matching table
# :param size: size (case insensetive)
# :type size: str
# :return: list of strings i.e. size groups or ``['UNDEFINED']``
# if not found
# :rtype: list or ['UNDEFINED']
#
# return list(mtable.get(size, default=size))
# for k, v in MATCHING_TABLE.items():
# res = db.size_mapping.update_many(
# {"size": k},
# {"$set": {"size": get_groups(MATCHING_TABLE, k)}})
# print(res.raw_result)
Explanation: Tagging according to size
Since the number of sizes is low (1117 uniq sizes), the task can be solved with trivial brute force, i.e. by mapping sizes through a mapping table.
During the observation of the data I noticed that sizes are defined for adult, youth, toddler and baby:
Adult: 'S', 'M', 'L' etc.
Youth: 'YS', 'YL' etc.
Kid: '4', '6' etc.
Toddler: '2T', '3T' etc.
Baby: '3M', '6M', 'NB' (new born) etc.
kid, toddler, baby sizes chart
youth sizes chart
I.e. we could tag products according to their size.
python
TAG_FROM_SIZE = {
'adult': ['XS', 'S', 'M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL'],
'youth': ['YXS', 'YSM', 'YMD', 'YLG', 'YXL', '8H', '10H', '12H', '14H', '16H', '18H', '20H'],
'kid': []
}
End of explanation
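To see the inversion that `build_matching_table` performs, here is a standalone re-statement of its core (compacted with `dict.setdefault`) plus a tiny check — an illustration of the idea, not the notebook's exact code:

```python
def build_matching_table(matching_rules):
    # invert {group: [shortcuts]} into {shortcut: {groups}}
    table = {}
    for key, values in matching_rules.items():
        if not values:  # skip undefined rules, i.e. []
            continue
        table.setdefault(key, set()).add(key)
        for value in values:
            table.setdefault(value, set()).add(key)
    return table

rules = {'S': ['SMALL', 'S/M'], 'M': ['MEDIUM', 'S/M']}
table = build_matching_table(rules)
print(table['S/M'])  # an ambiguous label maps to both groups
```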
results = db.size_mapping.aggregate(
[
{
"$group": {
"_id": "$size",
"count": {"$sum": 1},
}
},
{
"$sort": {
"count": -1,
}
}
]
)
NEW_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print(
"\n" +
tabulate(NEW_SIZES[:20], headers=['Size', 'Number of Products'], tablefmt="orgtbl") +
"\n"
)
# calculate probability vector
p = []
for _, count in NEW_SIZES:
p.append(count)
size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS
# calculate entropy
first_entropy = entropy(size_prob_vector)
print("Data entropy: ", first_entropy)
from functools import reduce
total_matched_products = (sum([x[1] for x in NEW_SIZES[:11]]))
percent_from_db_total = round((total_matched_products / TOTAL_NUMBER_OF_PRODUCTS) * 100, 2)
print("Matched: {} Percent from total: {}".format(total_matched_products, percent_from_db_total))
Explanation: Let's calculate data entropy for results
End of explanation |
Description:
Step2: JoinNode, synchronize and itersource
JoinNode has the opposite effect of iterables. Where iterables split up the execution workflow into many different branches, a JoinNode merges them back into on node. A JoinNode generalizes MapNode to operate in conjunction with an upstream iterable node to reassemble downstream results, e.g.
Step3: Now, let's look at the input and output of the joinnode
Step4: Extending to multiple nodes
We extend the workflow by using three nodes. Note that even this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before the graph below shows how the execution process is set up.
Step5: Exercise 1
You have list of DOB of the subjects in a few various format | Python Code:
from nipype import JoinNode, Node, Workflow
from nipype.interfaces.utility import Function, IdentityInterface
def get_data_from_id(id):
Generate a random number based on id
import numpy as np
return id + np.random.rand()
def merge_and_scale_data(data2):
Scale the input list by 1000
import numpy as np
return (np.array(data2) * 1000).tolist()
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = JoinNode(Function(input_names=['data2'],
output_names=['data_scaled'],
function=merge_and_scale_data),
name='scale_data',
joinsource=node1,
joinfield=['data2'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
eg = wf.run()
wf.write_graph(graph2use='exec')
from IPython.display import Image
Image(filename='graph_detailed.png')
Explanation: JoinNode, synchronize and itersource
JoinNode has the opposite effect of iterables. Where iterables split up the execution workflow into many different branches, a JoinNode merges them back into on node. A JoinNode generalizes MapNode to operate in conjunction with an upstream iterable node to reassemble downstream results, e.g.:
<img src="../static/images/joinnode.png" width="240">
Simple example
Let's consider the very simple example depicted at the top of this page:
```python
from nipype import Node, JoinNode, Workflow
Specify fake input node A
a = Node(interface=A(), name="a")
Iterate over fake node B's input 'in_file?
b = Node(interface=B(), name="b")
b.iterables = ('in_file', [file1, file2])
Pass results on to fake node C
c = Node(interface=C(), name="c")
Join forked execution workflow in fake node D
d = JoinNode(interface=D(),
joinsource="b",
joinfield="in_files",
name="d")
Put everything into a workflow as usual
workflow = Workflow(name="workflow")
workflow.connect([(a, b, [('subject', 'subject')]),
(b, c, [('out_file', 'in_file')])
(c, d, [('out_file', 'in_files')])
])
```
As you can see, setting up a JoinNode is rather simple. The only difference to a normal Node is the joinsource and the joinfield. joinsource specifies from which node the information to join is coming and the joinfield specifies the input field of the JoinNode where the information to join will be entering the node.
This example assumes that interface A has one output subject, interface B has two inputs subject and in_file and one output out_file, interface C has one input in_file and one output out_file, and interface D has one list input in_files. The images variable is a list of three input image file names.
As with iterables and the MapNode iterfield, the joinfield can be a list of fields. Thus, the declaration in the previous example is equivalent to the following:
python
d = JoinNode(interface=D(),
joinsource="b",
joinfield=["in_files"],
name="d")
The joinfield defaults to all of the JoinNode input fields, so the declaration is also equivalent to the following:
python
d = JoinNode(interface=D(),
joinsource="b",
name="d")
In this example, the node C out_file outputs are collected into the JoinNode D in_files input list. The in_files order is the same as the upstream B node iterables order.
The JoinNode input can be filtered for unique values by specifying the unique flag, e.g.:
python
d = JoinNode(interface=D(),
joinsource="b",
unique=True,
name="d")
synchronize
The Node iterables parameter can be be a single field or a list of fields. If it is a list, then execution is performed over all permutations of the list items. For example:
python
b.iterables = [("m", [1, 2]), ("n", [3, 4])]
results in the execution graph:
<img src="../static/images/synchronize_1.png" width="325">
where B13 has inputs m = 1, n = 3, B14 has inputs m = 1, n = 4, etc.
The synchronize parameter synchronizes the iterables lists, e.g.:
python
b.iterables = [("m", [1, 2]), ("n", [3, 4])]
b.synchronize = True
results in the execution graph:
<img src="../static/images/synchronize_2.png" width="160">
where the iterable inputs are selected in lock-step by index, i.e.:
(*m*, *n*) = (1, 3) and (2, 4)
for B13 and B24, resp.
itersource
The itersource feature allows you to expand a downstream iterable based on a mapping of an upstream iterable. For example:
python
a = Node(interface=A(), name="a")
b = Node(interface=B(), name="b")
b.iterables = ("m", [1, 2])
c = Node(interface=C(), name="c")
d = Node(interface=D(), name="d")
d.itersource = ("b", "m")
d.iterables = [("n", {1:[3,4], 2:[5,6]})]
my_workflow = Workflow(name="my_workflow")
my_workflow.connect([(a,b,[('out_file','in_file')]),
(b,c,[('out_file','in_file')])
(c,d,[('out_file','in_file')])
])
results in the execution graph:
<img src="../static/images/itersource_1.png" width="350">
In this example, all interfaces have input in_file and output out_file. In addition, interface B has input m and interface D has input n. A Python dictionary associates the B node input value with the downstream D node n iterable values.
This example can be extended with a summary JoinNode:
python
e = JoinNode(interface=E(), joinsource="d",
joinfield="in_files", name="e")
my_workflow.connect(d, 'out_file',
e, 'in_files')
resulting in the graph:
<img src="../static/images/itersource_2.png" width="350">
The combination of iterables, MapNode, JoinNode, synchronize and itersource enables the creation of arbitrarily complex workflow graphs. The astute workflow builder will recognize that this flexibility is both a blessing and a curse. These advanced features are handy additions to the Nipype toolkit when used judiciously.
More realistic JoinNode example
Let's consider another example where we have one node that iterates over 3 different numbers and generates random numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the Function interface to do something with those numbers, before we spit them out again.
End of explanation
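A plain-Python analogy for the `synchronize` flag (my illustration, not Nipype API): unsynchronized iterables expand like a Cartesian product over all value combinations, while `synchronize = True` pairs the lists index-by-index like `zip`.

```python
from itertools import product

m_vals, n_vals = [1, 2], [3, 4]

# default: every combination -> 4 execution branches
unsynchronized = list(product(m_vals, n_vals))

# synchronize=True: lock-step pairs -> 2 execution branches
synchronized = list(zip(m_vals, n_vals))

print(unsynchronized)
print(synchronized)
```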
res = [node for node in eg.nodes() if 'scale_data' in node.name][0].result
res.outputs
res.inputs
Explanation: Now, let's look at the input and output of the joinnode:
End of explanation
def get_data_from_id(id):
import numpy as np
return id + np.random.rand()
def scale_data(data2):
import numpy as np
return data2
def replicate(data3, nreps=2):
return data3 * nreps
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = Node(Function(input_names=['data2'],
output_names=['data_scaled'],
function=scale_data),
name='scale_data')
node3 = JoinNode(Function(input_names=['data3'],
output_names=['data_repeated'],
function=replicate),
name='replicate_data',
joinsource=node1,
joinfield=['data3'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
wf.connect(node2, 'data_scaled', node3, 'data3')
eg = wf.run()
wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
Explanation: Extending to multiple nodes
We extend the workflow by using three nodes. Note that even in this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before, the graph below shows how the execution process is set up.
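Stripped of the Nipype machinery, the data flow of this three-node graph is just a map over the iterable followed by a join. A plain-Python sketch of the same flow (using `random` as a stand-in for the per-branch computation):

```python
import random

def get_data_from_id(id):
    # each iterable branch produces one value
    return id + random.random()

def scale_data(data):
    # per-branch transformation (a pass-through here, as in the workflow above)
    return data

def replicate(data_list, nreps=2):
    # the join step aggregates all branches into one list
    return data_list * nreps

ids = [1, 2, 3]
branches = [scale_data(get_data_from_id(i)) for i in ids]  # iterable expansion
joined = replicate(branches)                               # JoinNode aggregation
print(len(joined))  # 6
```

This is only a conceptual sketch; Nipype additionally handles caching, provenance, and parallel execution of the branches.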
End of explanation
# write your solution here
# the list of all DOB
dob_subjects = ["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]
# let's start from creating Node with iterable to split all strings from the list
from nipype import Node, JoinNode, Function, Workflow
def split_dob(dob_string):
return dob_string.split()
split_node = Node(Function(input_names=["dob_string"],
output_names=["split_list"],
function=split_dob),
name="splitting")
#split_node.inputs.dob_string = "10 February 1984"
split_node.iterables = ("dob_string", dob_subjects)
# and now let's work on the date format more, independently for every element
# sometimes the second element has an extra "," that we should remove
def remove_comma(str_list):
str_list[1] = str_list[1].replace(",", "")
return str_list
cleaning_node = Node(Function(input_names=["str_list"],
output_names=["str_list_clean"],
function=remove_comma),
name="cleaning")
# now we can extract year, month, day from our list and create a ``datetime.datetime`` object
def datetime_format(date_list):
import datetime
# year is always the last
year = int(date_list[2])
#day and month can be in the first or second position
# we can use datetime.datetime.strptime to convert name of the month to integer
try:
day = int(date_list[0])
month = datetime.datetime.strptime(date_list[1], "%B").month
except(ValueError):
day = int(date_list[1])
month = datetime.datetime.strptime(date_list[0], "%B").month
# and create datetime.datetime format
return datetime.datetime(year, month, day)
datetime_node = Node(Function(input_names=["date_list"],
output_names=["datetime"],
function=datetime_format),
name="datetime")
# now we are ready to create JoinNode and sort the list of DOB
def sorting_dob(datetime_list):
datetime_list.sort()
return datetime_list
sorting_node = JoinNode(Function(input_names=["datetime_list"],
output_names=["dob_sorted"],
function=sorting_dob),
joinsource=split_node, # this is the node that used iterables for x
joinfield=['datetime_list'],
name="sorting")
# and we're ready to create workflow
ex1_wf = Workflow(name="sorting_dob")
ex1_wf.connect(split_node, "split_list", cleaning_node, "str_list")
ex1_wf.connect(cleaning_node, "str_list_clean", datetime_node, "date_list")
ex1_wf.connect(datetime_node, "datetime", sorting_node, "datetime_list")
# you can check the graph
from IPython.display import Image
ex1_wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
# and run the workflow
ex1_res = ex1_wf.run()
# you can check list of all nodes
ex1_res.nodes()
# and check the results from sorting_dob.sorting
list(ex1_res.nodes())[0].result.outputs
Explanation: Exercise 1
You have a list of the subjects' DOBs in a few different formats: ["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"], and you want to sort the list.
You can use a Node with iterables to extract the day, month and year, use datetime.datetime to convert each date to a unified format that can be compared, and a JoinNode to sort the list.
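The core parse-and-sort logic can also be sketched in plain Python, without the workflow machinery. The hypothetical helper `parse_dob` below assumes English month names for the `%B` format code:

```python
from datetime import datetime

def parse_dob(dob_string):
    """Parse one date-of-birth string whose day/month order may vary."""
    parts = dob_string.replace(",", "").split()
    year = int(parts[2])  # the year is always last
    try:
        # "10 February 1984" style: day first
        day = int(parts[0])
        month = datetime.strptime(parts[1], "%B").month
    except ValueError:
        # "March 5 1990" style: month first
        day = int(parts[1])
        month = datetime.strptime(parts[0], "%B").month
    return datetime(year, month, day)

dobs = ["10 February 1984", "March 5 1990", "April 2 1782",
        "June 6, 1988", "12 May 1992"]
sorted_dobs = sorted(parse_dob(d) for d in dobs)
print(sorted_dobs[0])  # 1782-04-02 00:00:00
```

The Nipype solution splits exactly this logic across nodes so each date is parsed in its own branch before the JoinNode sorts them.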
End of explanation |
Description:
Step 3
Step1: Load feature data set
We have previously created the labeled feature data set in the Code\2_feature_engineering.ipynb Jupyter notebook. Since the Azure Blob storage account name and account key are not passed between notebooks, you'll need your credentials here again.
Step2: Load the data and dump a short summary of the resulting DataFrame.
Step3: Prepare the Training/Testing data
A fundamental practice in machine learning is to calibrate and test your model parameters on data that has not been used to train the model. Evaluation of the model requires splitting the available data into a training portion, a calibration portion and an evaluation portion. Typically, 80% of data is used to train the model and 10% each to calibrate any parameter selection and evaluate your model.
In general random splitting can be used, but since time series data have an inherent correlation between observations. For predictive maintenance problems, a time-dependent spliting strategy is often a better approach to estimate performance. For a time-dependent split, a single point in time is chosen, the model is trained on examples up to that point in time, and validated on the examples after that point. This simulates training on current data and score data collected in the future data after the splitting point is not known. However, care must be taken on labels near the split point. In this case, feature records within 7 days of the split point can not be labeled as a failure, since that is unobserved data.
In the following code blocks, we split the data at a single point to train and evaluate this model.
Step4: Spark models require a vectorized data frame. We transform the dataset here and then split the data into a training and test set. We use this split data to train the model on 9 months of data (training data), and evaluate on the remaining 3 months (test data) going forward.
Step5: Classification models
A particualar troubling behavior in predictive maintenance is machine failures are usually rare occurrences compared to normal operation. This is fortunate for the business as maintenance and saftey issues are few, but causes an imbalance in the label distribution. This imbalance leads to poor performance as algorithms tend to classify majority class examples at the expense of minority class, since the total misclassification error is much improved when majority class is labeled correctly. This causes low recall or precision rates, although accuracy can be high. It becomes a larger problem when the cost of false alarms is very high. To help with this problem, sampling techniques such as oversampling of the minority examples can be used. These methods are not covered in this notebook. Because of this, it is also important to look at evaluation metrics other than accuracy alone.
We will build and compare two different classification model approaches
Step6: To evaluate this model, we predict the component failures over the test data set. Since the test set has been created from data the model has not been seen before, it simulates future data. The evaluation then can be generalize to how the model could perform when operationalized and used to score the data in real time.
Step7: The confusion matrix lists each true component failure in rows and the predicted value in columns. Labels numbered 0.0 corresponds to no component failures. Labels numbered 1.0 through 4.0 correspond to failures in one of the four components in the machine. As an example, the third number in the top row indicates how many days we predicted component 2 would fail, when no components actually did fail. The second number in the second row, indicates how many days we correctly predicted a component 1 failure within the next 7 days.
We read the confusion matrix numbers along the diagonal as correctly classifying the component failures. Numbers above the diagonal indicate the model incorrectly predicting a failure when non occured, and those below indicate incorrectly predicting a non-failure for the row indicated component failure.
When evaluating classification models, it is convenient to reduce the results in the confusion matrix into a single performance statistic. However, depending on the problem space, it is impossible to always use the same statistic in this evaluation. Below, we calculate four such statistics.
Accuracy
Step8: Remember that this is a simulated data set. We would expect a model built on real world data to behave very differently. The accuracy may still be close to one, but the precision and recall numbers would be much lower.
Persist the model
We'll save the latest model for use in deploying a webservice for operationalization in the next notebook. We store this local to the Jupyter notebook kernel because the model is stored in a hierarchical format that does not translate to Azure Blob storage well. | Python Code:
# import the libraries
import os
import glob
import time
# for creating pipelines and model
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler, VectorIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql.functions import col
from pyspark.sql import SparkSession
# For some data handling
import pandas as pd
import numpy as np
# For Azure blob storage access
from azure.storage.blob import BlockBlobService
from azure.storage.blob import PublicAccess
# For logging model evaluation parameters back into the
# AML Workbench run history plots.
import logging
from azureml.logging import get_azureml_logger
amllog = logging.getLogger("azureml")
amllog.level = logging.INFO
# Turn on cell level logging.
%azureml history on
%azureml history show
# Time the notebook execution.
# This will only make sense if you "Run all cells"
tic = time.time()
logger = get_azureml_logger() # logger writes to AMLWorkbench runtime view
spark = SparkSession.builder.getOrCreate()
# Telemetry
logger.log('amlrealworld.predictivemaintenance.feature_engineering','true')
Explanation: Step 3: Model Building
Using the labeled feature data set constructed in the Code/2_feature_engineering.ipynb Jupyter notebook, this notebook loads the data from the Azure Blob container and splits it into a training and test data set. We then build a machine learning model (a decision tree classifier or a random forest classifier) to predict when different components within our machine population will fail. We store the better performing model for deployment in an Azure web service in the next notebook. We will prepare and build the web service in the Code/4_operationalization.ipynb Jupyter notebook.
Note: This notebook will take about 2-4 minutes to execute all cells, depending on the compute configuration you have set up.
End of explanation
# Enter your Azure blob storage details here
ACCOUNT_NAME = "<your blob storage account name>"
# You can find the account key under the _Access Keys_ link in the
# [Azure Portal](portal.azure.com) page for your Azure storage container.
ACCOUNT_KEY = "<your blob storage account key>"
#-------------------------------------------------------------------------------------------
# The data from the feature engineering note book is stored in the feature engineering container.
CONTAINER_NAME = "featureengineering"
# Connect to your blob service
az_blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)
# We will store and read each of these data sets in blob storage in an
# Azure Storage Container on your Azure subscription.
# See https://github.com/Azure/ViennaDocs/blob/master/Documentation/UsingBlobForStorage.md
# for details.
# This is the final feature data file.
FEATURES_LOCAL_DIRECT = 'featureengineering_files.parquet'
# This is where we store the final model data file.
LOCAL_DIRECT = 'model_result.parquet'
Explanation: Load feature data set
We have previously created the labeled feature data set in the Code/2_feature_engineering.ipynb Jupyter notebook. Since the Azure Blob storage account name and account key are not passed between notebooks, you'll need your credentials here again.
End of explanation
# load the previously created final dataset into the workspace
# create a local path where we store results
if not os.path.exists(FEATURES_LOCAL_DIRECT):
os.makedirs(FEATURES_LOCAL_DIRECT)
print('DONE creating a local directory!')
# download the entire parquet result folder to local path for a new run
for blob in az_blob_service.list_blobs(CONTAINER_NAME):
if FEATURES_LOCAL_DIRECT in blob.name:
local_file = os.path.join(FEATURES_LOCAL_DIRECT, os.path.basename(blob.name))
az_blob_service.get_blob_to_path(CONTAINER_NAME, blob.name, local_file)
feat_data = spark.read.parquet(FEATURES_LOCAL_DIRECT)
feat_data.limit(10).toPandas().head(10)
type(feat_data)
Explanation: Load the data and dump a short summary of the resulting DataFrame.
End of explanation
# define list of input columns for downstream modeling
# We'll use the known label, and key variables.
label_var = ['label_e']
key_cols =['machineID','dt_truncated']
# Then get the remaing feature names from the data
input_features = feat_data.columns
# We'll use the known label, key variables and
# a few extra columns we won't need.
remove_names = label_var + key_cols + ['failure','model_encoded','model' ]
# Remove the extra names if they are in the input_features list
input_features = [x for x in input_features if x not in set(remove_names)]
input_features
Explanation: Prepare the Training/Testing data
A fundamental practice in machine learning is to calibrate and test your model parameters on data that has not been used to train the model. Evaluation of the model requires splitting the available data into a training portion, a calibration portion and an evaluation portion. Typically, 80% of data is used to train the model and 10% each to calibrate any parameter selection and evaluate your model.
In general, random splitting can be used, but time series data have an inherent correlation between observations. For predictive maintenance problems, a time-dependent splitting strategy is often a better approach to estimate performance. For a time-dependent split, a single point in time is chosen, the model is trained on examples up to that point in time, and validated on the examples after that point. This simulates training on current data and scoring on data collected in the future, since data after the splitting point is not known at training time. However, care must be taken with labels near the split point. In this case, feature records within 7 days of the split point cannot be labeled as a failure, since that is unobserved data.
In the following code blocks, we split the data at a single point to train and evaluate this model.
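The split rule itself is easy to express outside Spark. A minimal plain-Python sketch on toy records (hypothetical dates and labels), including the 7-day label blackout before the split point:

```python
from datetime import date, timedelta

# toy feature records: (observation_date, label)
records = [
    (date(2015, 9, 1), 0), (date(2015, 10, 15), 0),
    (date(2015, 11, 2), 1), (date(2015, 12, 20), 0),
]
split_date = date(2015, 10, 30)

# train strictly before the split, test on or after it
train = [r for r in records if r[0] < split_date]
test = [r for r in records if r[0] >= split_date]

# records within 7 days before the split cannot carry a reliable failure label
cutoff = split_date - timedelta(days=7)
usable_train = [r for r in train if r[0] < cutoff]
print(len(train), len(test), len(usable_train))  # 2 2 2
```

The Spark code below applies the same date comparison with DataFrame filters on the `dt_truncated` column.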
End of explanation
# assemble features
va = VectorAssembler(inputCols=(input_features), outputCol='features')
feat_data = va.transform(feat_data).select('machineID','dt_truncated','label_e','features')
# set maxCategories so features with > 10 distinct values are treated as continuous.
featureIndexer = VectorIndexer(inputCol="features",
outputCol="indexedFeatures",
maxCategories=10).fit(feat_data)
# fit on whole dataset to include all labels in index
labelIndexer = StringIndexer(inputCol="label_e", outputCol="indexedLabel").fit(feat_data)
# split the data into train/test based on date
split_date = "2015-10-30"
training = feat_data.filter(feat_data.dt_truncated < split_date)
testing = feat_data.filter(feat_data.dt_truncated >= split_date)
print(training.count())
print(testing.count())
Explanation: Spark models require a vectorized data frame. We transform the dataset here and then split the data into a training and test set. We use this split data to train the model on 9 months of data (training data), and evaluate on the remaining 3 months (test data) going forward.
End of explanation
model_type = 'RandomForest' # Use 'DecisionTree', or 'RandomForest'
# train a model.
if model_type == 'DecisionTree':
model = DecisionTreeClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures",
# Maximum depth of the tree. (>= 0)
# E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.'
maxDepth=15,
# Max number of bins for discretizing continuous features.
# Must be >=2 and >= number of categories for any categorical feature.
maxBins=32,
# Minimum number of instances each child must have after split.
# If a split causes the left or right child to have fewer than
# minInstancesPerNode, the split will be discarded as invalid. Should be >= 1.
minInstancesPerNode=1,
# Minimum information gain for a split to be considered at a tree node.
minInfoGain=0.0,
# Criterion used for information gain calculation (case-insensitive).
# Supported options: entropy, gini')
impurity="gini")
##=======================================================================================================================
#elif model_type == 'GBTClassifier':
# cls_mthd = GBTClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures")
##=======================================================================================================================
else:
model = RandomForestClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures",
# Passed to DecisionTreeClassifier
maxDepth=15,
maxBins=32,
minInstancesPerNode=1,
minInfoGain=0.0,
impurity="gini",
# Number of trees to train (>= 1)
numTrees=50,
# The number of features to consider for splits at each tree node.
# Supported options: auto, all, onethird, sqrt, log2, (0.0-1.0], [1-n].
featureSubsetStrategy="sqrt",
# Fraction of the training data used for learning each
# decision tree, in range (0, 1].'
subsamplingRate = 0.632)
# chain indexers and model in a Pipeline
pipeline_cls_mthd = Pipeline(stages=[labelIndexer, featureIndexer, model])
# train model. This also runs the indexers.
model_pipeline = pipeline_cls_mthd.fit(training)
Explanation: Classification models
A particularly troubling behavior in predictive maintenance is that machine failures are usually rare occurrences compared to normal operation. This is fortunate for the business as maintenance and safety issues are few, but it causes an imbalance in the label distribution. This imbalance leads to poor performance, as algorithms tend to classify majority class examples at the expense of the minority class, since the total misclassification error is much improved when the majority class is labeled correctly. This causes low recall or precision rates, although accuracy can be high. It becomes a larger problem when the cost of false alarms is very high. To help with this problem, sampling techniques such as oversampling of the minority examples can be used. These methods are not covered in this notebook. Because of this, it is also important to look at evaluation metrics other than accuracy alone.
We will build and compare two different classification model approaches:
Decision Tree Classifier: Decision trees and their ensembles are popular methods for the machine learning tasks of classification and regression. Decision trees are widely used since they are easy to interpret, handle categorical features, extend to the multiclass classification setting, do not require feature scaling, and are able to capture non-linearities and feature interactions.
Random Forest Classifier: A random forest is an ensemble of decision trees. Random forests combine many decision trees in order to reduce the risk of overfitting. Tree ensemble algorithms such as random forests and boosting are among the top performers for classification and regression tasks.
We will compare these models in the AML Workbench runs screen. The next code block creates the model. You can choose between a DecisionTree or RandomForest by setting the 'model_type' variable. We have also included a series of model hyperparameters to guide your exploration of the model space.
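Although oversampling is not used in this notebook, the idea is simple to sketch in plain Python: duplicate minority-class examples (with replacement) until the label counts balance. This is a toy illustration, not the Spark implementation:

```python
import random

def oversample(examples, labels, minority_label):
    """Resample minority-class rows with replacement up to the majority count."""
    minority = [i for i, y in enumerate(labels) if y == minority_label]
    majority = [i for i, y in enumerate(labels) if y != minority_label]
    resampled = [random.choice(minority) for _ in range(len(majority))]
    keep = majority + resampled
    return [examples[i] for i in keep], [labels[i] for i in keep]

X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [0, 0, 0, 0, 1]
X_bal, y_bal = oversample(X, y, minority_label=1)
print(y_bal.count(0), y_bal.count(1))  # 4 4
```

Note that resampling must be applied to the training split only, never to the held-out test data, or the evaluation becomes optimistic.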
End of explanation
# make predictions. The Pipeline does all the same operations on the test data
predictions = model_pipeline.transform(testing)
# Create the confusion matrix for the multiclass prediction results
# This result assumes a decision boundary of p = 0.5
conf_table = predictions.stat.crosstab('indexedLabel', 'prediction')
confuse = conf_table.toPandas()
confuse.head()
Explanation: To evaluate this model, we predict the component failures over the test data set. Since the test set has been created from data the model has not seen before, it simulates future data. The evaluation can then be generalized to how the model could perform when operationalized and used to score the data in real time.
End of explanation
# select (prediction, true label) and compute test error
# True positives - diagonal failure terms
tp = confuse['1.0'][1]+confuse['2.0'][2]+confuse['3.0'][3]+confuse['4.0'][4]
# False positives: all failure terms minus true positives
fp = np.sum(np.sum(confuse[['1.0', '2.0','3.0','4.0']])) - tp
# True negatives
tn = confuse['0.0'][0]
# False negatives: total of the non-failure column minus TN
fn = np.sum(np.sum(confuse[['0.0']])) - tn
# Accuracy is diagonal/total
acc_n = tn + tp
acc_d = np.sum(np.sum(confuse[['0.0','1.0', '2.0','3.0','4.0']]))
acc = acc_n/acc_d
# Calculate precision and recall.
prec = tp/(tp+fp)
rec = tp/(tp+fn)
# Print the evaluation metrics to the notebook
print("Accuracy = %g" % acc)
print("Precision = %g" % prec)
print("Recall = %g" % rec )
print("F1 = %g" % (2.0 * prec * rec/(prec + rec)))
print("")
# logger writes information back into the AML Workbench run time page.
# Each title (i.e. "Model Accuracy") can be shown as a graph to track
# how the metric changes between runs.
logger.log("Model Accuracy", (acc))
logger.log("Model Precision", (prec))
logger.log("Model Recall", (rec))
logger.log("Model F1", (2.0 * prec * rec/(prec + rec)))
importances = model_pipeline.stages[2].featureImportances
importances
Explanation: The confusion matrix lists each true component failure in rows and the predicted value in columns. Labels numbered 0.0 corresponds to no component failures. Labels numbered 1.0 through 4.0 correspond to failures in one of the four components in the machine. As an example, the third number in the top row indicates how many days we predicted component 2 would fail, when no components actually did fail. The second number in the second row, indicates how many days we correctly predicted a component 1 failure within the next 7 days.
We read the confusion matrix numbers along the diagonal as correctly classifying the component failures. Numbers above the diagonal indicate the model incorrectly predicting a failure when none occurred, and those below indicate incorrectly predicting a non-failure for the component failure indicated by the row.
When evaluating classification models, it is convenient to reduce the results in the confusion matrix into a single performance statistic. However, depending on the problem space, no single statistic is appropriate for every evaluation. Below, we calculate four such statistics.
Accuracy: reports how often we correctly predicted the labeled data. Unfortunately, when there is a class imbalance (a large number of one of the labels relative to others), this measure is biased towards the largest class; in this case, the non-failure days.
Because of the class imbalance inherent in predictive maintenance problems, it is better to look at the remaining statistics instead. Here positive predictions indicate a failure.
Precision: Precision is a measure of how well the model classifies the truly positive samples. Precision depends on falsely classifying negative days as positive.
Recall: Recall is a measure of how well the model can find the positive samples. Recall depends on falsely classifying positive days as negative.
F1: F1 considers both the precision and the recall. F1 score is the harmonic average of precision and recall. An F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.
These metrics make the most sense for binary classifiers, though they are still useful for comparison in our multiclass setting. Below we calculate these evaluation statistics for the selected classifier, and post them back to the AML Workbench run time page for tracking between experiments.
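Collapsed to a binary failure/no-failure view, the four statistics reduce to a few lines. A self-contained sketch with made-up counts (not the values from this notebook's confusion matrix):

```python
# binary confusion-matrix counts (hypothetical values)
tp, fp, tn, fn = 80, 10, 900, 20

accuracy = (tp + tn) / (tp + fp + tn + fn)   # correct / all
precision = tp / (tp + fp)                   # of predicted failures, how many were real
recall = tp / (tp + fn)                      # of real failures, how many were caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```

Note how accuracy stays high here even though a fifth of the true failures are missed; that gap is exactly why recall and F1 matter for imbalanced problems.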
End of explanation
# save model
model_pipeline.write().overwrite().save(os.environ['AZUREML_NATIVE_SHARE_DIRECTORY']+'pdmrfull.model')
print("Model saved")
# Time the notebook execution.
# This will only make sense if you "Run All" cells
toc = time.time()
print("Full run took %.2f minutes" % ((toc - tic)/60))
logger.log("Model Building Run time", ((toc - tic)/60))
Explanation: Remember that this is a simulated data set. We would expect a model built on real world data to behave very differently. The accuracy may still be close to one, but the precision and recall numbers would be much lower.
Persist the model
We'll save the latest model for use in deploying a webservice for operationalization in the next notebook. We store this locally to the Jupyter notebook kernel because the model is stored in a hierarchical format that does not translate well to Azure Blob storage.
End of explanation |
Description:
Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al.'s mTRF toolbox in
MATLAB
Step1: sphinx_gallery_thumbnail_number = 3
Step2: Load the data from the publication
First we will load the data collected in
Step3: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
Step4: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from
Step5: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the for the
Step6: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
Step7: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5_ from | Python Code:
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Nicolas Barascud <nicolas.barascud@ens.fr>
#
# License: BSD-3-Clause
Explanation: Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al.'s mTRF toolbox in
MATLAB :footcite:CrosseEtAl2016. We will show how the
:class:mne.decoding.ReceptiveField class
can perform a similar function along with scikit-learn. We will first fit a
linear encoding model using the continuously-varying speech envelope to predict
activity of a 128 channel EEG system. Then, we will take the reverse approach
and try to predict the speech envelope from the EEG (known in the literature
as a decoding model, or simply stimulus reconstruction).
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from os.path import join
import mne
from mne.decoding import ReceptiveField
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
Explanation: sphinx_gallery_thumbnail_number = 3
End of explanation
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage('biosemi128')
info = mne.create_info(montage.ch_names, sfreq, 'eeg').set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
Explanation: Load the data from the publication
First we will load the data collected in :footcite:CrosseEtAl2016.
In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
End of explanation
# Define the delays that we will use in the receptive field
tmin, tmax = -.2, .4
# Initialize the model
rf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],
estimator=1., scoring='corrcoef')
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
rf.fit(speech[train], Y[train])
scores[ii] = rf.score(speech[test], Y[test])
# coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots()
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls='--', color='r')
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
mne.viz.tight_layout()
Explanation: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
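Under the hood, each EEG channel is regressed on time-lagged copies of the stimulus. Building such a lagged design matrix can be sketched in plain Python (zero-padding at the edges is one simplifying assumption; `ReceptiveField` handles edges more carefully):

```python
def lagged_design(x, delays):
    """Row t holds x[t - d] for every delay d, zero-padded outside the signal."""
    n = len(x)
    rows = []
    for t in range(n):
        rows.append([x[t - d] if 0 <= t - d < n else 0.0 for d in delays])
    return rows

x = [1.0, 2.0, 3.0, 4.0]
X = lagged_design(x, delays=[0, 1, 2])
print(X[2])  # [3.0, 2.0, 1.0]
```

The linear model then fits one weight per delay (and per feature), which is exactly the coefficient-versus-delay profile plotted later in this example.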
End of explanation
# Print mean coefficients across all time delays / channels (see Fig 1)
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8))
max_coef = mean_coefs.max()
ax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',
vmin=-max_coef, vmax=max_coef, shading='gouraud')
ax.axvline(time_plot, ls='--', color='k', lw=2)
ax.set(xlabel='Delay (s)', ylabel='Channel', title="Mean Model\nCoefficients",
xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],
xticks=np.arange(tmin, tmax + .2, .2))
plt.setp(ax.get_xticklabels(), rotation=45)
mne.viz.tight_layout()
# Make a topographic map of coefficients for a given delay (see Fig 2C)
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots()
mne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,
vmin=-max_coef, vmax=max_coef)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
mne.viz.tight_layout()
Explanation: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from
:footcite:CrosseEtAl2016.
End of explanation
# We use the same lags as in :footcite:`CrosseEtAl2016`. Negative lags now
# index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -.2, 0.
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section and :footcite:`HaufeEtAl2014`).
# We'll use a ridge regression estimator with an alpha value similar to
# Crosse et al.
sr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,
estimator=1e4, scoring='corrcoef', patterns=True)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
sr.fit(Y[train], speech[train])
scores[ii] = sr.score(Y[test], speech[test])[0]
# coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
coefs[ii] = sr.coef_[0, :, :]
patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
Explanation: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the
:class:mne.decoding.ReceptiveField class as we try to predict the stimulus
activity from the EEG data. This is known in the literature as a decoding, or
stimulus reconstruction model :footcite:CrosseEtAl2016.
A decoding model aims to find the
relationship between the speech signal and a time-delayed version of the EEG.
This can be useful as we exploit all of the available neural data in a
multivariate context, compared to the encoding case which treats each M/EEG
channel as an independent feature. Therefore, decoding models might provide a
better quality of fit (at the expense of not controlling for stimulus
covariance), especially for low SNR stimuli such as speech.
End of explanation
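The ridge penalty behind `estimator=1e4` has a simple closed form; for a single feature with no intercept it reduces to one line (an illustrative sketch, not the multivariate solver `ReceptiveField` uses):

```python
def ridge_1d(x, y, alpha):
    # w = sum(x*y) / (sum(x*x) + alpha): larger alpha shrinks w toward zero.
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + alpha)
```

With `alpha=0` this is ordinary least squares; for y = 2x it recovers the slope 2.0 exactly, while a large `alpha` shrinks the estimate toward zero.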
y_pred = sr.predict(Y[test])
time = np.linspace(0, 2., 5 * int(sfreq))
fig, ax = plt.subplots(figsize=(8, 4))
ln1 = ax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],
              color='grey', lw=2, ls='--')
ln2 = ax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)
ax.legend([ln1[0], ln2[0]], ['Envelope', 'Reconstruction'], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel('Time (s)')
mne.viz.tight_layout()
Explanation: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
End of explanation
time_plot = (-.140, -.125) # To average between two timepoints.
ix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),
np.argmin(np.abs(time_plot[1] - times)))
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),
pos=info, axes=ax[0], show=False,
vmin=-max_coef, vmax=max_coef)
ax[0].set(title="Model coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),
pos=info, axes=ax[1],
show=False, vmin=-max_patterns, vmax=max_patterns)
ax[1].set(title="Inverse-transformed coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.tight_layout()
Explanation: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5 from
:footcite:CrosseEtAl2016. The
decoding model weights reflect the channels that contribute most toward
reconstructing the stimulus signal, but are not directly interpretable in a
neurophysiological sense. Here we also look at the coefficients obtained
via an inversion procedure :footcite:HaufeEtAl2014, which have a more
straightforward
interpretation as their value (and sign) directly relates to the stimulus
signal's strength (and effect direction).
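For a single extracted component, the inversion of :footcite:HaufeEtAl2014 boils down to the covariance between each channel and the model output, scaled by the output variance — a toy sketch with a hypothetical helper, not MNE's internal code:

```python
def haufe_pattern(x, y_hat):
    # pattern = cov(x, y_hat) / var(y_hat); its sign and size track how
    # strongly the reconstructed signal is expressed in this channel.
    n = len(x)
    mx, my = sum(x) / n, sum(y_hat) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y_hat)) / n
    var = sum((b - my) ** 2 for b in y_hat) / n
    return cov / var
```

A channel identical to the output gets pattern 1.0; a channel carrying the output at half the amplitude gets 0.5, regardless of the decoding weights.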
End of explanation |
9,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the original paper, here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
<img src='assets/svhn_dcgan.png' width=80% />
So, our goal is to create a DCGAN that can generate new, realistic-looking images of house numbers. We'll go through the following steps to do this
Step1: Getting the data
Here you can download the SVHN dataset. It's a dataset built-in to the PyTorch datasets library. We can load in training data, transform it into Tensor datatypes, then create dataloaders to batch our data into a desired size.
Step2: Visualize the Data
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real, training images that we'll pass to the discriminator. Notice that each image has one associated, numerical label.
Step3: Pre-processing
Step5: Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
Discriminator
Here you'll build the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers.
* The inputs to the discriminator are 32x32x3 tensor images
* You'll want a few convolutional, hidden layers
* Then a fully connected layer for the output; as before, we want a sigmoid output, but we'll add that in the loss function, BCEWithLogitsLoss, later
<img src='assets/conv_discriminator.png' width=80%/>
For the depths of the convolutional layers I suggest starting with 32 filters in the first layer, then double that depth as you add layers (to 64, 128, etc.). Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpooling layers.
You'll also want to use batch normalization with nn.BatchNorm2d on each layer except the first convolutional layer and final, linear output layer.
Helper conv function
In general, each layer should look something like convolution > batch norm > leaky ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.
Note
Step7: Generator
Next, you'll build the generator network. The input will be our noise vector z, as before. And, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
<img src='assets/conv_generator.png' width=80% />
What's new here is we'll use transpose convolutional layers to create our new images.
* The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x512.
* Then, we use batch normalization and a leaky ReLU activation.
* Next is a series of transpose convolutional layers, where you typically halve the depth and double the width and height of the previous layer.
* And, we'll apply batch normalization and ReLU to all but the last of these hidden layers, where we will just apply a tanh activation.
Helper deconv function
For each of these layers, the general scheme is transpose convolution > batch norm > ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a transpose convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.
Note
Step8: Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
Step9: Training on GPU
Check if you can train on GPU. If you can, set this as a variable and move your models to GPU.
Later, we'll also move any inputs our models and loss functions see (real_images, z, and ground truth labels) to GPU as well.
Step10: Discriminator and Generator Losses
Now we need to calculate the losses. And this will be exactly the same as before.
Discriminator Losses
For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss.
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
The losses will be binary cross entropy loss with logits, which we can get with BCEWithLogitsLoss. This combines a sigmoid activation function and binary cross entropy loss in one function.
For the real images, we want D(real_images) = 1. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. The discriminator loss for the fake data is similar. We want D(fake_images) = 0, where the fake images are the generator output, fake_images = G(z).
Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get D(fake_images) = 1. In this case, the labels are flipped to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
Step11: Optimizers
Not much new here, but notice how I am using a small learning rate and custom parameters for the Adam optimizers, This is based on some research into DCGAN model convergence.
Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Step12: Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions real_loss and fake_loss to help us calculate the discriminator losses in all of the following cases.
Discriminator training
Compute the discriminator loss on real, training images
Generate fake images
Compute the discriminator loss on fake, generated images
Add up real and fake loss
Perform backpropagation + an optimization step to update the discriminator's weights
Generator training
Generate fake images
Compute the discriminator loss on fake images, using flipped labels!
Perform backpropagation + an optimization step to update the generator's weights
Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
Evaluation mode
Notice that, when we call our generator to create the samples to display, we set our model to evaluation mode
Step13: Training loss
Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
Step14: Generator samples from training
Here we can view samples of images from the generator. We'll look at the images we saved during training. | Python Code:
# import libraries
import matplotlib.pyplot as plt
import numpy as np
import pickle as pkl
%matplotlib inline
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the original paper, here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
<img src='assets/svhn_dcgan.png' width=80% />
So, our goal is to create a DCGAN that can generate new, realistic-looking images of house numbers. We'll go through the following steps to do this:
* Load in and pre-process the house numbers dataset
* Define discriminator and generator networks
* Train these adversarial networks
* Visualize the loss over time and some sample, generated images
Deeper Convolutional Networks
Since this dataset is more complex than our MNIST data, we'll need a deeper network to accurately identify patterns in these images and be able to generate new ones. Specifically, we'll use a series of convolutional or transpose convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get these convolutional networks to train.
Besides these changes in network structure, training the discriminator and generator networks should be the same as before. That is, the discriminator will alternate training on real and fake (generated) images, and the generator will aim to trick the discriminator into thinking that its generated images are real!
End of explanation
import torch
from torchvision import datasets
from torchvision import transforms
# Tensor transform
transform = transforms.ToTensor()
# SVHN training datasets
svhn_train = datasets.SVHN(root='data/', split='train', download=True, transform=transform)
batch_size = 128
num_workers = 0
# build DataLoaders for SVHN dataset
train_loader = torch.utils.data.DataLoader(dataset=svhn_train,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers)
Explanation: Getting the data
Here you can download the SVHN dataset. It's a dataset built-in to the PyTorch datasets library. We can load in training data, transform it into Tensor datatypes, then create dataloaders to batch our data into a desired size.
End of explanation
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
ax.imshow(np.transpose(images[idx], (1, 2, 0)))
# print out the correct label for each image
# .item() gets the value contained in a Tensor
ax.set_title(str(labels[idx].item()))
Explanation: Visualize the Data
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real, training images that we'll pass to the discriminator. Notice that each image has one associated, numerical label.
End of explanation
# current range
img = images[0]
print('Min: ', img.min())
print('Max: ', img.max())
# helper scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1)
    # scale to feature_range and return scaled x
    min, max = feature_range
    x = x * (max - min) + min
    return x
# scaled range
scaled_img = scale(img)
print('Scaled min: ', scaled_img.min())
print('Scaled max: ', scaled_img.max())
Explanation: Pre-processing: scaling from -1 to 1
We need to do a bit of pre-processing; we know that the output of our tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
End of explanation
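In plain Python, the intended mapping for `feature_range=(-1, 1)`, assuming inputs already lie in [0, 1], is the affine rescaling below (a standalone sketch of the `scale` helper above):

```python
def rescale(x, lo=-1.0, hi=1.0):
    # Affine map [0, 1] -> [lo, hi]: x' = x * (hi - lo) + lo
    return x * (hi - lo) + lo
```

So 0 maps to -1, 1 maps to +1, and 0.5 maps to 0 — exactly the range a tanh generator produces.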
import torch.nn as nn
import torch.nn.functional as F
# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a convolutional layer, with optional batch normalization."""
layers = []
conv_layer = nn.Conv2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append conv layer
layers.append(conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
# using Sequential container
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim=32):
super(Discriminator, self).__init__()
# complete init function
def forward(self, x):
# complete forward function
return x
Explanation: Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
Discriminator
Here you'll build the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers.
* The inputs to the discriminator are 32x32x3 tensor images
* You'll want a few convolutional, hidden layers
* Then a fully connected layer for the output; as before, we want a sigmoid output, but we'll add that in the loss function, BCEWithLogitsLoss, later
<img src='assets/conv_discriminator.png' width=80%/>
For the depths of the convolutional layers I suggest starting with 32 filters in the first layer, then double that depth as you add layers (to 64, 128, etc.). Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpooling layers.
You'll also want to use batch normalization with nn.BatchNorm2d on each layer except the first convolutional layer and final, linear output layer.
Helper conv function
In general, each layer should look something like convolution > batch norm > leaky ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.
Note: It is also suggested that you use a kernel_size of 4 and a stride of 2 for strided convolutions.
End of explanation
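One way to sanity-check the intended 32 → 16 → 8 → 4 downsampling before writing the class is the standard strided-convolution output-size formula (a quick helper for reasoning about shapes, not part of the exercise solution):

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1
```

With kernel_size=4, stride=2, padding=1 each layer exactly halves the spatial size, which is why a final linear layer would see 4x4 feature maps after three such convolutions on 32x32 inputs.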
# helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a transposed-convolutional layer, with optional batch normalization."""
    ## create a sequence of transpose + optional batch norm layers
    layers = []
    transpose_conv_layer = nn.ConvTranspose2d(in_channels, out_channels,
                                              kernel_size, stride, padding, bias=False)
    layers.append(transpose_conv_layer)
    if batch_norm:
        # append batchnorm layer
        layers.append(nn.BatchNorm2d(out_channels))
    return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim=32):
super(Generator, self).__init__()
# complete init function
def forward(self, x):
# complete forward function
return x
Explanation: Generator
Next, you'll build the generator network. The input will be our noise vector z, as before. And, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
<img src='assets/conv_generator.png' width=80% />
What's new here is we'll use transpose convolutional layers to create our new images.
* The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x512.
* Then, we use batch normalization and a leaky ReLU activation.
* Next is a series of transpose convolutional layers, where you typically halve the depth and double the width and height of the previous layer.
* And, we'll apply batch normalization and ReLU to all but the last of these hidden layers, where we will just apply a tanh activation.
Helper deconv function
For each of these layers, the general scheme is transpose convolution > batch norm > ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a transpose convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.
Note: It is also suggested that you use a kernel_size of 4 and a stride of 2 for transpose convolutions.
End of explanation
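The mirror-image size check for the generator uses the transposed-convolution output formula (again just a helper for reasoning about shapes, not part of the exercise solution):

```python
def deconv_out(size, kernel=4, stride=2, pad=1):
    # (size - 1) * stride - 2*pad + kernel
    return (size - 1) * stride - 2 * pad + kernel
```

With kernel_size=4, stride=2, padding=1 each transpose layer exactly doubles the spatial size, taking the reshaped 4x4 noise volume back up to 32x32.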
# define hyperparams
conv_dim = 32
z_size = 100
# define discriminator and generator
D = Discriminator(conv_dim)
G = Generator(z_size=z_size, conv_dim=conv_dim)
print(D)
print()
print(G)
Explanation: Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
End of explanation
train_on_gpu = torch.cuda.is_available()
if train_on_gpu:
# move models to GPU
G.cuda()
D.cuda()
print('GPU available for training. Models moved to GPU')
else:
print('Training on CPU.')
Explanation: Training on GPU
Check if you can train on GPU. If you can, set this as a variable and move your models to GPU.
Later, we'll also move any inputs our models and loss functions see (real_images, z, and ground truth labels) to GPU as well.
End of explanation
def real_loss(D_out, smooth=False):
batch_size = D_out.size(0)
# label smoothing
if smooth:
# smooth, real labels = 0.9
labels = torch.ones(batch_size)*0.9
else:
labels = torch.ones(batch_size) # real labels = 1
# move labels to GPU if available
if train_on_gpu:
labels = labels.cuda()
# binary cross entropy with logits loss
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
batch_size = D_out.size(0)
labels = torch.zeros(batch_size) # fake labels = 0
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses. And this will be exactly the same as before.
Discriminator Losses
For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss.
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
The losses will be binary cross entropy loss with logits, which we can get with BCEWithLogitsLoss. This combines a sigmoid activation function and binary cross entropy loss in one function.
For the real images, we want D(real_images) = 1. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. The discriminator loss for the fake data is similar. We want D(fake_images) = 0, where the fake images are the generator output, fake_images = G(z).
Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get D(fake_images) = 1. In this case, the labels are flipped to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
End of explanation
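Numerically, what BCEWithLogitsLoss computes per element can be sketched in pure Python using the numerically stable formulation (an illustration of the math, not PyTorch's actual code):

```python
import math

def bce_with_logits(logit, label):
    # Stable form of -[z*log(sigmoid(x)) + (1-z)*log(1-sigmoid(x))]:
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    return max(logit, 0.0) - logit * label + math.log(1.0 + math.exp(-abs(logit)))
```

A logit of 0 (a maximally unsure discriminator) costs log 2 ≈ 0.693 whichever label is correct, while a confidently correct logit costs almost nothing — which is why flipping the labels in the generator loss pushes G to produce images D scores highly.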
import torch.optim as optim
# params
lr = 0.0002    # learning rate suggested in the DCGAN paper
beta1 = 0.5    # Adam momentum term suggested in the DCGAN paper
beta2 = 0.999  # Adam default
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
Explanation: Optimizers
Not much new here, but notice how I am using a small learning rate and custom parameters for the Adam optimizers, This is based on some research into DCGAN model convergence.
Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
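To see where beta1 and beta2 enter, here is one Adam update for a single scalar parameter written out in pure Python (an illustrative sketch, not torch.optim internals; the defaults mirror the values used here):

```python
import math

def adam_step(w, g, m, v, t, lr=0.0002, b1=0.5, b2=0.999, eps=1e-8):
    # One Adam update for scalar parameter w with gradient g at step t.
    # b1 controls how much past gradient direction is remembered.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)  # bias correction for the running mean
    v_hat = v / (1 - b2 ** t)  # bias correction for the running variance
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

On the very first step the bias corrections make the update size essentially lr times the gradient sign, so a small lr directly bounds how fast either network can move.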
import pickle as pkl
# training hyperparams
num_epochs = 30
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 300
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
for epoch in range(num_epochs):
for batch_i, (real_images, _) in enumerate(train_loader):
batch_size = real_images.size(0)
# important rescaling step
real_images = scale(real_images)
# ============================================
# TRAIN THE DISCRIMINATOR
# ============================================
d_optimizer.zero_grad()
# 1. Train with real images
# Compute the discriminator losses on real images
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
# 2. Train with fake images
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
# move x to GPU, if available
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
# add up loss and perform backprop
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# =========================================
# TRAIN THE GENERATOR
# =========================================
g_optimizer.zero_grad()
# 1. Train with fake images and flipped labels
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
# using flipped labels!
D_fake = D(fake_images)
g_loss = real_loss(D_fake) # use real loss to flip labels
# perform backprop
g_loss.backward()
g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, num_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# generate and save sample, fake images
G.eval() # for generating samples
if train_on_gpu:
fixed_z = fixed_z.cuda()
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions real_loss and fake_loss to help us calculate the discriminator losses in all of the following cases.
Discriminator training
Compute the discriminator loss on real, training images
Generate fake images
Compute the discriminator loss on fake, generated images
Add up real and fake loss
Perform backpropagation + an optimization step to update the discriminator's weights
Generator training
Generate fake images
Compute the discriminator loss on fake images, using flipped labels!
Perform backpropagation + an optimization step to update the generator's weights
Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
Evaluation mode
Notice that, when we call our generator to create the samples to display, we set our model to evaluation mode: G.eval(). That's so the batch normalization layers will use the population statistics rather than the batch statistics (as they do during training), and so dropout layers will operate in eval() mode; not turning off any nodes for generating samples.
End of explanation
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
End of explanation
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img +1)*255 / (2)).astype(np.uint8) # rescale to pixel range (0-255)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
_ = view_samples(-1, samples)
Explanation: Generator samples from training
Here we can view samples of images from the generator. We'll look at the images we saved during training.
End of explanation |
9,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 6.4.3
Theis wells introduction
Theis considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent.
The solution may be obtained by straightforward Laplace transformation and looking up the result in the Laplace inversion table. It reads
$$ s(r, t) = \frac Q {4 \pi kD} W(u),\,\,\,\, u = \frac {r^2 S} {4 kD t}$$
where W(..) is the so-called Theis well function, which is actually equal to the mathematical exponential integral
$$ W(z) = \mathtt{exp1}(z) = \intop _z ^\infty \frac {e^{-y}} {y} dy $$
The exponential integral lives in scipy special as the function $\mathtt{exp1}(z)$
After importing this function from the module scipy.special we can use exp1(u)
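If scipy were not at hand, exp1 could be checked against its convergent series $W(u) = -\gamma - \ln u + \sum_{n \ge 1} (-1)^{n+1} \frac{u^n}{n \cdot n!}$ in pure Python (a sanity-check sketch, not a replacement for scipy):

```python
import math

def W_series(u, n_terms=30):
    # Convergent series expansion of the exponential integral exp1
    # (the Theis well function).
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    sign, fact = 1.0, 1.0
    for n in range(1, n_terms + 1):
        fact *= n
        total += sign * u ** n / (n * fact)
        sign = -sign
    return total
```

For example W_series(1.0) reproduces the tabulated value exp1(1) ≈ 0.21938.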
Step1: The first thing to note is that the drawdown depends on the combination of parameters contained in $u$. Any combination of parameters yielding the same $u$ will have the same drawdown, even though these situations may intuitively seem wide apart.
All possible outcomes are, therefore, represented by exp1, which depends on a single parameter, $u$. The curve is called a type curve. However, we generally plot exp1$(u)$ versus $1/u$ instead of $u$. The reason is that $1/u$ is proportional to time, which makes the type curve more intuitive as it is then a picture of drawdown versus (scaled) time.
Step2: The plot of W(u) vs u is simply not intuitive
Step3: Before computers became ubiquitous
Before computers became ubiquitous (all around), one had to compute $u = \frac {r^2 S} {4 kD t}$ and then look up $W(u)$ in a table or read it from the type curve. Clearly this was much more work than using a modern computer and a package that has the exponential integral on board.
A linear y scale shows clearly that the drawdown increases forever
As the water from a well in an infinite aquifer without any head boundaries can only come from storage, the drawdown must increase indefinitely. This is clearly shown on the curve below. The drawdown after some time starts to increase and then becomes a straight line on a logarithmic time scale. This means that the rate at which the drawdown increases slows down continuously, but it never becomes zero. In fact the water is taken from storage farther and farther away from the well. | Python Code:
from scipy.special import exp1
import numpy as np
import matplotlib.pyplot as plt
Explanation: Section 6.4.3
Theis wells introduction
Theis considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent.
The solution may be obtained by straightforward Laplace transformation and looking up the result in the Laplace inversion table. It reads
$$ s(r, t) = \frac Q {4 \pi kD} W(u),\,\,\,\, u = \frac {r^2 S} {4 kD t}$$
where W(..) is the so-called Theis well function, which is actually equal to the mathematical exponential integral
$$ W(z) = \mathtt{exp1}(z) = \intop _z ^\infty \frac {e^{-y}} {y} dy $$
The exponential integral lives in scipy special as the function $\mathtt{exp1}(z)$
After importing this function from the module scipy.special we can use exp1(u)
End of explanation
u = np.logspace(-6, 1, 71)
plt.title('Theis type curve $W(u)$ vs $1/u$')
plt.xlabel('1/u')
plt.ylabel('W(u), exp1(u)')
plt.xscale('log')
plt.yscale('log')
plt.grid()
plt.plot(1/u, exp1(u))
plt.show()
Explanation: The first thing to note is that the drawdown depends on the combination of parameters contained in $u$. Any combination of parameters yielding the same $u$ will have the same drawdown, even though these situations may intuitively seem wide apart.
All possible outcomes are, therefore, represented by exp1, which depends on a single parameter, $u$. The curve is called a type curve. However, we generally plot exp1$(u)$ versus $1/u$ instead of $u$. The reason is that $1/u$ is proportional to time, which makes the type curve more intuitive as it is then a picture of drawdown versus (scaled) time.
End of explanation
plt.title('Theis type curve $W(u)$ vs $u$')
plt.xlabel('u')
plt.ylabel('W(u), exp1(u)')
plt.xscale('log')
plt.yscale('log')
plt.grid()
plt.plot(u, exp1(u))
plt.show()
Explanation: The plot of W(u) vs u is simply not intuitive
End of explanation
plt.title('Theis type curve $W(u)$ vs $1/u$')
plt.xlabel('1/u')
plt.ylabel('W(u), exp1(u)')
plt.xscale('log')
plt.yscale('linear')
plt.grid()
plt.plot(1/u, exp1(u))
plt.show()
Explanation: Before computers became ubiquitous
Before computers became ubiquitous (all around), one had to compute $u = \frac {r^2 S} {4 kD t}$ and then look up $W(u)$ in a table or read it from the type curve. Clearly this was much more work than using a modern computer and a package that has the exponential integral on board.
A linear y scale clearly shows that the drawdown increases forever
As the water from a well in an infinite aquifer without any head boundaries can only come from storage, the drawdown must increase indefinitely. This is clearly shown in the curve below. After some time, the drawdown becomes a straight line on a logarithmic time scale. This means that the rate at which the drawdown increases slows down continuously, but it never becomes zero. In fact, the water is taken from storage farther and farther away from the well.
End of explanation |
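The straight-line behaviour on a logarithmic time scale can be checked numerically: for small $u$ (large time) the well function approaches $W(u) \approx -\gamma - \ln u$, with $\gamma \approx 0.5772$ Euler's constant. This is the classical Cooper-Jacob approximation; a quick sketch:

```python
import numpy as np
from scipy.special import exp1

# For small u (large time), W(u) ~ -0.5772 - ln(u): the drawdown becomes a
# straight line on a logarithmic time scale (Cooper-Jacob approximation).
gamma = 0.5772156649  # Euler's constant
u = np.logspace(-6, -2, 5)
W_exact  = exp1(u)
W_approx = -gamma - np.log(u)
rel_err  = np.abs(W_exact - W_approx) / W_exact
print(rel_err)
```

The relative error stays below one percent over this range and shrinks rapidly as $u$ decreases.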
9,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feed-forward neural network
This is a simple tutorial on how to train a feed-forward neural network to predict protein subcellular localization.
Step1: Building the network
The first thing that we have to do is to define the network architecture. Here we are going to use an input layer, dense layer and output layer. These are the steps that we are going to follow
Step2: 2.- Define the input variables to our network
Step3: 3.- Define the layers of the network
Step4: 4.- Calculate the prediction and network loss for the training set and update the network weights
Step5: 5.- Calculate the prediction and network loss for the validation set
Step6: 6.- Build theano functions
Step7: Load dataset
Once the network is built, the next step is to load the training and validation sets
Step8: Training
Once the data is ready and the network is compiled, we can start training the model.
Here we define the number of epochs that we want to perform
Step9: Model loss and accuracy
Here we plot the loss and the accuracy for the training and validation set at each epoch.
Step10: Confusion matrix
The confusion matrix allows us to visualize how well each class is predicted and which misclassifications are most common.
# Import all the necessary modules
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,optimizer=None,device=cpu,floatX=float32"
import sys
sys.path.insert(0,'..')
import numpy as np
import theano
import theano.tensor as T
import lasagne
from confusionmatrix import ConfusionMatrix
from utils import iterate_minibatches
import matplotlib.pyplot as plt
import time
import itertools
%matplotlib inline
Explanation: Feed-forward neural network
This is a simple tutorial on how to train a feed-forward neural network to predict protein subcellular localization.
End of explanation
batch_size = 128
seq_len = 400
n_feat = 20
n_hid = 30
n_class = 10
lr = 0.0025
drop_prob = 0.5
Explanation: Building the network
The first thing that we have to do is to define the network architecture. Here we are going to use an input layer, dense layer and output layer. These are the steps that we are going to follow:
1.- Specify the hyperparameters of the network:
End of explanation
# We use ftensor3 because the protein data is a 3D-matrix in float32
input_var = T.ftensor3('inputs')
# ivector because the labels is a single dimensional vector of integers
target_var = T.ivector('targets')
# Dummy data to check the size of the layers during the building of the network
X = np.random.randint(0,10,size=(batch_size,seq_len,n_feat)).astype('float32')
Explanation: 2.- Define the input variables to our network:
End of explanation
# Input layer, holds the shape of the data
l_in = lasagne.layers.InputLayer(shape=(batch_size, seq_len, n_feat), input_var=input_var, name='Input')
print('Input layer: {}'.format(
lasagne.layers.get_output(l_in, inputs={l_in: input_var}).eval({input_var: X}).shape))
# Dense layer with ReLu activation function
l_dense = lasagne.layers.DenseLayer(l_in, num_units=n_hid, name="Dense",
nonlinearity=lasagne.nonlinearities.rectify)
print('Dense layer: {}'.format(
lasagne.layers.get_output(l_dense, inputs={l_in: input_var}).eval({input_var: X}).shape))
# Output layer with a Softmax activation function
l_out = lasagne.layers.DenseLayer(lasagne.layers.dropout(l_dense, p=drop_prob), num_units=n_class,
name="Softmax", nonlinearity=lasagne.nonlinearities.softmax)
print('Output layer: {}'.format(
lasagne.layers.get_output(l_out, inputs={l_in: input_var}).eval({input_var: X}).shape))
Explanation: 3.- Define the layers of the network:
End of explanation
# Get output training, deterministic=False is used for training
prediction = lasagne.layers.get_output(l_out, inputs={l_in: input_var}, deterministic=False)
# Calculate the categorical cross entropy between the labels and the prediction
t_loss = T.nnet.categorical_crossentropy(prediction, target_var)
# Training loss
loss = T.mean(t_loss)
# Parameters
params = lasagne.layers.get_all_params([l_out], trainable=True)
# Get the network gradients and perform total norm constraint normalization
all_grads = lasagne.updates.total_norm_constraint(T.grad(loss, params),3)
# Update parameters using ADAM
updates = lasagne.updates.adam(all_grads, params, learning_rate=lr)
Explanation: 4.- Calculate the prediction and network loss for the training set and update the network weights:
End of explanation
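The idea behind `total_norm_constraint` can be sketched in plain NumPy: rescale all gradients jointly so their combined L2 norm never exceeds the maximum (here 3, as in the code above). This is an illustrative re-implementation of the idea, not Lasagne's actual code:

```python
import numpy as np

def clip_by_total_norm(grads, max_norm):
    # Rescale all gradients jointly so their combined L2 norm is at most
    # max_norm -- an illustrative sketch of the idea behind
    # lasagne.updates.total_norm_constraint (not Lasagne's actual code).
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / max(total, 1e-7))
    return [g * scale for g in grads], total

grads = [np.array([3.0, 4.0]), np.array([0.0])]   # combined norm = 5
clipped, total = clip_by_total_norm(grads, max_norm=3.0)
print(total, clipped[0])
```

Gradients whose joint norm is already below the threshold pass through unchanged; larger ones are scaled down proportionally, which stabilizes training.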
# Get output validation, deterministic=True is only use for validation
val_prediction = lasagne.layers.get_output(l_out, inputs={l_in: input_var}, deterministic=True)
# Calculate the categorical cross entropy between the labels and the prediction
t_val_loss = lasagne.objectives.categorical_crossentropy(val_prediction, target_var)
# Validation loss
val_loss = T.mean(t_val_loss)
Explanation: 5.- Calculate the prediction and network loss for the validation set:
End of explanation
# Build functions
train_fn = theano.function([input_var, target_var], [loss, prediction], updates=updates)
val_fn = theano.function([input_var, target_var], [val_loss, val_prediction])
Explanation: 6.- Build theano functions:
End of explanation
# Load the encoded protein sequences, labels and masks
# The masks are not needed for the FFN or CNN models
train = np.load('data/reduced_train.npz')
X_train = train['X_train']
y_train = train['y_train']
mask_train = train['mask_train']
print(X_train.shape)
validation = np.load('data/reduced_val.npz')
X_val = validation['X_val']
y_val = validation['y_val']
mask_val = validation['mask_val']
print(X_val.shape)
Explanation: Load dataset
Once the network is built, the next step is to load the training and validation sets
End of explanation
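The `iterate_minibatches` helper imported from `utils` is not shown in this notebook; a minimal sketch of such an iterator (ignoring the mask and `sort_len` handling the real helper supports) might look like:

```python
import numpy as np

def minibatches(X, y, batch_size, shuffle=True):
    # Minimal sketch of a minibatch iterator; the course's iterate_minibatches
    # additionally yields the mask array and supports sorting by sequence length.
    idx = np.arange(len(X))
    if shuffle:
        np.random.shuffle(idx)
    for start in range(0, len(X) - batch_size + 1, batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], y[sel]

X = np.arange(20, dtype='float32').reshape(10, 2)
y = np.arange(10, dtype='int32')
batches = list(minibatches(X, y, batch_size=4, shuffle=False))
print(len(batches), batches[0][0].shape)
```

Note that trailing examples which do not fill a complete batch are dropped, which is why the batch size must be taken into account when comparing epoch statistics.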
# Number of epochs
num_epochs = 80
# Lists to save loss and accuracy of each epoch
loss_training = []
loss_validation = []
acc_training = []
acc_validation = []
start_time = time.time()
min_val_loss = float("inf")
# Start training
for epoch in range(num_epochs):
# Full pass training set
train_err = 0
train_batches = 0
confusion_train = ConfusionMatrix(n_class)
# Generate minibatches and train on each one of them
for batch in iterate_minibatches(X_train.astype(np.float32), y_train.astype(np.int32),
mask_train.astype(np.float32), batch_size, shuffle=True, sort_len=False):
# Inputs to the network
inputs, targets, in_masks = batch
# Calculate loss and prediction
tr_err, predict = train_fn(inputs, targets)
train_err += tr_err
train_batches += 1
# Get the predicted class, the one with the maximum likelihood
preds = np.argmax(predict, axis=-1)
confusion_train.batch_add(targets, preds)
# Average loss and accuracy
train_loss = train_err / train_batches
train_accuracy = confusion_train.accuracy()
cf_train = confusion_train.ret_mat()
val_err = 0
val_batches = 0
confusion_valid = ConfusionMatrix(n_class)
# Generate minibatches and validate on each one of them, same procedure as before
for batch in iterate_minibatches(X_val.astype(np.float32), y_val.astype(np.int32),
mask_val.astype(np.float32), batch_size, shuffle=True, sort_len=False):
inputs, targets, in_masks = batch
err, predict_val = val_fn(inputs, targets)
val_err += err
val_batches += 1
preds = np.argmax(predict_val, axis=-1)
confusion_valid.batch_add(targets, preds)
val_loss = val_err / val_batches
val_accuracy = confusion_valid.accuracy()
cf_val = confusion_valid.ret_mat()
loss_training.append(train_loss)
loss_validation.append(val_loss)
acc_training.append(train_accuracy)
acc_validation.append(val_accuracy)
# Save the model parameters at the epoch with the lowest validation loss
if min_val_loss > val_loss:
min_val_loss = val_loss
np.savez('params/FFN_params.npz', *lasagne.layers.get_all_param_values(l_out))
print("Epoch {} of {} time elapsed {:.3f}s".format(epoch + 1, num_epochs, time.time() - start_time))
print(" training loss:\t\t{:.6f}".format(train_loss))
print(" validation loss:\t\t{:.6f}".format(val_loss))
print(" training accuracy:\t\t{:.2f} %".format(train_accuracy * 100))
print(" validation accuracy:\t\t{:.2f} %".format(val_accuracy * 100))
print("Minimum validation loss: {:.6f}".format(min_val_loss))
Explanation: Training
Once the data is ready and the network is compiled, we can start training the model.
Here we define the number of epochs that we want to perform
End of explanation
x_axis = range(num_epochs)
plt.figure(figsize=(8,6))
plt.plot(x_axis,loss_training)
plt.plot(x_axis,loss_validation)
plt.xlabel('Epoch')
plt.ylabel('Error')
plt.legend(('Training','Validation'));
plt.figure(figsize=(8,6))
plt.plot(x_axis,acc_training)
plt.plot(x_axis,acc_validation)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(('Training','Validation'));
Explanation: Model loss and accuracy
Here we plot the loss and the accuracy for the training and validation set at each epoch.
End of explanation
# Plot confusion matrix
# Code based on http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
plt.figure(figsize=(8,8))
cmap=plt.cm.Blues
plt.imshow(cf_val, interpolation='nearest', cmap=cmap)
plt.title('Confusion matrix validation set')
plt.colorbar()
tick_marks = np.arange(n_class)
classes = ['Nucleus','Cytoplasm','Extracellular','Mitochondrion','Cell membrane','ER',
'Chloroplast','Golgi apparatus','Lysosome','Vacuole']
plt.xticks(tick_marks, classes, rotation=60)
plt.yticks(tick_marks, classes)
thresh = cf_val.max() / 2.
for i, j in itertools.product(range(cf_val.shape[0]), range(cf_val.shape[1])):
plt.text(j, i, cf_val[i, j],
horizontalalignment="center",
color="white" if cf_val[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True location')
plt.xlabel('Predicted location');
Explanation: Confusion matrix
The confusion matrix allows us to visualize how well is predicted each class and which are the most common misclassifications.
End of explanation |
9,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VGGNet in Keras
In this notebook, we fit a model inspired by the "very deep" convolutional network VGGNet to classify flowers into the 17 categories of the Oxford Flowers data set. Derived from these two earlier notebooks.
Set seed for reproducibility
Step1: Load dependencies
Step2: Load and preprocess data
Step3: Design neural network architecture
Step4: Configure model
Step5: Configure TensorBoard (for part 5 of lesson 3)
Step6: Train! | Python Code:
import numpy as np
np.random.seed(42)
Explanation: VGGNet in Keras
In this notebook, we fit a model inspired by the "very deep" convolutional network VGGNet to classify flowers into the 17 categories of the Oxford Flowers data set. Derived from these two earlier notebooks.
Set seed for reproducibility
End of explanation
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.callbacks import TensorBoard # for part 3.5 on TensorBoard
Explanation: Load dependencies
End of explanation
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True)
Explanation: Load and preprocess data
End of explanation
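The loader returns image tensors and one-hot labels. A small sketch of a preprocessing step one might add before training -- per-channel mean centering -- using random stand-in arrays with the same shapes (the real `X` holds the 17 x 80 = 1360 Oxford Flowers images):

```python
import numpy as np

# Stand-in arrays shaped like the loader's output; the real X holds the
# 1360 flower images. Per-channel mean centering is a common (optional)
# preprocessing step, not something the notebook itself performs.
rng = np.random.RandomState(0)
X = rng.rand(4, 224, 224, 3).astype('float32')
Y = np.eye(17)[rng.randint(0, 17, size=4)]

channel_mean = X.mean(axis=(0, 1, 2))
X_centered = X - channel_mean
print(X_centered.shape, Y.shape)
```

The same `channel_mean` computed on the training set would then be reused on any validation or test images.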
model = Sequential()
model.add(Conv2D(64, 3, activation='relu', input_shape=(224, 224, 3)))
model.add(Conv2D(64, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())
model.add(Conv2D(128, 3, activation='relu'))
model.add(Conv2D(128, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())
model.add(Conv2D(256, 3, activation='relu'))
model.add(Conv2D(256, 3, activation='relu'))
model.add(Conv2D(256, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())
model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())
model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(17, activation='softmax'))
model.summary()
Explanation: Design neural network architecture
End of explanation
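The parameter counts printed by `model.summary()` can be verified by hand: a 3x3 convolution with `n_in` input channels and `n_out` filters has `3*3*n_in*n_out` weights plus `n_out` biases.

```python
def conv2d_params(n_in, n_out, k=3):
    # k*k*n_in weights per filter, n_out filters, plus one bias per filter
    return k * k * n_in * n_out + n_out

# First two conv layers of the VGG-style network above:
p1 = conv2d_params(3, 64)     # 3x3x3x64 + 64
p2 = conv2d_params(64, 64)    # 3x3x64x64 + 64
print(p1, p2)
```

The bulk of the parameters nevertheless sits in the first 4096-unit dense layer, which is typical of VGG-style architectures.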
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Explanation: Configure model
End of explanation
# tensorbrd = TensorBoard('logs/vggnet')
Explanation: Configure TensorBoard (for part 5 of lesson 3)
End of explanation
model.fit(X, Y, batch_size=64, epochs=250, verbose=1, validation_split=0.1, shuffle=True) # callbacks=[tensorbrd])
Explanation: Train!
End of explanation |
9,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
Step2: Lab Task 1
Step3: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
Step4: Lab Task 2
Step5: Lab Task 3
Step6: Lab Task 4
Step7: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
Step8: Lab Task 5 | Python Code:
# You can use any Python source file as a module by executing an import statement in some other Python source file
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import os, json, math
# Import data processing libraries like Numpy and TensorFlow
import numpy as np
import tensorflow as tf
# Python shutil module enables us to operate with file objects easily and without diving into file objects a lot.
import shutil
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
Explanation: Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
# `ls` is a Linux shell command that lists directory contents
# `l` flag list all the files with permissions and details
!ls -l ../data/toy_data/*.csv
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
End of explanation
# Define columns of data
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
Explanation: Lab Task 1: Use tf.data to read the CSV files
First let's define our columns of data, which column we're predicting for, and the default values.
End of explanation
# Define features you want to use
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
Explanation: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
End of explanation
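What `features_and_labels` does to one CSV row can be sketched with a plain Python dict (the fare value below is made up for illustration): drop the unused columns, pop the label, and keep the rest as features.

```python
# Plain-Python sketch of features_and_labels applied to one row; the values
# here are illustrative, not real data.
row = {'fare_amount': 12.5, 'pickup_datetime': 'na',
       'pickup_longitude': -73.98, 'pickup_latitude': 40.74,
       'dropoff_longitude': -73.98, 'dropoff_latitude': 40.75,
       'passenger_count': 1.0, 'key': 'na'}

for unwanted_col in ['pickup_datetime', 'key']:
    row.pop(unwanted_col)
label = row.pop('fare_amount')
features = row
print(label, sorted(features))
```

In the real pipeline the same logic runs on batched tensors produced by `make_csv_dataset`, but the bookkeeping is identical.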
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
INPUT_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
# TODO 2
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
# tf.feature_column.numeric_column() represents real valued or numerical features.
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
Explanation: Lab Task 2: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
End of explanation
# tf.keras.utils.plot_model() Converts a Keras model to dot format and save to a file.
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
Explanation: Lab Task 3: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
End of explanation
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 32 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../data/toy_data/taxi-traffic-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/toy_data/taxi-traffic-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# Model Fit
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
Explanation: Lab Task 4: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
End of explanation
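The "virtual epoch" arithmetic used above is worth spelling out: the training dataset repeats, and one call to `fit()` walks through roughly `NUM_TRAIN_EXAMPLES` examples split across `NUM_EVALS` evaluation points.

```python
# Virtual-epoch arithmetic from the training cell above.
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5
NUM_EVALS = 32

steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
examples_seen = steps_per_epoch * TRAIN_BATCH_SIZE * NUM_EVALS
print(steps_per_epoch, examples_seen)
```

Because of the integer division, slightly fewer than `NUM_TRAIN_EXAMPLES` examples are actually consumed per `fit()` call.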
# plot
# Use matplotlib for visualizing the model
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
# The .figure() method will create a new figure, or activate an existing figure.
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
# The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y.
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
# The .title() method sets a title for the axes.
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
# The .legend() method will place a legend on the axes.
plt.legend(['train', 'validation'], loc='upper left');
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
End of explanation
# TODO 5
# Use the model to do prediction with `model.predict()`
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
}, steps=1)
Explanation: Lab Task 5: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
End of explanation |
9,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally, use your count_letters function to solve the original question.
def number_to_words(n):
    """Given a number n between 1 and 1000 inclusive, return the list of words for the numbers 1 through n."""
wrds = []
ones = {1:'one',2:'two',3:'three',4:'four',5:'five',6:'six',7:'seven',8:'eight',
9:'nine',10:'ten', 11:'eleven',12:'twelve',13:'thirteen',14:'fourteen',
15:'fifteen',16:'sixteen',17:'seventeen',18:'eighteen',19:'nineteen'}
tens = {2:'twenty',3:'thirty',4:'forty',5:'fifty',6:'sixty',7:'seventy',8:'eighty',
9:'ninety'}
hundred = {1:'onehundred',2:'twohundred',3:'threehundred',4:'fourhundred',
5:'fivehundred',6:'sixhundred',7:'sevenhundred',8:'eighthundred',
9:'ninehundred'}
if n<20:
x=1
while x<=n:
wrds.append(ones[x])
x+=1
elif n<100:
x=1
while x<20:
wrds.append(ones[x])
x+=1
t=2
while t*10 <= n:
wrds.append(tens[t])
a=1
while x<n-t+2 and a<10:
wrds.append(tens[t] +ones[a])
a+=1
x+=1
t+=1
elif n<1000:
x=1
while x<20:
wrds.append(ones[x])
x+=1
t=2
while t*10 <= 99:
wrds.append(tens[t])
a=1
while x<99-t+2 and a<10:
wrds.append(tens[t] +ones[a])
a+=1
x+=1
t+=1
h=1
while h*100<=n:
wrds.append(hundred[h])
x=(h*100)+1
while x<=n and x<(h*100)+20:
b=x-(h*100)
wrds.append(hundred[h]+'and'+ones[b])
x+=1
t=2
while ((h*100)+(t*10)) <= n and t<10:
wrds.append(hundred[h]+'and'+tens[t])
a=1
while x<n-t+2 and a<10:
wrds.append(hundred[h]+'and'+tens[t]+ones[a])
a+=1
x+=1
t+=1
h+=1
elif n==1000:
n=999
x=1
while x<20:
wrds.append(ones[x])
x+=1
t=2
while t*10 <= 99:
wrds.append(tens[t])
a=1
while x<99-t+2 and a<10:
wrds.append(tens[t] +ones[a])
a+=1
x+=1
t+=1
h=1
while h*100<=n:
wrds.append(hundred[h])
x=(h*100)+1
while x<=n and x<(h*100)+20:
b=x-(h*100)
wrds.append(hundred[h]+'and'+ones[b])
x+=1
t=2
while ((h*100)+(t*10)) <= n and t<10:
wrds.append(hundred[h]+'and'+tens[t])
a=1
while x<n-t+2 and a<10:
wrds.append(hundred[h]+'and'+tens[t]+ones[a])
a+=1
x+=1
t+=1
h+=1
wrds.append('onethousand')
return wrds
number_to_words(1000)
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
assert len(number_to_words(582)) == 582
assert len(number_to_words(1000)) == 1000
assert number_to_words(5) == ['one', 'two', 'three', 'four', 'five']
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
x=0
nums = number_to_words(n)
for number in nums:
x+=len(number)
return x
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
End of explanation
# YOUR CODE HERE
assert count_letters(5) == 19
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
answer = count_letters(1000)
print(answer)
assert True # use this for grading the answer to the original question.
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation |
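For comparison, here is a more compact alternative solution (hypothetical, not the notebook's approach): build the word for a single number, then sum the letter counts directly.

```python
# Compact alternative: word for ONE number, then sum lengths over 1..1000.
ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def to_words(n):
    if n < 20:
        return ones[n]
    if n < 100:
        return tens[n // 10] + ones[n % 10]
    if n < 1000:
        rest = to_words(n % 100)
        return ones[n // 100] + "hundred" + ("and" + rest if rest else "")
    return "onethousand"

total = sum(len(to_words(i)) for i in range(1, 1001))
print(total)
```

The "and" is inserted only when something follows the hundreds, matching British usage as required by the problem statement.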
9,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="center">
<h2> Quantitative Methods in Neuroscience </h2>
</div>
<div align="center">
<b><i> Course NSC-2006, year 2015</i></b><br>
<b>Multidimensional data analysis lab - Answers</b><br>
*Pierre Bellec, Yassine Ben Haj Ali*
</div>
Objective
Step1: Section 1
Step2: <ol start="2">
<h4><li>The `whos` command lets us determine which variables are available in the workspace, as well as their type.</li></h4>
</ol>
Step3: <font color="red">Quelles variables sont présentes? Quel est le type de la variable spike? Quelle est sa taille?</font>
<font color="blue">Les variables sont direction, go, intruction, spike et unit. La variable spike est de type structure et de taille 1x47</font>
<ol start="3">
<h4><li>The variable spike contains the times of the action potentials detected for
one neuron. Each entry of the structure contains the data of a
different trial. It is possible to list the fields of the
structure with the fieldnames command
Step4: The field "spikes(1).times" contains the action potential discharge times for
the first trial. The command determines the length of this
vector, that is, the number of detected spikes
Step5: <font color="red">How many spikes were there in trial 2? In trial 10?</font>
Step6: <font color="blue">There are 51 spikes in trial 2.</font>
Step7: <font color="blue">There are 85 spikes in trial 10.</font>
<ol start="4">
<h4><li> The following command will display the full set of spike times for trial 1. </li></h4>
</ol>
>> spike(1).times
-0.9893
-0.9402
-0.9158
(...)
<font color="red">What is the likely unit of these times? Why are there
negative values?</font>
<font color="blue">The unit is seconds, and the negative values indicate spikes that occurred before the start signal of the experiment.</font>
<ol start="5">
<h4><li> We extract the spike times of the first two trials into two
variables `t1` and `t2`
Step8: <ol start="6">
<h4><li> We open a new window, dedicated to visualization
Step9: Watch out for the second instruction! It makes it possible to draw
several objects on the same figure, one after the other,
without re-initializing the figure.
<ol start="7">
<h4><li> Now we will draw the first line of the raster plot. Note that the
number of spikes in trial 1 is . We apply a loop
Step10: Note the use of the line command.
<ol start="8">
<h4><li> We will now add a label on the x and y axes. </li></h4>
</ol>
<ol start="9">
<h4><li> Save the figure to a png file (use the print command), under the name figure_dispersion.png </li></h4>
</ol>
Step11: <font color="red"> <ol start="10">
<h4><li> Make a new figure where each bar of the raster has a height
of 0.5 instead of 1. Save this file under the name "figure_dispersion_lignes.png". </li></h4>
</ol></font>
Step12: <font color="red"> <ol start="11">
<h4><li> Starting from the file , you will complete the 4 missing lines of code
inside the loop in order to plot all the trials
(47) in a single figure. The result would look like the following
figure
Step13: Section 2
Step14: <ol start="2">
<h4><li> Let us load the data again
Step15: <ol start="3">
<h4><li>Define the edges and the bin width of the histogram categories </li></h4>
</ol>
Step16: <ol start="4">
<h4><li>Initialize a matrix of zeros whose length equals the
number of time bins
Step17: <ol start="5">
<h4><li>Get the number of spikes per time bin in
trial number 1, using the function . </li></h4>
</ol>
Step18: <font color="red">Examine the content of the variable histo. What is its size? Its minimum, maximum, and mean (see the matlab functions min, max, mean)?</font>
Step19: <ol start="6">
<h4><li>Draw the histogram with the `bar` function. </li></h4>
</ol>
Step20: <font color="red"> <ol start="10">
<h4><li>Take the code from the file again and fill in the loop in order to build
a histogram over all the trials </li></h4>
</ol></font>
Step21: Section 3
Step22: <ol start="2">
<h4><li>Now we will retrieve the "grades" data of the course. </li></h4>
</ol>
Step23: <font color="red"> What are the size and content of the vectors x and y? What is the ' operation for? </font>
Step24: <font color="blue"> The variables x and y are both vectors with 1 column and 8 rows. The ' operation transposes a row vector into a column vector, or the other way around.</font>
<ol start="3">
<h4><li>We define a new inline function
Step25: <font color="red"> What is the type of the variable ftheta?</font>
Step26: <font color="blue">ftheta est une fonction en ligne (inline).</font>
<ol start="4">
<h4><li>Estimez les coefficients de régression à l’aide de la fonction
Step27: <font color="red"> Quelles sont les valeurs de theta_chap? A quoi sert l’argument [1 1]? Essayez de reproduire l’estimation avec d’autres valeurs pour cet argument, est-ce que cela affecte le résultat?</font>
<font color="blue">Le paramètre theta_chap vaut
Step28: <font color="blue">L'argument [1 1] est une valeur initiale de la méthode qui cherche la valeur theta_chap. En répétant l'expérience pour plusieurs valeurs ([2 2], [30 30], [-30 -30]) on voit que la résultat theta_chap ne semble pas dépendre ici de ce paramètre. </font>
<ol start="5">
<h4><li>Maintenant représenter le résultat de la régression. </li></h4>
</ol>
Step29: <font color="red"><ol start="6">
<h4><li>Utilisez la fonction `ylim` pour changer les limites de l’axe y de 40 à 95.
Ajouter le label `taille` sur l’axe des x avec la commande `xlabel`, et le label `poids` sur
l’axe des y avec la commande `ylabel`. Faites une sauvegarde de cette image,
dans un fichier `regression_notes.png`. </li></h4>
</ol></font>
Step30: <ol start="7">
<h4><li>Maintenant nous allons ajuster une courbe plus complexe, un cosinus.
On commence par simuler des données
Step31: <font color="red">Quelle est la taille de x? La taille de y?</font>
Step32: <font color="blue">Les variables x et y sont des vecteurs lignes de longueur 301.</font>
<font color="red"> A quoi sert la fonction randn (utilisez la commande help ).</font>
Step33: <font color="blue">La commande randn permet de simuler du bruit suivant une distribution normale (Gaussienne) de moyenne nulle et de variance 1.</font>
<font color="red"> Générer un graphe de la relation entre x et y, et sauvegardez cette image dans un fichier donnees_cosinus.png.</font>
Step34: <ol start="8">
<h4><li>On va maintenant définir une fonction de trois paramètres
Step35: <font color="red">Quelle est la valeur de la fonction ftheta pour theta=[0 1 1] et x=0 ?</font>
Step36: <ol start="9">
<h4><li>Estimez les coefficients de régression à l’aide de la fonction
Step37: <font color="red">A quoi sert l’argument [0 1 1] ? Essayez de reproduire l’estimation avec d’autres valeurs pour cet argument, est-ce que cela affecte le résultat?</font>
<font color="blue">L'argument [0 1 1] jour le même rôle que le paramètre [1 1] à la question 4. En utilisant une valeur différente du paramètre (par exemple [0 1 10]) on trouve une valeur différente pour theta_chap. C'est parce que la fonction ftheta est périodique, et il existe donc une infinité de valeurs d'entrée qui donnent la même sortie.</font>
<font color="red"><ol start="10">
<h4><li> Maintenant représenter le résultat de la régression. </li></h4>
</ol></font> | Python Code:
%matplotlib inline
from pymatbridge import Matlab
mlab = Matlab()
mlab.start()
%load_ext pymatbridge
Explanation: <div align="center">
<h2> Quantitative Methods in Neuroscience </h2>
</div>
<div align="center">
<b><i> Course NSC-2006, year 2015</i></b><br>
<b>Multidimensional data analysis lab - Answers</b><br>
*Pierre Bellec, Yassine Ben Haj Ali*
</div>
Objective:
The goal of this lab is to introduce you to the manipulation
of multidimensional information with Matlab. To this end, we will
analyze electrophysiological recordings of neuronal spiking
activity. We will carry out several operations aimed at
visualizing, summarizing and modeling these data. The data
come from the Georgopoulos1982 experiment on the neuronal
encoding of arm movement in a macaque fitted with neural
implants. The animal starts the experiment by fixating a cursor at the center
of a target; it must then reach peripheral targets that
appear in one of 8 directions arranged in a circle. Once
the target has appeared, the animal must wait (100-1500 ms) for the go signal before reaching the target for
a duration of 500 ms, after which it returns to the starting point (the center).
This sequence of movements is called a trial, and in this experiment
there are 47 of them. The goal of the experiment by Georgopoulos and colleagues was to
determine the preferred spatial orientation of the neuron in question,
located in area MI, and to show that it is possible to predict the direction of
movement from physiological recordings. Their results
indicate that there is indeed a preference for movement angles
between 90 and 180 degrees. In this lab we will reproduce some of the data analyses and
the visualization of the results of this experiment.
To complete this lab, you need to retrieve the
following resources from studium:
Chap17_Data.mat: the dataset taken from Georgopoulos1982.
The scripts diagramme_dispersion.m and diagramme_dispersion_essais.m for Section 1.
The scripts histogramme_essai1.m and histogramme_essais.m for Section 2.
Note that the lab is graded. You will have to hand in a detailed
report including an answer to every numbered question
below. Each answer will typically take a few lines, including
code and a figure if requested in the statement.
Ignore this part of the code and do not execute it:
End of explanation
%%matlab
load('Chap17_Data')
Explanation: Section 1: Scatter diagram
We will start by making a scatter diagram
(raster plot) of the activity of a neuron over the duration
of a trial. See the script to follow these steps.
<ol start="1">
<h4><li>Let us start by loading the data:</li></h4>
</ol>
End of explanation
%%matlab
whos
Explanation: <ol start="2">
<h4><li>The `whos` command lets us determine which variables are available in the workspace, as well as their type.</li></h4>
</ol>
End of explanation
%%matlab
fieldnames(spike)
Explanation: <font color="red">Which variables are present? What is the type of the variable spike? What is its size?</font>
<font color="blue">The variables are direction, go, intruction, spike and unit. The variable spike is of type structure and of size 1x47.</font>
<ol start="3">
<h4><li>The variable spike contains the times of the action potentials detected for
a neuron. Each entry of the structure contains the data of a
different trial. The fields of the structure can be listed
with the fieldnames command:</li></h4>
</ol>
End of explanation
%%matlab
size(spike(1).times)
Explanation: The field "spike(1).times" contains the discharge times of the action potentials for
the first trial. The `size` command lets us determine the length of this
vector, i.e. the number of detected discharges:
End of explanation
%%matlab
size(spike(2).times) %nb de décharges pour l'essai 2
Explanation: <font color="red">How many discharges were there for trial 2? For trial 10?</font>
End of explanation
%%matlab
size(spike(10).times) %nb de décharges pour l'essai 10
Explanation: <font color="blue">There are 51 discharges for trial 2.</font>
End of explanation
%%matlab
t1 = spike(1).times;
t2 = spike(2).times;
Explanation: <font color="blue">There are 85 discharges for trial 10.</font>
<ol start="4">
<h4><li> The following command displays all of the discharge times for trial 1. </li></h4>
</ol>
>> spike(1).times
-0.9893
-0.9402
-0.9158
(...)
<font color="red">What is the likely unit of these times? Why are there
negative values?</font>
<font color="blue">The unit is seconds, and the negative values indicate discharge times that precede the go signal of the experiment.</font>
<ol start="5">
<h4><li> We extract the discharge times of the first two trials into two
variables `t1` and `t2`: </li></h4>
</ol>
End of explanation
%%matlab
figure
hold on
Explanation: <ol start="6">
<h4><li> We open a new window, dedicated to visualization: </li></h4>
</ol>
End of explanation
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 1])
end
Explanation: Watch out for the second instruction! It makes it possible to draw
several objects on the same figure, one after the other,
without re-initializing the figure.
<ol start="7">
<h4><li> Now we will draw the first line of the diagram. Note that the
number of discharges in trial 1 is given by `length(t1)`. We apply a loop: </li></h4>
</ol>
End of explanation
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 1])
end
xlabel('Temps (sec)');
% Same for the y axis:
ylabel('Essai #')
% Finally, set the limits of the y axis
ylim([0 3])
% save the result
print('figure_dispersion.png','-dpng')
Explanation: Note the use of the line command.
<ol start="8">
<h4><li> We will now add a label on the x and y axes. </li></h4>
</ol>
<ol start="9">
<h4><li> Save the figure to a png file (use the print command), under the name figure_dispersion.png </li></h4>
</ol>
End of explanation
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 0.5])
end
xlabel('Temps (sec)')
% Same for the y axis:
ylabel('Essai #')
% Finally, set the limits of the y axis
ylim([0 3])
Explanation: <font color="red"> <ol start="10">
<h4><li> Make a new figure in which each bar of the diagram has a height
of 0.5 rather than 1. Save this file under the name "figure_dispersion_lignes.png". </li></h4>
</ol></font>
End of explanation
%%matlab
% Load the data
load('Chap17_Data')
% Prepare a figure
figure
% Allow several plots to be superimposed in the same figure
hold on
% Give a label to the x axis
xlabel('Temps (sec)');
% Give a label to the y axis
ylabel('Essai #');
% Adjust the limits of the y axis
ylim([0 length(spike)]);
for num_spike = 1:length(spike) % loop over all the trials
t = spike(num_spike).times; % spike times of the current trial
for num_temps=1:length(t) % loop over all the time points
line([t(num_temps) t(num_temps)], [0+(num_spike-1) 1+(num_spike-1)]); % draw a line of height 1 for each time point t(num_temps)
end
end
Explanation: <font color="red"> <ol start="11">
<h4><li> Starting from the provided file, you will fill in the 4 missing lines of code
inside the loop in order to plot all 47 trials
in a single figure. The result should look like the following
figure: </li></h4>
</ol></font>
End of explanation
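For readers following along in Python rather than Matlab, the bookkeeping behind this raster plot can be sketched without any plotting library. The spike times below are made up; each `segments` entry plays the role of one `line` call above.

```python
# Sketch of the raster-plot bookkeeping in plain Python (hypothetical data).
# Each spike at time t in trial k becomes a vertical segment from y=k-1 to y=k.
trials = [
    [-0.9, -0.5, 0.1, 0.4],   # trial 1 spike times (made up)
    [-0.7, 0.0, 0.2],         # trial 2 spike times (made up)
]

segments = []
for num_trial, times in enumerate(trials, start=1):
    for t in times:
        # (x0, x1, y0, y1) for one vertical tick of height 1
        segments.append((t, t, num_trial - 1, num_trial))

print(len(segments))  # → 7, one segment per spike
```

In a real Python port, these segments would be handed to something like matplotlib's `eventplot`.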
%%matlab
clear
Explanation: Section 2: Histogram
We will continue exploring the data through a histogram
that summarizes the total number of activations within a given time
interval. See the script histogramme_essai1.m to reproduce the following commands:
<ol start="1">
<h4><li>Let us start by cleaning the workspace: </li></h4>
</ol>
End of explanation
%%matlab
load('Chap17_Data')
Explanation: <ol start="2">
<h4><li> Load the data again: </li></h4>
</ol>
End of explanation
%%matlab
centres = [-0.95:0.1:0.95];
Explanation: <ol start="3">
<h4><li>Define the edges and the step of the histogram bins </li></h4>
</ol>
End of explanation
%%matlab
histo = zeros(1,length(centres));
Explanation: <ol start="4">
<h4><li>Initialize a matrix of zeros whose length equals the
number of intervals: </li></h4>
</ol>
End of explanation
%%matlab
histo = hist(spike(1).times,centres);
Explanation: <ol start="5">
<h4><li>Retrieve the number of discharges per time interval in
trial number 1, with the help of the `hist` function. </li></h4>
</ol>
End of explanation
%%matlab
whos histo % it has size 1x20
%%matlab
min(histo)
max(histo)
mean(histo)
Explanation: <font color="red">Examine the contents of the variable histo. What is its size? Its minimum, maximum, and mean (see the Matlab functions min, max, mean)?</font>
End of explanation
%%matlab
bar(centres,histo);
% Adjust the limits of the x axis
xlim([-1.1 1]);
xlabel('Temps (sec)'); % Give a label to the x axis
ylabel('# essai'); % Give a label to the y axis
Explanation: <ol start="6">
<h4><li>Draw the histogram with the `bar` function. </li></h4>
</ol>
End of explanation
%%matlab
% Load the data
load('Chap17_Data')
% Define the bin centres for the histogram
centres = [-0.95:0.1:0.95];
% Initialize a matrix of zeros histo whose length equals the number of intervals:
histo = zeros(length(centres),1);
% Loop over all trials and retrieve the number of discharges per interval with the histc function
for jj = 1:47
histo=histo+histc(spike(jj).times,centres);
end
% Draw the histogram with the bar function
bar(centres,histo);
% Adjust the limits of the x axis
xlim([-1.1 1]);
% Give a label to the x axis
xlabel('Temps (sec)');
% Give a label to the y axis
ylabel('# essai');
Explanation: <font color="red"> <ol start="10">
<h4><li>Take the code from the provided file and fill in the loop in order to build
a histogram over all the trials </li></h4>
</ol></font>
End of explanation
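The same accumulation over trials can be sketched in plain Python. The spike times below are hypothetical; in practice Matlab's `histc` (or `numpy.histogram` in Python) would do the binning.

```python
# Minimal sketch of histc-style binning accumulated over trials (toy data).
def bin_counts(times, edges):
    """Count how many values fall into each half-open bin [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for t in times:
        for i in range(len(edges) - 1):
            if edges[i] <= t < edges[i + 1]:
                counts[i] += 1
                break
    return counts

edges = [-1.0, -0.5, 0.0, 0.5, 1.0]
trials = [[-0.9, -0.1, 0.3], [-0.6, 0.4, 0.45]]  # made-up spike times

histo = [0] * (len(edges) - 1)
for times in trials:  # accumulate over all trials, like the Matlab loop above
    histo = [h + c for h, c in zip(histo, bin_counts(times, edges))]
print(histo)  # → [2, 1, 3, 0]
```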
%%matlab
clear
Explanation: Section 3: Regression
We will now implement a regression with Matlab.
<ol start="1">
<h4><li>Let us start by cleaning the workspace: </li></h4>
</ol>
End of explanation
%%matlab
x = [ 165 165 157 170 175 165 182 178 ]';
y = [ 47 56 49 60 82 52 78 90 ]';
Explanation: <ol start="2">
<h4><li>Now we will retrieve the "grades" data from the course. </li></h4>
</ol>
End of explanation
%%matlab
whos
Explanation: <font color="red"> What are the size and contents of the vectors x and y? What is the ' operation for? </font>
End of explanation
%%matlab
ftheta = inline('theta(1)+theta(2)*x','theta','x');
Explanation: <font color="blue"> The variables x and y are both vectors with 1 column and 8 rows. The ' operation transposes a row vector into a column vector, or the other way around.</font>
<ol start="3">
<h4><li>We define a new inline function: </li></h4>
</ol>
End of explanation
%%matlab
whos ftheta
Explanation: <font color="red"> What is the type of the variable ftheta?</font>
End of explanation
%%matlab
theta_chap = nlinfit(x, y, ftheta, [1 1] );
Explanation: <font color="blue">ftheta is an inline function.</font>
<ol start="4">
<h4><li>Estimate the regression coefficients using the `nlinfit` function: </li></h4>
</ol>
End of explanation
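For the linear model theta(1)+theta(2)*x, the solution `nlinfit` converges to can also be obtained in closed form from the sample means. A hedged Python sketch with toy data (not the course's actual height/weight values):

```python
# Closed-form simple linear regression (toy data; mirrors the model
# ftheta = theta(1) + theta(2)*x that nlinfit fits above).
def fit_line(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # slope = cov(x, y) / var(x); intercept follows from the means
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    return intercept, slope

x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
b0, b1 = fit_line(x, y)
print(b0, b1)  # → 1.0 2.0
```

On noise-free data like this, the recovered intercept and slope are exact; with noisy data they are the least-squares estimates.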
%%matlab
theta_chap
Explanation: <font color="red"> What are the values of theta_chap? What is the argument [1 1] for? Try to reproduce the estimation with other values for this argument; does it affect the result?</font>
<font color="blue">The parameter theta_chap equals:</font>
End of explanation
%%matlab
figure
plot(x,y,'b.');
hold on
plot(x,ftheta(theta_chap,x),'r');
Explanation: <font color="blue">The argument [1 1] is an initial value for the method that searches for theta_chap. Repeating the experiment with several values ([2 2], [30 30], [-30 -30]) shows that the result theta_chap does not seem to depend on this parameter here. </font>
<ol start="5">
<h4><li>Now plot the result of the regression. </li></h4>
</ol>
End of explanation
%%matlab
figure
plot(x,y,'b.');
hold on
plot(x,ftheta(theta_chap,x),'r');
ylim([40 95])
xlabel('taille')
ylabel('poids')
print('regression_notes.png','-dpng')
Explanation: <font color="red"><ol start="6">
<h4><li>Use the `ylim` function to change the limits of the y axis from 40 to 95.
Add the label `taille` on the x axis with the `xlabel` command, and the label `poids` on
the y axis with the `ylabel` command. Save this image
to a file `regression_notes.png`. </li></h4>
</ol></font>
End of explanation
%%matlab
clear
x = 0:0.1:30;
y = cos(x) + randn(1,301);
Explanation: <ol start="7">
<h4><li>Now we will fit a more complex curve, a cosine.
We start by simulating data: </li></h4>
</ol>
End of explanation
%%matlab
size(x)
size(y)
Explanation: <font color="red">What is the size of x? The size of y?</font>
End of explanation
%%matlab
help randn
Explanation: <font color="blue">The variables x and y are row vectors of length 301.</font>
<font color="red"> What is the randn function for (use the help command)?</font>
End of explanation
%%matlab
figure
plot(x,y,'.')
print('donnees_cosinus.png','-dpng')
Explanation: <font color="blue">The randn command simulates noise following a normal (Gaussian) distribution with zero mean and unit variance.</font>
<font color="red"> Generate a plot of the relationship between x and y, and save this image to a file donnees_cosinus.png.</font>
End of explanation
%%matlab
ftheta = inline('theta(1)+theta(2)*cos(x-theta(3))','theta','x');
Explanation: <ol start="8">
<h4><li>We will now define a function of three parameters: </li></h4>
</ol>
End of explanation
%%matlab
ftheta([0 1 1],0)
Explanation: <font color="red">What is the value of the function ftheta for theta=[0 1 1] and x=0?</font>
End of explanation
%%matlab
theta_chap = nlinfit(x, y, ftheta, [0 1 1] )
Explanation: <ol start="9">
<h4><li>Estimate the regression coefficients using the `nlinfit` function: </li></h4>
</ol>
<font color="red">What are the values of theta_chap?</font>
End of explanation
%%matlab
figure
plot(x,y,'b');
hold on
plot(x,ftheta(theta_chap,x),'r');
Explanation: <font color="red">What is the argument [0 1 1] for? Try to reproduce the estimation with other values for this argument; does it affect the result?</font>
<font color="blue">The argument [0 1 1] plays the same role as the parameter [1 1] in question 4. Using a different value for this parameter (for example [0 1 10]) yields a different value for theta_chap. This is because the function ftheta is periodic, so there are infinitely many input values that give the same output.</font>
<font color="red"><ol start="10">
<h4><li> Now plot the result of the regression. </li></h4>
</ol></font>
End of explanation |
Description:
Word2Vec using MXNet Gluon API
The goal of this notebook is to show a Word2Vec Skipgram implementation with Negative Sampling, used to train word vectors on the text8 dataset.
Please note that Python-based deep learning frameworks are not well suited to Word2Vec. This is mainly due to Python's limitations, which force the data iterator to be single-threaded and rule out asynchronous SGD - the CPU cores cannot run the optimization in parallel as is done in the original Word2Vec C implementation.
This notebook is for demo purposes only - highlighting the features of the new Gluon API that make it very easy to prototype complex models with custom losses.
Step1: Download the text8 dataset
Step2: Read the text8 file to build vocabulary, word-to-word_index and word_index-to-word mappings.
Step3: Define the constants / hyperparameters. Set the context to GPU
Step4: Avoid running the cell below this comment for demo. It takes around 20 min to generate training data for 1 epoch using Python. This is one of the primary reasons why Word2Vec is not suitable for Python based deep learning frameworks.
We have already generated the training dataset and uploaded it to S3. Please use the next cell to download the dataset.
Step5: Download and read the pickled training dataset
Step7: Word2Vec training
Word2vec represents each word $w$ in a vocabulary $V$ of size $T$ as a low-dimensional dense vector $v_w$ in an embedding space $\mathbb{R}^D$. It attempts to learn the continuous word vectors $v_w$, $\forall w \in V$ , from a training corpus such that the spatial distance between words then describes the similarity between words, e.g., the closer two words are in the embedding space, the more similar they are semantically and syntactically.
The skipgram architecture tries to predict the context given a word. The problem of predicting context words is framed as a set of independent binary classification tasks. Then the goal is to independently predict the presence (or absence) of context words. For the word at position $t$ we consider all context words as positive examples and sample negatives at random from the dictionary. For a chosen context position $c$, using the binary logistic loss, we obtain the following negative log-likelihood
Step8: Use a large learning rate since the batch size is large - 512 in this example. In the original word2vec C implementation, stochastic gradient descent is used, i.e. batch_size = 1. However, batch_size = 1 slows training down drastically in deep learning frameworks, so a larger batch size is used. The larger the batch size, the faster the training and the slower the convergence.
Step9: The input embedding finally holds the word vectors we are interested in. We normalize all the word vectors and check nearest neighbours. The training was stopped early, so the nearest neighbours for all words won't look reasonable.
Step10: Looking at some nearest neighbours
Step11: t-SNE Visualization | Python Code:
import time
import numpy as np
import logging
import sys, random, time, math
import mxnet as mx
from mxnet import nd
from mxnet import gluon
from mxnet.gluon import Block, nn, autograd
import cPickle
from sklearn.preprocessing import normalize
Explanation: Word2Vec using MXNet Gluon API
The goal of this notebook is to show a Word2Vec Skipgram implementation with Negative Sampling, used to train word vectors on the text8 dataset.
Please note that Python-based deep learning frameworks are not well suited to Word2Vec. This is mainly due to Python's limitations, which force the data iterator to be single-threaded and rule out asynchronous SGD - the CPU cores cannot run the optimization in parallel as is done in the original Word2Vec C implementation.
This notebook is for demo purposes only - highlighting the features of the new Gluon API that make it very easy to prototype complex models with custom losses.
End of explanation
!wget http://mattmahoney.net/dc/text8.zip -O text8.gz && gzip -d text8.gz -f
Explanation: Download the text8 dataset
End of explanation
buf = open("text8").read()
tks = buf.split(' ')
vocab = {}
wid_to_word = ["NA"]
freq = [0] # Store frequency of all tokens.
data = [] # Store word indices
for tk in tks:
if len(tk) == 0:
continue
if tk not in vocab:
vocab[tk] = len(vocab) + 1
freq.append(0)
wid_to_word.append(tk)
wid = vocab[tk]
data.append(wid)
freq[wid] += 1
negative = [] # Build this table for negative sampling for words from a Unigram distribution.
for i, v in enumerate(freq):
if i == 0 or v < 5:
continue
v = int(math.pow(v * 1.0, 0.75))
negative += [i for _ in range(v)]
Explanation: Read the text8 file to build vocabulary, word-to-word_index and word_index-to-word mappings.
End of explanation
VOCAB_SIZE = len(wid_to_word)
BATCH_SIZE = 512
WORD_DIM = 100
NEGATIVE_SAMPLES = 5
# Preferably use GPU for faster training.
ctx = mx.gpu()
class DataBatch(object):
def __init__(self, data, label):
self.data = data
self.label = label
Explanation: Define the constants / hyperparameters. Set the context to GPU
End of explanation
class Word2VecDataIterator(mx.io.DataIter):
def __init__(self,batch_size=512, negative_samples=5, window=5):
super(Word2VecDataIterator, self).__init__()
self.batch_size = batch_size
self.negative_samples = negative_samples
self.window = window
self.data, self.negative, self.vocab, self.freq = (data, negative, vocab, freq)
@property
def provide_data(self):
return [('contexts', (self.batch_size, 1))]
@property
def provide_label(self):
return [('targets', (self.batch_size, self.negative + 1))]
def sample_ne(self):
return self.negative[random.randint(0, len(self.negative) - 1)]
def __iter__(self):
center_data = []
targets = []
result = 0
for pos, word in enumerate(self.data):
boundary = random.randint(1,self.window) # `b` in the original word2vec code
for index in range(-boundary, boundary+1):
                if (index != 0 and pos + index >= 0 and pos + index < len(self.data)):
center_word = word
context_word = self.data[pos + index]
if center_word != context_word:
targets_vec = []
center_data.append([word])
targets_vec.append(context_word)
while len(targets_vec) < self.negative_samples + 1:
w = self.sample_ne()
if w != word:
targets_vec.append(w)
targets.append(targets_vec)
# Check if batch size is full
if len(center_data) > self.batch_size:
data_all = [mx.nd.array(center_data[:self.batch_size])]
label_all = [mx.nd.array(targets[:self.batch_size])]
yield DataBatch(data_all, label_all)
center_data = center_data[self.batch_size:]
targets = targets[self.batch_size:]
data_iterator = Word2VecDataIterator(batch_size=BATCH_SIZE,
negative_samples=NEGATIVE_SAMPLES,
window=5)
all_batches = []
for batch in data_iterator:
all_batches.append(batch)
cPickle.dump(all_batches, open('all_batches.p', 'wb'))
Explanation: Avoid running the cell below this comment for demo. It takes around 20 min to generate training data for 1 epoch using Python. This is one of the primary reasons why Word2Vec is not suitable for Python based deep learning frameworks.
We have already generated the training dataset and uploaded it to S3. Please use the next cell to download the dataset.
End of explanation
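The iterator above slides a window over the corpus and pairs each center word with its context words. The core of that pairing logic, stripped of batching and negative sampling, looks roughly like this (toy corpus, and a fixed window instead of the randomized window size used above):

```python
# Skipgram (center, context) pair generation with a fixed window (toy corpus).
corpus = [1, 2, 3, 4, 5]   # word indices
window = 1                  # the real iterator draws a random window size per position

pairs = []
for pos, center in enumerate(corpus):
    for offset in range(-window, window + 1):
        ctx_pos = pos + offset
        if offset != 0 and 0 <= ctx_pos < len(corpus):
            pairs.append((center, corpus[ctx_pos]))

print(pairs)
# → [(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3), (4, 5), (5, 4)]
```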
!wget https://s3-us-west-2.amazonaws.com/gsaur-dev/input.p
all_batches = cPickle.load(open('input.p', 'rb'))
Explanation: Download and read the pickled training dataset
End of explanation
class Model(gluon.HybridBlock):
def __init__(self, **kwargs):
super(Model, self).__init__(**kwargs)
with self.name_scope():
# Embedding for input words with dimensions VOCAB_SIZE X WORD_DIM
self.center = nn.Embedding(input_dim=VOCAB_SIZE,
output_dim=WORD_DIM,
weight_initializer=mx.initializer.Uniform(1.0/WORD_DIM))
# Embedding for output words with dimensions VOCAB_SIZE X WORD_DIM
self.target = nn.Embedding(input_dim=VOCAB_SIZE,
output_dim=WORD_DIM,
weight_initializer=mx.initializer.Zero())
def hybrid_forward(self, F, center, targets, labels):
Returns the word2vec skipgram with negative sampling network.
:param F: F is a function space that depends on the type of other inputs. If their type is NDArray, then F will be mxnet.nd otherwise it will be mxnet.sym
:param center: A symbol/NDArray with dimensions (batch_size, 1). Contains the index of center word for each batch.
:param targets: A symbol/NDArray with dimensions (batch_size, negative_samples + 1). Contains the indices of 1 target word and `n` negative samples (n=5 in this example)
:param labels: A symbol/NDArray with dimensions (batch_size, negative_samples + 1). For 5 negative samples, the array for each batch is [1,0,0,0,0,0] i.e. label is 1 for target word and 0 for negative samples
:return: Return a HybridBlock object
center_vector = self.center(center)
target_vectors = self.target(targets)
pred = F.broadcast_mul(center_vector, target_vectors)
pred = F.sum(data = pred, axis = 2)
sigmoid = F.sigmoid(pred)
loss = F.sum(labels * F.log(sigmoid) + (1 - labels) * F.log(1 - sigmoid), axis=1)
loss = loss * -1.0 / BATCH_SIZE
loss_layer = F.MakeLoss(loss)
return loss_layer
model = Model()
model.initialize(ctx=ctx)
model.hybridize() # Convert to a symbolic network for efficiency.
Explanation: Word2Vec training
Word2vec represents each word $w$ in a vocabulary $V$ of size $T$ as a low-dimensional dense vector $v_w$ in an embedding space $\mathbb{R}^D$. It attempts to learn the continuous word vectors $v_w$, $\forall w \in V$ , from a training corpus such that the spatial distance between words then describes the similarity between words, e.g., the closer two words are in the embedding space, the more similar they are semantically and syntactically.
The skipgram architecture tries to predict the context given a word. The problem of predicting context words is framed as a set of independent binary classification tasks. Then the goal is to independently predict the presence (or absence) of context words. For the word at position $t$ we consider all context words as positive examples and sample negatives at random from the dictionary. For a chosen context position $c$, using the binary logistic loss, we obtain the following negative log-likelihood:
$$ \log (1 + e^{-s(w_t, w_c)}) + \sum_{n \in \mathcal{N}_{t,c}}^{}{\log (1 + e^{s(w_t, n)})}$$
where $w_t$ is a center word, $w_c$ is a context word, $\mathcal{N}_{t,c}$ is a set of negative examples sampled from the vocabulary. By denoting the logistic loss function $l : x \mapsto \log(1 + e^{-x})$, we can re-write the objective as:
$$ \sum_{t=1}^{T}{ \sum_{c \in C_t}^{}{ \big[ l(s(w_t, w_c))} + \sum_{n \in \mathcal{N}_{t,c}}^{}{l(-s(w_t, n))} \big]} $$
where $s(w_t, w_c) = u_{w_t}^T v_{w_c}$
End of explanation
trainer = gluon.Trainer(model.collect_params(), 'sgd', {'learning_rate':4,'clip_gradient':5})
labels = nd.zeros((BATCH_SIZE, NEGATIVE_SAMPLES+1), ctx=ctx)
labels[:,0] = 1
start_time = time.time()
epochs = 5
for e in range(epochs):
moving_loss = 0.
for i, batch in enumerate(all_batches):
center_words = batch.data[0].as_in_context(ctx)
target_words = batch.label[0].as_in_context(ctx)
with autograd.record():
loss = model(center_words, target_words, labels)
loss.backward()
trainer.step(1, ignore_stale_grad=True)
# Keep a moving average of the losses
if (i == 0) and (e == 0):
moving_loss = loss.asnumpy().sum()
else:
moving_loss = .99 * moving_loss + .01 * loss.asnumpy().sum()
if (i + 1) % 50 == 0:
print("Epoch %s, batch %s. Moving avg of loss: %s" % (e, i, moving_loss))
if i > 15000:
break
print("1 epoch took %s seconds" % (time.time() - start_time))
Explanation: Use a large learning rate since the batch size is large - 512 in this example. In the original word2vec C implementation, stochastic gradient descent is used, i.e. batch_size = 1. However, batch_size = 1 slows training down drastically in deep learning frameworks, so a larger batch size is used. The larger the batch size, the faster the training and the slower the convergence.
End of explanation
# The first parameter is the center (input) embedding weight matrix.
keys = list(model.collect_params().keys())
all_vecs = model.collect_params()[keys[0]].data().asnumpy()
normalize(all_vecs, copy=False)
# Keep only the top 50K most frequent embeddings
top_50k = (-np.array(freq)).argsort()[0:50000]
word_to_index = {}
index_to_word = []
for newid, word_id in enumerate(top_50k):
index_to_word.append(wid_to_word[word_id])
word_to_index[wid_to_word[word_id]] = newid
# Load pretrained vectors from pickle
!wget https://s3-us-west-2.amazonaws.com/gsaur-dev/syn0.p
all_vecs = cPickle.load(open('syn0.p', 'rb'))
def find_most_similar(word):
if word not in word_to_index:
print("Sorry word not found. Please try another one.")
else:
i1 = word_to_index[word]
prod = all_vecs.dot(all_vecs[i1])
i2 = (-prod).argsort()[1:10]
for i in i2:
print index_to_word[i]
Explanation: The input embedding finally holds the word vectors we are interested in. We normalize all the word vectors and check nearest neighbours. The training was stopped early, so the nearest neighbours for all words won't look reasonable.
End of explanation
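After L2-normalizing the rows, a dot product equals cosine similarity, so the nearest neighbours are simply the words with the highest dot products. A tiny self-contained sketch of that ranking with made-up 2-d "embeddings":

```python
# Toy normalized "embeddings": after L2 normalization, dot product = cosine similarity.
vecs = {
    "earth": (1.0, 0.0),
    "moon":  (0.8, 0.6),   # closest to "earth" (dot product 0.8)
    "car":   (0.0, 1.0),   # orthogonal to "earth" (dot product 0.0)
}

def most_similar(word, k=2):
    q = vecs[word]
    scored = [(sum(a * b for a, b in zip(q, v)), w)
              for w, v in vecs.items() if w != word]
    return [w for _, w in sorted(scored, reverse=True)[:k]]

print(most_similar("earth"))  # → ['moon', 'car']
```

This is the same `all_vecs.dot(all_vecs[i1])` / `argsort` pattern used in `find_most_similar` above, just on a vocabulary of three toy words.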
find_most_similar("earth")
find_most_similar("january")
find_most_similar("car")
Explanation: Looking at some nearest neighbours
End of explanation
from sklearn.manifold import TSNE
num_points = 450
tsne = TSNE(perplexity=50, n_components=2, init='pca', n_iter=10000)
two_d_embeddings = tsne.fit_transform(all_vecs[:num_points])
labels = index_to_word[:num_points]
from matplotlib import pylab
%matplotlib inline
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(20,20)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
plot(two_d_embeddings, labels)
Explanation: t-SNE Visualization
End of explanation |
Description:
Spatial Joins
A spatial join uses binary predicates
such as intersects and crosses to combine two GeoDataFrames based on the spatial relationship
between their geometries.
A common use case might be a spatial join between a point layer and a polygon layer where you want to retain the point geometries and grab the attributes of the intersecting polygons.
Types of spatial joins
We currently support the following methods of spatial joins. We refer to left_df and right_df, which correspond to the two dataframes passed in as args.
Left outer join
In a LEFT OUTER JOIN (how='left'), we keep all rows from the left and duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the right if they intersect and lose right rows that don't intersect. A left outer join implies that we are interested in retaining the geometries of the left.
This is equivalent to the PostGIS query
Step1: Joins
Step2: We're not limited to using the intersection binary predicate. Any of the Shapely geometry methods that return a Boolean can be used by specifying the op kwarg. | Python Code:
import os
from shapely.geometry import Point
from geopandas import GeoDataFrame, read_file
from geopandas.tools import overlay
# NYC Boros
zippath = os.path.abspath('nybb_14aav.zip')
polydf = read_file('/nybb_14a_av/nybb.shp', vfs='zip://' + zippath)
# Generate some points
b = [int(x) for x in polydf.total_bounds]
N = 8
pointdf = GeoDataFrame([
{'geometry' : Point(x, y), 'value1': x + y, 'value2': x - y}
for x, y in zip(range(b[0], b[2], int((b[2]-b[0])/N)),
range(b[1], b[3], int((b[3]-b[1])/N)))])
pointdf
polydf
pointdf.plot()
polydf.plot()
Explanation: Spatial Joins
A spatial join uses binary predicates
such as intersects and crosses to combine two GeoDataFrames based on the spatial relationship
between their geometries.
A common use case might be a spatial join between a point layer and a polygon layer where you want to retain the point geometries and grab the attributes of the intersecting polygons.
Types of spatial joins
We currently support the following methods of spatial joins. We refer to left_df and right_df, which correspond to the two dataframes passed in as args.
Left outer join
In a LEFT OUTER JOIN (how='left'), we keep all rows from the left and duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the right if they intersect and lose right rows that don't intersect. A left outer join implies that we are interested in retaining the geometries of the left.
This is equivalent to the PostGIS query:
```
SELECT pts.geom, pts.id as ptid, polys.id as polyid
FROM pts
LEFT OUTER JOIN polys
ON ST_Intersects(pts.geom, polys.geom);
geom | ptid | polyid
--------------------------------------------+------+--------
010100000040A9FBF2D88AD03F349CD47D796CE9BF | 4 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 20
0101000000F0D88AA0E1A4EEBF7052F7E5B115E9BF | 2 | 20
0101000000818693BA2F8FF7BF4ADD97C75604E9BF | 1 |
(5 rows)
```
Right outer join
In a RIGHT OUTER JOIN (how='right'), we keep all rows from the right and duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the left if they intersect and lose left rows that don't intersect. A right outer join implies that we are interested in retaining the geometries of the right.
This is equivalent to the PostGIS query:
```
SELECT polys.geom, pts.id as ptid, polys.id as polyid
FROM pts
RIGHT OUTER JOIN polys
ON ST_Intersects(pts.geom, polys.geom);
geom | ptid | polyid
----------+------+--------
01...9BF | 4 | 10
01...9BF | 3 | 10
02...7BF | 3 | 20
02...7BF | 2 | 20
00...5BF | | 30
(5 rows)
```
Inner join
In an INNER JOIN (how='inner'), we keep rows from the right and left only where their binary predicate is True. We duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the right and left only if they intersect and lose all rows that do not. An inner join implies that we are interested in retaining the geometries of the left.
This is equivalent to the PostGIS query:
```
SELECT pts.geom, pts.id as ptid, polys.id as polyid
FROM pts
INNER JOIN polys
ON ST_Intersects(pts.geom, polys.geom);
geom | ptid | polyid
--------------------------------------------+------+--------
010100000040A9FBF2D88AD03F349CD47D796CE9BF | 4 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 20
0101000000F0D88AA0E1A4EEBF7052F7E5B115E9BF | 2 | 20
(4 rows)
```
Spatial Joins between two GeoDataFrames
Let's take a look at how we'd implement these using GeoPandas. First, load up the NYC test data into GeoDataFrames:
End of explanation
from geopandas.tools import sjoin
join_left_df = sjoin(pointdf, polydf, how="left")
join_left_df
# Note the NaNs where the point did not intersect a boro
join_right_df = sjoin(pointdf, polydf, how="right")
join_right_df
# Note Staten Island is repeated
join_inner_df = sjoin(pointdf, polydf, how="inner")
join_inner_df
# Note the lack of NaNs; dropped anything that didn't intersect
Explanation: Joins
End of explanation
sjoin(pointdf, polydf, how="left", op="within")
Explanation: We're not limited to using the intersection binary predicate. Any of the Shapely geometry methods that return a Boolean can be used by specifying the op kwarg.
End of explanation |
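A spatial join has to evaluate its predicate over many candidate pairs, and implementations typically prune first with a cheap bounding-box overlap test before running the exact geometry predicate. A stdlib-only sketch of that coarse test (illustrative only, not GeoPandas' actual implementation):

```python
def bbox_intersects(a, b):
    # a, b: (minx, miny, maxx, maxy) axis-aligned bounding boxes
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# overlapping boxes
print(bbox_intersects((0, 0, 2, 2), (1, 1, 3, 3)))
# disjoint boxes
print(bbox_intersects((0, 0, 1, 1), (2, 2, 3, 3)))
```

Only pairs that survive this cheap filter need the expensive exact test (intersects, within, etc.), which is what makes spatial joins on large layers tractable.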
9,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding Document Clustering
Clustering is one of the most important unsupervised machine learning techniques. These algorithms come in handy, especially in situations where labelled data is a luxury. Clustering helps us understand the underlying patterns in data (in particular, which items are similar to one another) and can also bootstrap certain supervised learning approaches.
Clustering techniques have been studied in depth over the years and there are some very powerful clustering algorithms available. For this tutorial, we will be working with a movie dataset containing movie plots, cast, genres and other related information. We will be working with K-Means and Ward hierarchical clustering methods.
Load Dataset
Step1: Your Turn
Step2: Extract TF-IDF Features
Step3: Cluster Movies using K-Means
Step4: Affinity Propagation
Step5: Hierarchical Clustering
So far, we were successful in clustering movies using K-Means. But is there any further level of understanding we can extract from this dataset in an unsupervised manner?
Hierarchical Clustering to the rescue. K-Means helped us understand similarities amongst movies; with hierarchical clustering we can aim at understanding abstract or higher-level concepts which are common across groups of movies. There are primarily two ways in which hierarchical clustering can be performed
Step6: Calculate Linkage Matrix using Cosine Similarity
Step7: Plot Hierarchical Structure as a Dendrogram | Python Code:
import pandas as pd
df = pd.read_csv('tmdb_5000_movies.csv.gz',
compression='gzip')
df.info()
df.head()
df = df[['title', 'tagline', 'overview', 'genres', 'popularity']]
df.tagline.fillna('', inplace=True)
df['description'] = df['tagline'].map(str) + ' ' + df['overview']
df.dropna(inplace=True)
df.info()
df.head()
Explanation: Understanding Document Clustering
Clustering is one of the most important unsupervised machine learning techniques. These algorithms come in handy, especially in situations where labelled data is a luxury. Clustering helps us understand the underlying patterns in data (in particular, which items are similar to one another) and can also bootstrap certain supervised learning approaches.
Clustering techniques have been studied in depth over the years and there are some very powerful clustering algorithms available. For this tutorial, we will be working with a movie dataset containing movie plots, cast, genres and other related information. We will be working with K-Means and Ward hierarchical clustering methods.
Load Dataset
End of explanation
import nltk
import re
import numpy as np
stop_words = nltk.corpus.stopwords.words('english')
def normalize_document(doc):
# lower case and remove special characters\whitespaces
doc = re.sub(r'[^a-zA-Z0-9\s]', '', doc, flags=re.I | re.A)  # pass flags as a keyword; the 4th positional arg of re.sub is count
doc = doc.lower()
doc = doc.strip()
# tokenize document
tokens = nltk.word_tokenize(doc)
# filter stopwords out of document
filtered_tokens = [token for token in tokens if token not in stop_words]
# re-create document from filtered tokens
doc = ' '.join(filtered_tokens)
return doc
normalize_corpus = np.vectorize(normalize_document)
norm_corpus = normalize_corpus(list(df['description']))
len(norm_corpus)
Explanation: Your Turn: Cluster Similar Movies
Here you will learn how to cluster text documents (in this case movies). We will use the following pipeline:
- Text pre-processing
- Feature Engineering
- Clustering Using K-Means
- Finding Optimal Value for K
- Prepare Movie Clusters
Clustering is an unsupervised approach to find groups of similar items in any given dataset. There are different clustering algorithms and K-Means is a pretty simple yet affect one. Most movies span different emotions and can be categorized into multiple genres (same is the case with movies listed in our current dataset). Can clustering of movie descriptions help us understand these groupings?
Similarity analysis (in the previous section) was a good starting point, but can we do better?
Text pre-processing
We will do some basic text pre-processing on our movie descriptions before we build our features
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
stop_words = stop_words + ['one', 'two', 'get']
cv = CountVectorizer(ngram_range=(1, 2), min_df=10, max_df=0.8, stop_words=stop_words)
cv_matrix = cv.fit_transform(norm_corpus)
cv_matrix.shape
Explanation: Extract TF-IDF Features
End of explanation
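What `ngram_range=(1, 2)` means above: every document is represented by counts over both single words and adjacent word pairs. A stdlib sketch of that counting step (CountVectorizer additionally lower-cases, tokenizes, applies the stop-word list and the min_df/max_df pruning, which this skips):

```python
from collections import Counter

def ngrams(tokens, n):
    # contiguous n-word windows joined into single feature strings
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "space crew battles alien on space ship".split()
counts = Counter(ngrams(tokens, 1) + ngrams(tokens, 2))
print(counts['space'], counts['space ship'], counts['space crew'])
```

The fitted `cv_matrix` is simply this kind of count, stacked row-per-document into a sparse matrix.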
from sklearn.cluster import KMeans
NUM_CLUSTERS = 6
km = KMeans(n_clusters=NUM_CLUSTERS, max_iter=10000, n_init=50, random_state=42).fit(cv_matrix)
km
from collections import Counter
Counter(km.labels_)
df['kmeans_cluster'] = km.labels_
movie_clusters = (df[['title', 'kmeans_cluster', 'popularity']]
.sort_values(by=['kmeans_cluster', 'popularity'],
ascending=False)
.groupby('kmeans_cluster').head(20))
movie_clusters = movie_clusters.copy(deep=True)
feature_names = cv.get_feature_names()
topn_features = 15
ordered_centroids = km.cluster_centers_.argsort()[:, ::-1]
# get key features for each cluster
# get movies belonging to each cluster
for cluster_num in range(NUM_CLUSTERS):
key_features = [feature_names[index]
for index in ordered_centroids[cluster_num, :topn_features]]
movies = movie_clusters[movie_clusters['kmeans_cluster'] == cluster_num]['title'].values.tolist()
print('CLUSTER #'+str(cluster_num+1))
print('Key Features:', key_features)
print('Popular Movies:', movies)
print('-'*80)
from sklearn.metrics.pairwise import cosine_similarity
cosine_sim_features = cosine_similarity(cv_matrix)
km = KMeans(n_clusters=NUM_CLUSTERS, max_iter=10000, n_init=50, random_state=42).fit(cosine_sim_features)
Counter(km.labels_)
df['kmeans_cluster'] = km.labels_
movie_clusters = (df[['title', 'kmeans_cluster', 'popularity']]
.sort_values(by=['kmeans_cluster', 'popularity'],
ascending=False)
.groupby('kmeans_cluster').head(20))
movie_clusters = movie_clusters.copy(deep=True)
# get movies belonging to each cluster
for cluster_num in range(NUM_CLUSTERS):
movies = movie_clusters[movie_clusters['kmeans_cluster'] == cluster_num]['title'].values.tolist()
print('CLUSTER #'+str(cluster_num+1))
print('Popular Movies:', movies)
print('-'*80)
Explanation: Cluster Movies using K-Means
End of explanation
from sklearn.cluster import AffinityPropagation
ap = AffinityPropagation(max_iter=1000)
ap.fit(cosine_sim_features)
res = Counter(ap.labels_)
res.most_common(10)
df['affprop_cluster'] = ap.labels_
filtered_clusters = [item[0] for item in res.most_common(8)]
filtered_df = df[df['affprop_cluster'].isin(filtered_clusters)]
movie_clusters = (filtered_df[['title', 'affprop_cluster', 'popularity']]
.sort_values(by=['affprop_cluster', 'popularity'],
ascending=False)
.groupby('affprop_cluster').head(20))
movie_clusters = movie_clusters.copy(deep=True)
# get key features for each cluster
# get movies belonging to each cluster
for cluster_num in range(len(filtered_clusters)):
movies = movie_clusters[movie_clusters['affprop_cluster'] == filtered_clusters[cluster_num]]['title'].values.tolist()
print('CLUSTER #'+str(filtered_clusters[cluster_num]))
print('Popular Movies:', movies)
print('-'*80)
Explanation: Affinity Propagation
End of explanation
from scipy.cluster.hierarchy import ward, dendrogram
from sklearn.metrics.pairwise import cosine_similarity
Explanation: Hierarchical Clustering
So far, we were successful in clustering movies using K-Means. But is there any further level of understanding we can extract from this dataset in an unsupervised manner?
Hierarchical Clustering to the rescue. K-Means helped us understand similarities amongst movies; with hierarchical clustering we can aim at understanding abstract or higher-level concepts which are common across groups of movies. There are primarily two ways in which hierarchical clustering can be performed:
Divisive : The algorithm begins with every element in one big generic cluster and then goes on dividing them into specific clusters in a recursive manner.
Agglomerative : In this case, the algorithm starts by placing every element into a cluster of its own and then goes on merging them into more general clusters in a recursive manner (till they all merge into one big cluster).
For this tutorial, we will work with the Ward clustering algorithm. Ward clustering is an agglomerative clustering method, i.e. at each stage, the pair of clusters with the minimum between-cluster distance (or within-cluster sum of squares, WCSS) is merged.
To work with the Ward clustering algorithm, we perform the following steps:
- Prepare a cosine distance matrix
- Calculate a linkage_matrix
- Plot the hierarchical structure as a dendrogram.
End of explanation
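The agglomerative idea in miniature: start with every item in its own cluster and repeatedly merge the closest pair. The sketch below finds one merge candidate using centroid distance on 1-D values (a simplification: Ward's actual criterion merges the pair that least increases total within-cluster variance):

```python
def closest_pair(clusters):
    # clusters: dict name -> list of 1-D values; distance between cluster centroids
    names = sorted(clusters)
    best = None
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a = sum(clusters[names[i]]) / len(clusters[names[i]])
            b = sum(clusters[names[j]]) / len(clusters[names[j]])
            d = abs(a - b)
            if best is None or d < best[0]:
                best = (d, names[i], names[j])
    return best

clusters = {'a': [0.0], 'b': [0.2], 'c': [5.0]}
d, left, right = closest_pair(clusters)
print(left, right)  # the pair that would be merged first
```

Repeating this merge until one cluster remains produces exactly the tree that `scipy`'s linkage matrix encodes and the dendrogram below draws.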
def ward_hierarchical_clustering(feature_matrix):
cosine_distance = 1 - cosine_similarity(feature_matrix)
linkage_matrix = ward(cosine_distance)
return linkage_matrix
Explanation: Calculate Linkage Matrix using Cosine Similarity
End of explanation
def plot_hierarchical_clusters(linkage_matrix, movie_data, p=100, figure_size=(8,12)):
# set size
fig, ax = plt.subplots(figsize=figure_size)
movie_titles = movie_data['title'].values.tolist()
# plot dendrogram
R = dendrogram(linkage_matrix, orientation="left", labels=movie_titles,
truncate_mode='lastp',
p=p,
no_plot=True)
temp = {R["leaves"][ii]: movie_titles[ii] for ii in range(len(R["leaves"]))}
def llf(xx):
return "{}".format(temp[xx])
ax = dendrogram(
linkage_matrix,
truncate_mode='lastp',
orientation="left",
p=p,
leaf_label_func=llf,
leaf_font_size=10.,
)
plt.tick_params(axis= 'x',
which='both',
bottom='off',
top='off',
labelbottom='off')
plt.tight_layout()
plt.savefig('movie_hierachical_clusters.png', dpi=200)
linkage_matrix = ward_hierarchical_clustering(cv_matrix)
plot_hierarchical_clusters(linkage_matrix,
p=100,
movie_data=df,
figure_size=(12, 14))
Explanation: Plot Hierarchical Structure as a Dendrogram
End of explanation |
9,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arduino Joystick Shield Example
This example shows how to use the Sparkfun Joystick
on the board. The Joystick shield contains an analog joystick which is
connected to A0 and A1 analog channels of the Arduino connector. It also
contains four push buttons connected at D3-D6 pins of the Arduino connector.
For this notebook, an Arduino joystick shield is required.
Step1: 1. Use Microblaze to control the joystick
Make sure the joystick shield is plugged in. For the Microblaze to transfer
direction or button values back, we need to define a few additional constants.
Step2: The joystick can measure horizontal direction x
and vertical direction y.
The thresholds for raw values are
Step3: 2. Define Python wrapper for Microblaze functions
We will also need to initialize the joystick before we can read any value.
The following function returns 0 if the initialization is successful.
Step4: The following Python wrappers will call the Microblaze functions internally.
Step5: 3. Find direction
We can measure the direction by calling read_direction().
For the next cell, leave the joystick in its natural position.
Step6: Let's pull the joystick towards the bottom right corner.
Step7: 4. Read button values
Based on the schematic
of the shield, we can see the read value will go low if the corresponding
button has been pressed.
Run the next cell while pushing both button D4 and D6. | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: Arduino Joystick Shield Example
This example shows how to use the Sparkfun Joystick
on the board. The Joystick shield contains an analog joystick which is
connected to A0 and A1 analog channels of the Arduino connector. It also
contains four push buttons connected at D3-D6 pins of the Arduino connector.
For this notebook, an Arduino joystick shield is required.
End of explanation
DIRECTION_VALUE_MAP = {
0: 'up',
1: 'up_right',
2: 'right',
3: 'down_right',
4: 'down',
5: 'down_left',
6: 'left',
7: 'up_left',
8: 'center'
}
BUTTON_INDEX_MAP = {
'D3': 0,
'D4': 1,
'D5': 2,
'D6': 3
}
Explanation: 1. Use Microblaze to control the joystick
Make sure the joystick shield is plugged in. For the Microblaze to transfer
direction or button values back, we need to define a few additional constants.
End of explanation
%%microblaze base.ARDUINO
#include "xparameters.h"
#include "circular_buffer.h"
#include "gpio.h"
#include "xsysmon.h"
#include <pyprintf.h>
#define X_THRESHOLD_LOW 25000
#define X_THRESHOLD_HIGH 39000
#define Y_THRESHOLD_LOW 25000
#define Y_THRESHOLD_HIGH 39000
typedef enum directions {
up = 0,
right_up,
right,
right_down,
down,
left_down,
left,
left_up,
centered
}direction_e;
static gpio gpio_buttons[4];
static XSysMon SysMonInst;
XSysMon_Config *SysMonConfigPtr;
XSysMon *SysMonInstPtr = &SysMonInst;
int init_joystick(){
unsigned int i, status;
SysMonConfigPtr = XSysMon_LookupConfig(XPAR_SYSMON_0_DEVICE_ID);
if(SysMonConfigPtr == NULL)
return -1;
status = XSysMon_CfgInitialize(
SysMonInstPtr, SysMonConfigPtr, SysMonConfigPtr->BaseAddress);
if(XST_SUCCESS != status)
return -1;
for (i=0; i<4; i++){
gpio_buttons[i] = gpio_open(i+3);
gpio_set_direction(gpio_buttons[i], GPIO_IN);
}
return 0;
}
unsigned int get_direction_value(){
direction_e direction;
unsigned int x_position, y_position;
while ((XSysMon_GetStatus(SysMonInstPtr) &
XSM_SR_EOS_MASK) != XSM_SR_EOS_MASK);
x_position = XSysMon_GetAdcData(SysMonInstPtr, XSM_CH_AUX_MIN+1);
y_position = XSysMon_GetAdcData(SysMonInstPtr, XSM_CH_AUX_MIN+9);
if (x_position > X_THRESHOLD_HIGH) {
if (y_position > Y_THRESHOLD_HIGH) {
direction = right_up;
} else if (y_position < Y_THRESHOLD_LOW) {
direction = right_down;
} else {
direction = right;
}
} else if (x_position < X_THRESHOLD_LOW) {
if (y_position > Y_THRESHOLD_HIGH) {
direction = left_up;
} else if (y_position < Y_THRESHOLD_LOW) {
direction = left_down;
} else {
direction = left;
}
} else {
if (y_position > Y_THRESHOLD_HIGH) {
direction = up;
} else if (y_position < Y_THRESHOLD_LOW) {
direction = down;
} else {
direction = centered;
}
}
return direction;
}
unsigned int get_button_value(unsigned int btn_i){
unsigned int value;
value = gpio_read(gpio_buttons[btn_i]);
return value;
}
Explanation: The joystick can measure horizontal direction x
and vertical direction y.
The thresholds for raw values are:
Horizontal:
| Threshold | Direction |
| ------------------ |:------------:|
| x < 25000 | left |
| 25000 < x < 39000 | center |
| x > 39000 | right |
Vertical:
| Threshold | Direction |
| ------------------ |:------------:|
| y < 25000 | down |
| 25000 < y < 39000 | center |
| y > 39000 | up |
End of explanation
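The same thresholding the Microblaze C code performs can be mirrored in plain Python to sanity-check raw ADC pairs (same threshold constants; this is a desk-check sketch, not code that runs on the board):

```python
X_LOW, X_HIGH = 25000, 39000
Y_LOW, Y_HIGH = 25000, 39000

def classify(x, y):
    # map a raw (x, y) ADC reading to one of the nine direction names
    horiz = 'left' if x < X_LOW else ('right' if x > X_HIGH else '')
    vert = 'down' if y < Y_LOW else ('up' if y > Y_HIGH else '')
    if not horiz and not vert:
        return 'center'
    return '_'.join(part for part in (vert, horiz) if part)

print(classify(32000, 32000), classify(40000, 40000), classify(20000, 32000))
```

The compound names (`up_right`, `down_left`, …) match the entries in `DIRECTION_VALUE_MAP` defined earlier.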
init_joystick()
Explanation: 2. Define Python wrapper for Microblaze functions
We will also need to initialize the joystick before we can read any value.
The following function returns 0 if the initialization is successful.
End of explanation
def read_direction():
direction_value = get_direction_value()
return DIRECTION_VALUE_MAP[direction_value]
def read_button(button):
return get_button_value(BUTTON_INDEX_MAP[button])
Explanation: The following Python wrappers will call the Microblaze functions internally.
End of explanation
read_direction()
Explanation: 3. Find direction
We can measure the direction by calling read_direction().
For the next cell, leave the joystick in its natural position.
End of explanation
read_direction()
Explanation: Let's pull the joystick towards the bottom right corner.
End of explanation
for button in BUTTON_INDEX_MAP:
if read_button(button):
print('Button {} is not pressed.'.format(button))
else:
print('Button {} is pressed.'.format(button))
Explanation: 4. Read button values
Based on the schematic
of the shield, we can see the read value will go low if the corresponding
button has been pressed.
Run the next cell while pushing both button D4 and D6.
End of explanation |
9,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detached Binary
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials
Step3: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
Step4: Now we'll compute synthetics at the times provided using the default options
Step5: Plotting | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Detached Binary: Roche vs Rotstar
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.add_dataset('mesh', compute_times=[0.75], dataset='mesh01')
Explanation: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials:
End of explanation
b['requiv@primary@component'] = 1.8
Explanation: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
End of explanation
b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')
b.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')
Explanation: Now we'll compute synthetics at the times provided using the default options
End of explanation
afig, mplfig = b.plot(model='rochemodel',show=True)
afig, mplfig = b.plot(model='rotstarmodel',show=True)
Explanation: Plotting
End of explanation |
9,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What country are most billionaires from? For the top ones, how many billionaires per billion people?
Step1: Who are the top 10 richest billionaires?
Step2: What's the average wealth of a billionaire? Male? Female?
What's the average wealth of a billionaire? Male? Female
Step3: Who is the poorest billionaire?
Step4: Given the richest person in a country, what % of the GDP is their wealth?
Step5: Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India
What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
Step6: How many self made billionaires vs. others?
Step7: How old are billionaires?
Step8: How old are billionaires self made vs. non self made?
Step9: Maybe plot their net worth vs age (scatterplot)
Make a bar graph of the top 10 or 20 richest | Python Code:
df['citizenship'].value_counts().head()
df.groupby('citizenship')['networthusbillion'].sum().sort_values(ascending=False)
us_pop = 0.3189 # billion (2014)
us_bill = df[df['citizenship'] == 'United States']
print("There are", len(us_bill)/us_pop, "billionaires per billion people in the United States.")
germ_pop = 0.08062 # billion (2013)
germ_bill = df[df['citizenship'] == 'Germany']
print("There are", len(germ_bill)/germ_pop, "billionaires per billion people in Germany.")
china_pop = 1.357 # billion (2013)
china_bill = df[df['citizenship'] == 'China']
print("There are", len(china_bill)/china_pop, "billionaires per billion people in China.")
russia_pop = 0.1435 # billion (2013)
russia_bill = df[df['citizenship'] == 'Russia']
print("There are", len(russia_bill)/russia_pop, "billionaires per billion people in Russia.")
japan_pop = 0.1273 # billion (2013)
japan_bill = df[df['citizenship'] == 'Japan']
print("There are", len(japan_bill)/japan_pop, "billionaires per billion people in Japan.")
print(df.columns)
Explanation: What country are most billionaires from? For the top ones, how many billionaires per billion people?
End of explanation
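The repeated per-country blocks above can be collapsed into a single vectorized division once the counts and populations live in Series objects. A sketch with illustrative numbers (populations in billions of people; the counts are made up, not taken from the dataset):

```python
import pandas as pd

# illustrative billionaire counts and populations (billions of people)
counts = pd.Series({'United States': 492, 'China': 152})
pop_bn = pd.Series({'United States': 0.3189, 'China': 1.357})

per_billion = counts / pop_bn  # billionaires per billion people
print(per_billion.round(1))
```

With the real frame, `counts` would come from `df['citizenship'].value_counts()` restricted to the countries of interest.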
recent = df[df['year'] == 2014]
# if it is not recent then there are duplicates for diff years
recent.sort_values('rank').head(10)
recent['networthusbillion'].describe()
Explanation: Who are the top 10 richest billionaires?
End of explanation
print("The average wealth of a billionaire is", recent['networthusbillion'].mean(), "billion dollars")
male = recent[(recent['gender'] == 'male')]
female = recent[(recent['gender'] == 'female')]
print("The average wealth of a male billionaire is", male['networthusbillion'].mean(), "billion dollars")
print("The average wealth of a female billionaire is", female['networthusbillion'].mean(), "billion dollars")
Explanation: What's the average wealth of a billionaire? Male? Female?
What's the average wealth of a billionaire? Male? Female
End of explanation
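Instead of slicing male and female frames separately, a single groupby yields both averages in one pass; toy data for illustration:

```python
import pandas as pd

toy = pd.DataFrame({
    'gender': ['male', 'male', 'female', 'female'],
    'networthusbillion': [4.0, 2.0, 5.0, 1.0],
})
# one mean per gender, computed in a single pass
means = toy.groupby('gender')['networthusbillion'].mean()
print(means)
```

On the real data this would be `recent.groupby('gender')['networthusbillion'].mean()`.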
recent.sort_values('networthusbillion').head(1)
# Who are the top 10 poorest billionaires?
recent.sort_values('networthusbillion').head(10)
# 'What is relationship to company'? And what are the most common relationships?
#top 10 most common relationships to company
df['relationshiptocompany'].value_counts().head(10)
# Most common source of wealth? Male vs. female?
print("The most common source of wealth is", df['sourceofwealth'].value_counts().idxmax())
print("The most common source of wealth for males is", male['sourceofwealth'].value_counts().idxmax())
print("The most common source of wealth for females is", female['sourceofwealth'].value_counts().idxmax())
# .idxmax() returns just the label of the most common source, without the "Name: sourceofwealth, dtype: int64" footer that printing a Series adds
Explanation: Who is the poorest billionaire?
End of explanation
# .iloc[0] pulls out the value itself, instead of building a dict keyed by the row label (282)
richest = df[df['citizenship'] == 'United States'].sort_values('rank')['networthusbillion'].iloc[0]
richest
# US GDP is roughly 16.77 trillion USD; use ** for exponentiation (^ is bitwise XOR in Python)
US_GDP = 1.677 * (10**13)
US_GDP
# the net worth column is in billions of USD, so convert before taking the share of GDP
100 * (richest * 1e9) / US_GDP
Explanation: Given the richest person in a country, what % of the GDP is their wealth?
End of explanation
recent['sector'].value_counts().head(10)
df.groupby('sector')['networthusbillion'].sum()
Explanation: Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India
What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
End of explanation
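For the "pit the US vs India" comparison suggested above, the country-level wealth sums can be divided by GDP figures. A sketch with a toy frame and hypothetical GDP numbers (billions of USD; neither the wealth values nor the GDP figures are taken from the dataset):

```python
import pandas as pd

toy = pd.DataFrame({
    'citizenship': ['United States', 'United States', 'India'],
    'networthusbillion': [76.0, 18.5, 23.6],
})
gdp_bn = {'United States': 16770.0, 'India': 1857.0}  # hypothetical figures

# total billionaire wealth per country, as a percent of that country's GDP
wealth = toy.groupby('citizenship')['networthusbillion'].sum()
share_pct = {c: 100 * wealth.get(c, 0.0) / g for c, g in gdp_bn.items()}
print(share_pct)
```

With the real data, `wealth` would be `recent.groupby('citizenship')['networthusbillion'].sum()` and the GDP figures would come from an external source.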
(recent['selfmade'] == 'self-made').value_counts()
Explanation: How many self made billionaires vs. others?
End of explanation
# recent['age'].value_counts().sort_values()
print("The average billionnaire is", round(recent['age'].mean()), "years old.")
Explanation: How old are billionaires?
End of explanation
df.groupby('selfmade')['age'].mean()
# or different industries?
df.groupby('sector')['age'].mean()
#youngest billionnaires
recent.sort_values('age').head(10)
#oldest billionnaires
recent.sort_values('age', ascending =False).head(10)
#Age distribution - maybe make a graph about it?
import matplotlib.pyplot as plt
%matplotlib inline
# This will complain if matplotlib is not installed.
his = df['age'].hist(range=[0, 100])
his.set_title('Distribution of Age Amongst Billionaires')
his.set_xlabel('Age(years)')
his.set_ylabel('# of Billionnaires')
# Maybe just made a graph about how wealthy they are in general?
import matplotlib.pyplot as plt
%matplotlib inline
# This will complain if matplotlib is not installed.
his = df['networthusbillion'].hist(range=[0, 45])
his.set_title('Distribution of Wealth Amongst Billionaires')
his.set_xlabel('Wealth(Billions)')
his.set_ylabel('# of Billionnaires')
Explanation: How old are billionaires self made vs. non self made?
End of explanation
recent.plot(kind='scatter', x='networthusbillion', y='age')
recent.plot(kind='scatter', x='age', y='networthusbillion', alpha = 0.2)
Explanation: Maybe plot their net worth vs age (scatterplot)
Make a bar graph of the top 10 or 20 richest
End of explanation |
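The bar graph mentioned above never got a cell; a sketch with made-up names and values (with the real frame you would feed it `recent.sort_values('networthusbillion', ascending=False).head(10)`):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt

# placeholder names and net worths, for illustration only
names = ['Richest A', 'Richest B', 'Richest C']
worth = [76.0, 72.0, 67.0]

fig, ax = plt.subplots()
ax.barh(names, worth)
ax.set_xlabel('Net worth (US$ billions)')
ax.invert_yaxis()  # richest on top
fig.tight_layout()
```

Horizontal bars keep long names readable, which matters once real billionaire names replace the placeholders.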
9,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a python / R implementation for spatial analysis of radar rainfall fields. All credit for the R code implementation goes to Marc Schleiss
Notes before running
Step1: Inside the R environment import these geospatial packages
Step2: Set colors within the R environment
Step3: Read pandas coordinates of the radar grid
Step4: Read the 24h dataset and (re)arrange the pandas DataFrame
Step5: Activate the pandas to R conversion interface
Step6: Select only gridpoints > 0mm rain (wet mask) and assign it in the R environment
Step7: Transform dataset in R to geospatial dataset
Step8: Plot map with python
Step9: Bypass plot R map
Step10: Generate an isotropic variogram (2km separated lags, max 100km)
Step11: Generate and save the 2D variogram map
Step12: Investigate the (an)isotropy of the dataset
Only possible with up to 1499 values. Therefore we sort the rainfall values in descending order and assign the sorted dataset to the R environment.
Step13: Returns the direction of the minimum variability clockwise from North and the anisotropy ratio
Step14: Compute directional variograms with anisotropy direction
Step15: Fit initial spherical variogram to isotropic variogram
Step16: Save image of fitted variogram
Step17: fitted range
Step18: fitted nugget
Step19: fitted sill
Step20: sum of squared errors
Step21: Fit exponential model
Step22: sum of squared errors
Step23: Save image | Python Code:
from rpy2.robjects.packages import importr
from rpy2.robjects import r
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: This is a python / R implementation for spatial analysis of radar rainfall fields. All courtesy for the R code implementation goes to Marc Schleiss
Notes before running:
o) make sure to have installed R properly
o) install the python library rpy2
Linux/Mac users should be fine with just pip install rpy2
for windows, consider using anaconda and install rpy2
o) install the R libraries sp, gstat and intamap inside the R environment (best as sudo/adminstrator):
install.packages("sp")
install.packages("gstat")
install.packages("intamap")
To plot data in R:
png("myvariogram_map_24h.png",height=600,width=600)
print(plot(myisomap))
dev.off()
Import python / R interface packages
End of explanation
sp = importr('sp')
gstat = importr('gstat')
intamap = importr('intamap')
Explanation: Inside the R environment import these geospatial packages
End of explanation
r('jet.colors <- c("#00007F","blue","#007FFF","cyan","#7FFF7F","yellow","#FF7F00","red","#7F0000")')
r('col.palette <- colorRampPalette(jet.colors)')
Explanation: Set colors within the R environment
End of explanation
coords = pd.read_csv('./radar_xy.csv', header=None)
coords.columns = ['x', 'y']
coords.head()
Explanation: Read pandas coordinates of the radar grid
End of explanation
rainfall = pd.read_csv('./radar_sent/radar_snap_24h_2011_08_05-00_00.csv', header=None)
rainfall = pd.DataFrame(rainfall.iloc[0,5::])
rainfall.index = np.arange(0,len(rainfall),1)
rainfall.columns = ['R']
rainfall['x'] = coords['x']
rainfall['y'] = coords['y']
rainfall.head()
Explanation: Read the 24h dataset and (re)arrange the pandas DataFrame
End of explanation
from rpy2.robjects import pandas2ri
pandas2ri.activate()
Explanation: Activate the pandas to R conversion interface
End of explanation
mask = rainfall.R>0
rainfall = rainfall[mask]
r_df = pandas2ri.py2ri(rainfall)
r.assign('mydata', r_df)
Explanation: Select only gridpoints > 0mm rain (wet mask) and assign it in the R environment
End of explanation
r('''
mydata <- data.frame(mydata)
coordinates(mydata) <- ~x+y
''')
Explanation: Transform dataset in R to geospatial dataset
End of explanation
cur_cmap = plt.cm.jet
plt.scatter(rainfall['x'], rainfall['y'], marker='.', c=rainfall['R'], cmap=cur_cmap)
plt.colorbar()
Explanation: Plot map with python
End of explanation
r('''
RAD24 <- read.table("./radar_sent/radar_snap_24h_2011_08_05-00_00.csv",sep=",",colClasses="numeric")
RAD24 <- as.numeric(as.vector(RAD24))
RAD24 <- RAD24[6:length(RAD24)]
png("map_24h.png",height=900,width=900)
ncuts <- 20
cuts <- seq(min(RAD24,na.rm=TRUE),max(RAD24,na.rm=TRUE),length=ncuts)
print(spplot(mydata["R"],xlab="East [m]",ylab="North [m]",key.space="right",cuts=cuts,region=TRUE,col.regions=col.palette(ncuts),main="Rainfall [mm]",scales=list(draw=TRUE)))
dev.off()
''')
Explanation: Plot the map directly in R (bypassing the python plot)
End of explanation
p_myiso = r('myiso <- variogram(R~1,mydata,width=2,cutoff=100)')
p_myiso.head()
plt.plot(p_myiso['dist'], p_myiso['gamma'], '-o')
Explanation: Generate an isotropic variogram (2km-separated lags, max 100km)
End of explanation
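For intuition, here is a pure-Python sketch of the quantity that gstat's variogram() estimates: the empirical semivariance gamma(h) = 1/(2*N(h)) * sum over pairs (z_i - z_j)^2, binned by pair separation distance. This is an illustration only, not gstat's exact implementation; the point set and values below are made up.

```python
import math

def empirical_variogram(points, values, width, cutoff):
    """Return (bin_center, gamma) pairs, binning pair distances by `width` up to `cutoff`."""
    n_bins = int(cutoff // width)
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d >= cutoff:
                continue
            b = int(d // width)
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    # semivariance per populated lag bin
    return [((b + 0.5) * width, s / (2 * c))
            for b, (s, c) in enumerate(zip(sums, counts)) if c > 0]

pts = [(0, 0), (1, 0), (0, 1), (2, 2), (3, 1)]
z = [1.0, 1.2, 0.9, 2.0, 2.3]
print(empirical_variogram(pts, z, width=1.0, cutoff=4.0))
```

The gamma values typically rise with distance (nearby points are more alike), which is the shape fitted by the variogram models later in this notebook.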
p_myiso_map = r('myisomap <- variogram(R~1,mydata,width=2,cutoff=50,map=TRUE)')
r('''
png("myvariogram_map_24h.png",height=600,width=600)
print(plot(myisomap))
dev.off()
''')
Explanation: Generate and save the 2D variogram map
End of explanation
rain_sorted = rainfall.sort_values('R', ascending=False)
rain_sorted = rain_sorted.iloc[0:1499]
rs_df = pandas2ri.py2ri(rain_sorted)
r.assign('data_sorted', rs_df)
r('''
data_sorted <- data.frame(data_sorted)
coordinates(data_sorted) <- ~x+y
''')
Explanation: Investigate the (an)isotropy of the dataset
This is only possible with up to 1499 values. Therefore we sort the rainfall values in descending order and assign the sorted dataset to the R environment.
End of explanation
r('''
hat.anis <- estimateAnisotropy(data_sorted,"R")
anis <- c(90-hat.anis$direction,1/hat.anis$ratio)
''')
Explanation: Returns the direction of minimum variability clockwise from North and the anisotropy ratio
End of explanation
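A common use of such an estimate is to rotate and rescale the coordinates so the field becomes approximately isotropic. The sketch below illustrates the idea only; angle conventions and the exact transform differ between packages, so treat this as an assumption, not intamap's implementation.

```python
import math

def isotropize(x, y, direction_deg, ratio):
    """Rotate so the major axis of variability aligns with x, then shrink that axis.

    direction_deg: assumed angle of the major axis (convention is illustrative).
    ratio: assumed minor/major range ratio (0 < ratio <= 1).
    """
    theta = math.radians(direction_deg)
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    return xr * ratio, yr

# A point 1 unit East, with a major axis pointing East and ratio 0.5:
print(isotropize(1.0, 0.0, 90.0, 0.5))
```

After such a transform, an isotropic variogram model can be fitted in the transformed coordinates.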
dir_var = r('directional_variograms <- variogram(R~1,mydata,width=2,cutoff=100,alpha=c(99.9,189.9),tol.hor=5)')
dir_1 = dir_var['dir.hor']==99.9
plt.figure()
plt.subplot(121)
plt.plot(dir_var.dist[dir_1], dir_var.gamma[dir_1])
plt.subplot(122)
plt.plot(dir_var.dist[~dir_1], dir_var.gamma[~dir_1])
Explanation: Compute directional variograms with anisotropy direction
End of explanation
r('initial_vario_sph <- vgm(psill=500,model="Sph",range=40,nugget=0)')
sph_fitted = r('fitted_vario_sph <- fit.variogram(myiso,initial_vario_sph)')
print(sph_fitted)
Explanation: Fit initial spherical variogram to isotropic variogram
End of explanation
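For reference, the two model shapes being fitted in this section can be written out directly (the actual fitting is done by gstat's fit.variogram in R; the parameter values below are illustrative, matching the initial guesses used above). Note that for the exponential model the sill is approached asymptotically, which is why the initial range is set to 40/3 (effective range is roughly three times the range parameter).

```python
import math

def spherical(h, nugget, psill, rng):
    """Spherical variogram model: reaches nugget + psill exactly at h = rng."""
    if h >= rng:
        return nugget + psill
    return nugget + psill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def exponential(h, nugget, psill, rng):
    """Exponential variogram model: approaches the sill asymptotically."""
    return nugget + psill * (1.0 - math.exp(-h / rng))

# Both rise from the nugget toward nugget + psill (the sill):
print(spherical(0.0, 0, 500, 40), spherical(40.0, 0, 500, 40))
print(round(exponential(120.0, 0, 500, 40), 1))  # ~3 ranges out, near the sill
```

The SSErr values read out later are the (weighted) sums of squared differences between these model curves and the empirical gamma values.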
r('''
png("fitted_isotropic_variogram_sph_24h.png",height=600,width=900)
print(plot(myiso,fitted_vario_sph))
dev.off()
''')
Explanation: Save image of fitted variogram
End of explanation
r('range <- fitted_vario_sph$range[2]')
Explanation: fitted range
End of explanation
r('nugget <- fitted_vario_sph$psill[1]')
Explanation: fitted nugget
End of explanation
r('sill <- sum(fitted_vario_sph$psill)')
Explanation: fitted sill
End of explanation
r('SSErr_sph <- attributes(fitted_vario_sph)$SSErr')
Explanation: sum of squared errors
End of explanation
r('initial_vario_exp <- vgm(psill=500,model="Exp",range=40/3,nugget=0)')
exp_fitted = r('fitted_vario_exp <- fit.variogram(myiso,initial_vario_exp)')
Explanation: Fit exponential model
End of explanation
r('SSErr_exp <- attributes(fitted_vario_exp)$SSErr')
Explanation: sum of squared errors
End of explanation
r('''
png("fitted_isotropic_variogram_exp_24h.png",height=600,width=900)
print(plot(myiso,fitted_vario_exp))
dev.off()
''')
Explanation: Save image
End of explanation |
9,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step18: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Batch Normalization (Added)
Implemented a batch normalization wrapper that during training accumulates and computes the batches population mean and variance. Thus, in test it only uses the final computed mean and variance for the predictions in the neural net. Parameter is_training comes from a tf.placeholder in function neural_net_batch_norm_mode_input(). This parameter was added to the conv2d_maxpool() and fully_conn() functions to be able to define if the network is training or predicting(test), so that batch normalization performs accordingly [remember that batch norm works different for training than for predicting(test)].
Additional inputs were added to the feed_dict of train_neural_net() for conv_net so that the batch normalization mode can be turn on/off or set to training/test mode.
Step23: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step26: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step29: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step35: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step38: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step40: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step41: Hyperparameters
Tune the following parameters
Step43: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step45: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step48: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 17
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
np_x = np.array(x)
norm_x = (np_x)/255 # current normalization only divides by max value
# Alternative method with range from -1 to 1:
# (x - mean)/std, assuming mean=128 and std=128 for 8-bit images,
# centers the distribution at 0 (values 0..255 map to approx -1..1)
return norm_x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
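Before the implementation, a quick numeric comparison of the two scalings discussed in its comments. The normalize() below uses simple max scaling (x/255, range 0 to 1); the alternative is a fixed standardization (x - 128)/128, which maps 8-bit pixel values roughly into -1 to 1. The sample pixel values here are illustrative.

```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=float)
print(pixels / 255)            # max scaling   -> [0, ~0.5, 1]
print((pixels - 128) / 128)    # fixed standardization -> [-1, 0, ~1]
```

Both keep the same shape as the input; only the value range differs.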
from sklearn import preprocessing
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
encode = preprocessing.LabelBinarizer()
encode.fit([0,1,2,3,4,5,6,7,8,9]) #possible values of labels to be encoded in vectors of 0's and 1's
#show the encoding that corresponds to each label
#print (encode.classes_)
#print (encode.transform([9,8,7,6,5,4,3,2,1,0]))
labels_one_hot_encode = encode.transform(x) # encodes the labels with 0,1 values based on labelID [0,9]
return labels_one_hot_encode
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
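The cell above uses sklearn's LabelBinarizer; an equivalent one-liner without sklearn indexes rows of numpy's identity matrix, since the rows of eye(10) are exactly the ten one-hot vectors for labels 0-9. A minimal sketch:

```python
import numpy as np

def one_hot_np(labels, n_classes=10):
    # row i of eye(n_classes) is the one-hot vector for label i
    return np.eye(n_classes, dtype=int)[np.asarray(labels)]

print(one_hot_np([3, 0, 9]))
```

Because the mapping is fixed by the identity matrix, it trivially returns the same encoding on every call, satisfying the consistency requirement above.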
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
import numpy as np
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
batch_size = None
return tf.placeholder(tf.float32, shape=([batch_size, image_shape[0], image_shape[1], image_shape[2]]), name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
batch_size = None
return tf.placeholder(tf.float32, shape=([batch_size, n_classes]), name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=None, name="keep_prob")
#------------------Added the Batch Normalization option to the network--------------------------------
def neural_net_batch_norm_mode_input(use_batch_norm, batch_norm_mode):
Return a Tensor for batch normalization
Tensor 'use_batch_norm': Batch Normalization on/off (True = on, False = off)
Tensor 'batch_norm_mode': Batch Normalization mode (True = net is training, False = net in test mode)
: return: Tensor for batch normalization mode
return tf.Variable(use_batch_norm, name ="use_batch_norm"), tf.placeholder(batch_norm_mode, name="batch_norm_mode")
#------------------------------------------------------------------------------------------------------
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def batch_norm_wrapper(inputs, is_training,is_conv_layer=True, decay=0.999):
Function that implements batch normalization. Stores the population mean and variance as tf.Variables,
and decides whether to use the batch statistics or the population statistics for normalization.
inputs: the dataset to learn(train)/predict(test) * weights of layer that uses this batch normalization, also used to get num_outputs
is_training: set to True to learn the population and variance during training. False to use in test dataset
decay: is a moving average decay rate to estimate the population mean and variance during training
Batch_Norm = Gamma * X + Beta <=> BN(x*weights + bias)
References:
https://gist.github.com/tomokishii/0ce3bdac1588b5cca9fa5fbdf6e1c412
http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
https://r2rt.com/implementing-batch-normalization-in-tensorflow.html
epsilon = 1e-3
scale = tf.Variable(tf.ones([inputs.get_shape()[-1]])) #gamma
beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]]))
pop_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable = False) # trainable=False: updated manually below, not by the optimizer
pop_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable = False) # trainable=False: updated manually below, not by the optimizer
if is_training:
# update/compute the population mean and variance of our total training dataset split into batches
# do this to know the value to use for the test and predictions
if is_conv_layer:
batch_mean, batch_var = tf.nn.moments(inputs,[0,1,2]) # conv layer: average over the [batch, height, width] axes
else:
batch_mean, batch_var = tf.nn.moments(inputs,[0]) # fully connected layer: average over the batch axis only
train_mean = tf.assign(pop_mean, pop_mean*decay + batch_mean*(1 - decay))
train_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
with tf.control_dependencies([train_mean, train_var]):
return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, scale, epsilon)
else:
# when in test mode we need to use the population mean and var computed/learned from training
return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, epsilon)
Explanation: Batch Normalization (Added)
Implemented a batch normalization wrapper that, during training, accumulates and computes the population mean and variance over the batches. Thus, at test time it uses only the final computed mean and variance for the predictions in the neural net. Parameter is_training comes from a tf.placeholder in function neural_net_batch_norm_mode_input(). This parameter was added to the conv2d_maxpool() and fully_conn() functions to be able to define whether the network is training or predicting (test), so that batch normalization behaves accordingly [remember that batch norm works differently during training than during prediction (test)].
Additional inputs were added to the feed_dict of train_neural_net() for conv_net so that the batch normalization mode can be turned on/off or set to training/test mode.
End of explanation
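The per-feature transform the wrapper applies is BN(x) = gamma * (x - mean) / sqrt(var + eps) + beta. A quick numpy check of that formula (training-mode batch statistics only, with gamma=1, beta=0): each output column should end up with roughly zero mean and near-unit variance. The sample matrix is made up.

```python
import numpy as np

def batch_norm_np(x, gamma=1.0, beta=0.0, eps=1e-3):
    # normalize each column (feature) using its own batch statistics
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
out = batch_norm_np(x)
print(out.mean(axis=0), out.var(axis=0))
```

The variance lands slightly below 1 because of eps; gamma and beta (the trainable scale/shift in the wrapper) let the network undo the normalization if that helps.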
#def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): #original function definition
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, is_training=True, batch_norm_on=False):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
#weights
filter_height = conv_ksize[0]
filter_width = conv_ksize[1]
color_channels = x_tensor.get_shape().as_list()[-1] # to read last value of list that contains the number of color channels
# truncated normal std dev initialization of weights
weights = tf.Variable(tf.truncated_normal([filter_height,filter_width,color_channels,conv_num_outputs], mean=0.0, stddev=0.1))
# Xavier Initialization - needs different names for vars
#weights = tf.get_variable("w_conv",shape=[filter_height,filter_width,color_channels,conv_num_outputs],initializer=tf.contrib.layers.xavier_initializer())
#bias
bias = tf.Variable(tf.zeros(conv_num_outputs))
#Convolution
#batch and channel are commonly set to 1
conv_batch_size = 1
conv_channel_size = 1
conv_strides4D = [conv_batch_size, conv_strides[0], conv_strides[1], conv_channel_size]
conv_layer = tf.nn.conv2d(x_tensor, weights, conv_strides4D, padding='SAME')
#Add Bias
conv_layer = tf.nn.bias_add(conv_layer, bias)
#Non-linear activation
conv_layer = tf.nn.relu(conv_layer)
#----------------------- Added the Batch Normalization after ReLU --------------------
if batch_norm_on:
conv_layer = batch_norm_wrapper(conv_layer, is_training, True) #true means this is a conv_layer that uses Batch norm
#conv_layer = tf.cond(is_training, lambda: batch_norm_wrapper(conv_layer, 1), lambda: conv_layer)
# Apparently doing batch norm after ReLU works well, but you might want to do ReLU again and then pooling
conv_layer = tf.nn.relu(conv_layer)
#-------------------------------------------------------------------------------------
#Max pooling
# batch and channel are commonly set to 1
pool_batch_size = 1
pool_channel_size = 1
pool_ksize4D = [pool_batch_size, pool_ksize[0], pool_ksize[1], pool_channel_size]
pool_strides4D = [1, pool_strides[0], pool_strides[1], 1]
conv_layer = tf.nn.max_pool(conv_layer, pool_ksize4D, pool_strides4D, padding='SAME')
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
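A quick sanity check on the spatial sizes this layer produces. With SAME padding the output size follows out = ceil(in / stride), so a 32x32 CIFAR image keeps its size through a stride-1 convolution and is halved by each 2x2/stride-2 max pool. A small sketch:

```python
import math

def same_out(size, stride):
    # SAME-padding output size rule: out = ceil(in / stride)
    return math.ceil(size / stride)

h = w = 32
h, w = same_out(h, 1), same_out(w, 1)   # conv, stride 1 -> 32x32
h, w = same_out(h, 2), same_out(w, 2)   # pool, stride 2 -> 16x16
print(h, w)
```

This is why three conv+pool blocks in the model below take 32x32 down to 4x4 before flattening.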
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
# First way to do this
# Need to convert the get_shape() result to int() since at this point is a class Dimension object
flat_image = np.prod(x_tensor.get_shape()[1:])
x_tensor_flatten = tf.reshape(x_tensor,[-1, int(flat_image)])
# Second way to do it
# No Need to convert the get_shape().as_list() result since it is already an int
#flat_image2 = np.prod(x_tensor.get_shape().as_list()[1:])
#x_tensor_flatten2 = tf.reshape(x_tensor,[-1, flat_image])
return x_tensor_flatten
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
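The shape arithmetic of the flatten step can be checked directly in numpy: the flattened width is np.prod(shape[1:]), and the batch dimension is preserved via -1. A minimal sketch with a made-up batch of 10x10x3 feature maps:

```python
import numpy as np

x = np.zeros((4, 10, 10, 3))    # (batch, height, width, depth)
# keep the batch axis, collapse everything else into one dimension
flat = x.reshape(-1, int(np.prod(x.shape[1:])))
print(flat.shape)
```

For this example the flattened image size is 10*10*3 = 300 per sample.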
#def fully_conn(x_tensor, num_outputs): #original function definition
def fully_conn(x_tensor, num_outputs, is_training=True, batch_norm_on=False):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
batch_size, num_inputs = x_tensor.get_shape().as_list()
# truncated normal std dev initialization of weights
weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], mean=0.0, stddev=0.1))
# Xavier Initialization
#weights = tf.get_variable("w_fc", shape=[filter_height,filter_width,color_channels,conv_num_outputs],initializer=tf.contrib.layers.xavier_initializer())
bias = tf.Variable(tf.zeros(num_outputs))
#-------------------------------------------------------------------
# Batch normalization - first attempt, left disabled: the unit test here passes no
# training/test flag, and the implementation requires the model-building code to receive
# a flag indicating whether the model is training or in test (batch norm behaves differently in each)
#epsilon = 1e-3 # epsilon for Batch Normalization - avoids div with 0
#z_BN = tf.matmul(x_tensor,weights)
#batch_mean, batch_var = tf.nn.moments(z_BN,[0])
#scale = tf.Variable(tf.ones(num_outputs))
#beta = tf.Variable(tf.zeros(num_outputs))
#fc_BN = tf.nn.batch_normalization(z_BN, batch_mean, batch_var, beta, scale, epsilon)
#fc = tf.nn.relu(fc_BN)
#-------------------------------------------------------------------
# Batch Norm wrapper
if batch_norm_on:
z_BN = tf.matmul(x_tensor,weights)
fc_BN = batch_norm_wrapper(z_BN, is_training, is_conv_layer=False)
fc = tf.nn.relu(fc_BN)
else:
#-------------------------------------------------------------------
#Normal FC - no BatchNormalization
fc = tf.matmul(x_tensor, weights) + bias
fc = tf.nn.relu(fc)
return fc
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
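Without batch normalization, fully_conn reduces to relu(x @ W + b), mapping (batch, num_inputs) to (batch, num_outputs). A numpy sketch with made-up weights:

```python
import numpy as np

def dense_relu(x, w, b):
    # fully connected layer followed by ReLU
    return np.maximum(x @ w + b, 0.0)

x = np.array([[1.0, -2.0]])             # (1, 2)
w = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, 1.0]])         # (2, 3)
b = np.zeros(3)
print(dense_relu(x, w, b))              # negative pre-activations clipped to 0
```

The TF version above differs only in how the weights are initialized (truncated normal) and in the optional batch-norm branch.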
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
batch_size, num_inputs = x_tensor.get_shape().as_list()
# truncated normal std dev initialization of weights
weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], mean=0.0, stddev=0.1))
# Xavier Initialization
#weights = tf.get_variable("w_out", shape=[filter_height,filter_width,color_channels,conv_num_outputs],initializer=tf.contrib.layers.xavier_initializer())
bias = tf.Variable(tf.zeros(num_outputs))
# Normal Linear prediction - no BN
linear_prediction = tf.matmul(x_tensor, weights) + bias #linear activation
#Batch normalization
#epsilon = 1e-3 # epsilon for Batch Normalization - avoids div with 0
#z_BN = tf.matmul(x_tensor,weights)
#batch_mean, batch_var = tf.nn.moments(z_BN,[0])
#scale = tf.Variable(tf.ones(num_outputs))
#beta = tf.Variable(tf.zeros(num_outputs))
#linear_prediction = tf.nn.batch_normalization(z_BN, batch_mean, batch_var, beta, scale, epsilon)
return linear_prediction
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
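The note above exists because tf.nn.softmax_cross_entropy_with_logits, used later when the cost is built, applies the softmax itself; handing it already-activated values would apply softmax twice. A NumPy sketch of that combined computation (illustrative only):

```python
import numpy as np

def softmax(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_from_logits(logits, labels):
    # labels are one-hot rows; the softmax happens inside, as in TF
    probs = softmax(logits)
    return -(labels * np.log(probs)).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
loss = cross_entropy_from_logits(logits, labels)
```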
End of explanation
#---------------- Added to try to implement Batch Norm -----------------------------
# issue with tensors inside the conv2d_maxpool and fully_conn being apparently
# in different graphs than the feeded x tensor
def run_conv_layers(x, is_training, batch_norm):
conv_num_outputs = [16,40,60] #[36,70,100]
conv_ksize = [[5,5],[5,5],[5,5]] #[[3,3],[3,3],[1,1]]
conv_strides = [1,1]
pool_ksize = [[2,2],[2,2],[2,2]]
pool_strides = [2,2]
x_conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides, pool_ksize[0], pool_strides, is_training, batch_norm)
x_conv = conv2d_maxpool(x_conv, conv_num_outputs[1], conv_ksize[1], conv_strides, pool_ksize[1], pool_strides, is_training, batch_norm)
x_conv = conv2d_maxpool(x_conv, conv_num_outputs[2], conv_ksize[2], conv_strides, pool_ksize[2], pool_strides, is_training, batch_norm)
return x_conv
def run_fc_layer(x_flat, keep_prob, is_training, batch_norm):
x_fc = tf.nn.dropout(fully_conn(x_flat, 1300, is_training, batch_norm), keep_prob) #1320 #320,#120
x_fc = tf.nn.dropout(fully_conn(x_fc, 685, is_training, batch_norm), keep_prob) #685 #185,#85
x_fc = tf.nn.dropout(fully_conn(x_fc, 255, is_training, batch_norm), keep_prob) #255 #55,#25
return x_fc
#-------------------------------------------------------------------------------------
#def conv_net(x, keep_prob): #original function definition
#tf.constant only added to pass the unit test cases, this should be tf.Variable
def conv_net(x, keep_prob, is_training=tf.constant(True,tf.bool), batch_norm=tf.constant(False,tf.bool)):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# (x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = [16,40,60] #[36,70,100]
conv_ksize = [[5,5],[5,5],[5,5]] #[[3,3],[3,3],[1,1]]
conv_strides = [1,1]
pool_ksize = [[2,2],[2,2],[2,2]]
pool_strides = [2,2]
#function before batch norm
#x_conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides, pool_ksize[0], pool_strides)
#x_conv = conv2d_maxpool(x_conv, conv_num_outputs[1], conv_ksize[1], conv_strides, pool_ksize[1], pool_strides)
#x_conv = conv2d_maxpool(x_conv, conv_num_outputs[2], conv_ksize[2], conv_strides, pool_ksize[2], pool_strides)
#---------------------------- Added later to try to implement Batch Norm ---------------------------------------
#Hardcoded variables that drive the batch Norm - UNFORTUNATELY cannot change values for the Test cases where is_training should be false
batch_norm_bool = False
is_training_bool = True
# Unsuccessful tf.cond to use the run_conv_layer since TF complains that the weights Tensor inside
# conv2d_maxpool must be from the same graph/group as the tensor passed x/x_tensor, same issue in fc layer
#x_conv = tf.cond(is_training, lambda:run_conv_layers(x,True,batch_norm_bool), lambda:run_conv_layers(x,False,batch_norm_bool))
x_conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides, pool_ksize[0], pool_strides, is_training_bool, batch_norm_bool)
x_conv = conv2d_maxpool(x_conv, conv_num_outputs[1], conv_ksize[1], conv_strides, pool_ksize[1], pool_strides, is_training_bool, batch_norm_bool)
x_conv = conv2d_maxpool(x_conv, conv_num_outputs[2], conv_ksize[2], conv_strides, pool_ksize[2], pool_strides, is_training_bool, batch_norm_bool)
#----------------------------------------------------------------------------------------------------------------
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_flat = flatten(x_conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# before batch norm
#x_fc = tf.nn.dropout(fully_conn(x_flat, 1300), keep_prob) #1320 #320,#120
#x_fc = tf.nn.dropout(fully_conn(x_fc, 685), keep_prob) #685 #185,#85
#x_fc = tf.nn.dropout(fully_conn(x_fc, 255), keep_prob) #255 #55,#25
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
# Unsuccessful tf.cond to use the run_fc_layer since TF complains that the weights Tensor inside
# fully_conn must be from the same graph/group as the tensor passed x_flat/x_fc
#x_fc = tf.cond(is_training, lambda: run_fc_layer(x_flat, keep_prob, True,batch_norm_bool), lambda: run_fc_layer(x_flat, keep_prob,False,batch_norm_bool) )
x_fc = tf.nn.dropout(fully_conn(x_flat, 120, is_training_bool, batch_norm_bool), keep_prob) #1320 #320,#120
x_fc = tf.nn.dropout(fully_conn(x_fc, 85, is_training_bool, batch_norm_bool), keep_prob) #685 #185,#85
x_fc = tf.nn.dropout(fully_conn(x_fc, 25, is_training_bool, batch_norm_bool), keep_prob) #255 #55,#25
#----------------------------------------------------------------------------------------------------------------
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
num_outputs_pred = 10
x_predict =tf.nn.dropout(output(x_fc, num_outputs_pred), keep_prob)
# TODO: return output
return x_predict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
#-----------------Added the Batch Normalization Parameters------------
#currently not used, missing connection from the tf.Variable and the core of the network
batch_norm_on, batch_norm_mode = neural_net_batch_norm_mode_input(True,True)
#---------------------------------------------------------------------
# Model
#logits = conv_net(x, keep_prob) # original call to conv_net
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
logits = conv_net(x, keep_prob, batch_norm_mode, batch_norm_on)
#---------------------------------------------------------------------------------------------------------
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
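With the sizes chosen in conv_net above — SAME-padded stride-1 convolutions and three 2x2, stride-2 max pools — each pool halves the 32x32 input: 32 → 16 → 8 → 4, so the flatten layer hands the fully connected layers 4 * 4 * 60 = 960 features (60 channels from the last conv layer). A quick check of that arithmetic (assuming the pooling halves each spatial dimension, which both SAME and VALID 2x2/stride-2 pooling do for these even sizes):

```python
def pooled_side(side, ksize=2, stride=2):
    # SAME-padded stride-1 convolutions keep the side length;
    # only the max pool shrinks it (VALID formula, which matches
    # SAME pooling for these even sizes).
    return (side - ksize) // stride + 1

side = 32
for _ in range(3):  # three conv2d_maxpool layers
    side = pooled_side(side)

flat_features = side * side * 60
```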
End of explanation
#def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): #original function declaration
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch, is_training=True, use_batch_norm=False):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
# train_feed_dict ={x: feature_batch, y: label_batch, keep_prob: keep_probability} #original train dict
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
train_feed_dict ={x: feature_batch, y: label_batch, keep_prob: keep_probability, batch_norm_mode: is_training, batch_norm_on: use_batch_norm}
#---------------------------------------------------------------------------------------------------------
session.run(optimizer, feed_dict=train_feed_dict)
#pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
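The single optimization above boils down to: run the optimizer once on one batch. Stripped of TensorFlow, one such step for a toy linear model looks like this (purely illustrative):

```python
import numpy as np

def sgd_step(w, X, y, lr=0.1):
    # One gradient-descent step on mean squared error.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))           # one "batch" of features
y = X @ np.array([1.0, -2.0, 0.5])     # targets from a known linear rule
w = np.zeros(3)
loss_before = np.mean((X @ w - y) ** 2)
w = sgd_step(w, X, y)
loss_after = np.mean((X @ w - y) ** 2)
```

Each call nudges the parameters downhill; train_neural_network does the same thing, with AdamOptimizer computing the update from the batch in the feed_dict.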
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
#pass
# train_feed_dict = {x: feature_batch, y: label_batch, keep_prob: 0.75} # original train dict
# val_feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0} # original val dict
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
train_feed_dict = {x: feature_batch, y: label_batch, keep_prob: 0.75, batch_norm_mode: True, batch_norm_on: False}
val_feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0, batch_norm_mode: False, batch_norm_on: False}
#---------------------------------------------------------------------------------------------------------
validation_cost = session.run(cost, feed_dict=val_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=val_feed_dict)
train_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
print('Train_acc: {:8.14f} | Val_acc: {:8.14f} | loss: {:8.14f}'.format(train_accuracy, validation_accuracy, validation_cost))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
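The accuracy tensor evaluated here is just argmax agreement averaged over the batch; the same computation in NumPy (illustrative):

```python
import numpy as np

def accuracy(logits, one_hot_labels):
    pred = logits.argmax(axis=1)
    truth = one_hot_labels.argmax(axis=1)
    return (pred == truth).mean()

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.4, 0.6]])
labels = np.array([[0, 1], [1, 0], [1, 0], [0, 1]])
acc = accuracy(logits, labels)  # 3 of 4 predictions match
```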
End of explanation
# TODO: Tune Parameters
epochs = 25
batch_size = 64
keep_probability = 0.75 #test with 0.75 seemed better
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
# ACTUAL CONTROL FOR BATCH NORM IS INSIDE 'conv_net() -> batch_norm_bool, is_training_bool variables'
# Tried to add the Batch Normalization parameters, but couldn't make the connection from the inside of
# conv_net to the rest of the functions; everything is laid out to work except for the step in which
# a tf.bool Tensor must decide (tried tf.cond) whether to use batch norm or not
batch_norm_is_training = True
use_batch_norm = False
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
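A quick way to sanity-check these choices is to count gradient updates. Assuming roughly 9,000 training images per CIFAR-10 batch file after the validation split (an assumption about the helper), the numbers above give:

```python
import math

def total_updates(epochs, n_batch_files, samples_per_file, batch_size):
    # Each batch file yields ceil(samples / batch_size) optimizer steps per epoch.
    updates_per_file = math.ceil(samples_per_file / batch_size)
    return epochs * n_batch_files * updates_per_file

# 25 epochs over 5 batch files at batch_size 64:
n_updates = total_updates(25, 5, 9000, 64)
```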
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
# ORIGINAL call to train_neural_network
#train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels, batch_norm_is_training, use_batch_norm)
#---------------------------------------------------------------------------------------------------------
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
# ORIGINAL call to train_neural_network
#train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels, batch_norm_is_training, use_batch_norm)
#---------------------------------------------------------------------------------------------------------
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
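One subtlety in test_model above: it averages per-batch accuracies (test_batch_acc_total / test_batch_count), which equals the overall accuracy only when every batch has the same size. A small NumPy check of the difference (illustrative):

```python
import numpy as np

correct = np.array([1, 1, 0, 1, 0, 1, 1])  # per-sample correctness flags
batches = [correct[:4], correct[4:]]       # unequal batch sizes: 4 and 3

overall = correct.mean()                          # 5/7
per_batch = np.mean([b.mean() for b in batches])  # mean of 3/4 and 2/3
```

With equal-sized batches (a possible smaller remainder batch aside) the two averages agree, which is why the shortcut is harmless here.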
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Two Vectors
Step2: Calculate Dot Product (Method 1)
Step3: Calculate Dot Product (Method 2) | Python Code:
# Load library
import numpy as np
Explanation: Title: Calculate Dot Product Of Two Vectors
Slug: calculate_dot_product_of_two_vectors
Summary: How to calculate the dot product of two vectors in Python.
Date: 2017-09-02 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
# Create two vectors
vector_a = np.array([1,2,3])
vector_b = np.array([4,5,6])
Explanation: Create Two Vectors
End of explanation
# Calculate dot product
np.dot(vector_a, vector_b)
Explanation: Calculate Dot Product (Method 1)
End of explanation
# Calculate dot product
vector_a @ vector_b
Explanation: Calculate Dot Product (Method 2)
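Both methods compute the same sum of elementwise products, 1*4 + 2*5 + 3*6 = 32; spelled out directly from the definition:

```python
import numpy as np

vector_a = np.array([1, 2, 3])
vector_b = np.array([4, 5, 6])

# Sum of elementwise products: 4 + 10 + 18 = 32
dot = sum(a * b for a, b in zip(vector_a, vector_b))
```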
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prediction (out of sample)
Step1: Artificial data
Step2: Estimation
Step3: In-sample prediction
Step4: Create a new sample of explanatory variables Xnew, predict and plot
Step5: Plot comparison
Step6: Predicting with Formulas
Using formulas can make both estimation and prediction a lot easier
Step7: We use the I to indicate use of the Identity transform. Ie., we don't want any expansion magic from using **2
Step8: Now we only have to pass the single variable and we get the transformed right-hand side variables automatically | Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
Explanation: Prediction (out of sample)
End of explanation
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1-5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
Explanation: Artificial data
End of explanation
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
Explanation: Estimation
End of explanation
ypred = olsres.predict(X)
print(ypred)
Explanation: In-sample prediction
End of explanation
x1n = np.linspace(20.5,25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n-5)**2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
Explanation: Create a new sample of explanatory variables Xnew, predict and plot
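Under the hood, prediction is just the design matrix times the coefficients. Using the true beta from the data-generating process above (olsres.predict uses the estimated parameters instead), the same computation by hand:

```python
import numpy as np

x1n = np.linspace(20.5, 25, 10)
# column_stack plus a leading column of ones replicates sm.add_constant
Xnew = np.column_stack((np.ones_like(x1n), x1n, np.sin(x1n), (x1n - 5) ** 2))
beta = np.array([5., 0.5, 0.5, -0.02])
ynew_by_hand = Xnew @ beta
```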
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x1, y, 'o', label="Data")
ax.plot(x1, y_true, 'b-', label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), 'r', label="OLS prediction")
ax.legend(loc="best");
Explanation: Plot comparison
End of explanation
from statsmodels.formula.api import ols
data = {"x1" : x1, "y" : y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
Explanation: Predicting with Formulas
Using formulas can make both estimation and prediction a lot easier
End of explanation
res.params
Explanation: We use the I to indicate use of the Identity transform. Ie., we don't want any expansion magic from using **2
End of explanation
res.predict(exog=dict(x1=x1n))
Explanation: Now we only have to pass the single variable and we get the transformed right-hand side variables automatically
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
The seminator() function can also perform complementation of automata. This works by converting the input into a semi-deterministic TBA, and then applying the NCSB construction, that produces a TBA. Two versions of the semi-deterministic construction are available
Step1: Here is a case where it is the opposite | Python Code:
f = spot.formula('G(a | (b U (Gc | Gd)))')
aut = f.translate(); aut
neg1 = seminator(aut, complement="spot"); neg1
neg2 = seminator(aut, complement="pldi"); neg2
nf = spot.formula_Not(f)
assert neg1.equivalent_to(nf)
assert neg2.equivalent_to(nf)
Explanation: The seminator() function can also perform complementation of automata. This works by converting the input into a semi-deterministic TBA, and then applying the NCSB construction, that produces a TBA. Two versions of the semi-deterministic construction are available:
- "spot" is the implementation available in Spot, which is a transition-based adaptation of the NCSB construction described in this TACAS'16 paper
- "pldi" is a variant described in section 5 of this PLDI'18 paper, implemented in Seminator for transition-based automata.
If complement=True is passed to the seminator() function, the smallest output produced by these two constructions is used. To force one construction, use complement="spot" or complement="pldi".
The postproc_comp argument controls whether the result of the NCSB complementations should be postprocessed or not. It defaults to True unless the pure=True option is given.
Here is a case where the output of the "spot" variant is smaller than that of the "pldi" one (after simplification of the result):
End of explanation
f2 = spot.formula('G(a | X(!a | (a U (a & !b & X(a & b)))))')
aut2 = f2.translate(); aut2
neg3 = seminator(aut2, complement="spot"); neg3
neg4 = seminator(aut2, complement="pldi"); neg4
nf2 = spot.formula_Not(f2)
assert neg3.equivalent_to(nf2)
assert neg4.equivalent_to(nf2)
Explanation: Here is a case where it is the opposite:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step7: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step9: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step11: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step13: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step16: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step19: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step22: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step25: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step28: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step31: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step34: Build the Neural Network
Apply the functions you implemented above to
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step40: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step42: Save Parameters
Save the batch_size and save_path parameters for inference.
Step44: Checkpoint
Step47: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
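Those three steps fit in a few lines; a sketch with a toy vocabulary (the real vocab_to_int comes from the preprocessed data):

```python
def sentence_to_seq(sentence, vocab_to_int):
    # lowercase, split, map each word to its id, unknowns to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13}
seq = sentence_to_seq('He saw a old truck', vocab_to_int)  # 'old' maps to <UNK>
```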
Step49: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
x = [[source_vocab_to_int.get(word, 0) for word in sentence.split()] \
for sentence in source_text.split('\n')]
y = [[target_vocab_to_int.get(word, 0) for word in sentence.split()] \
for sentence in target_text.split('\n')]
source_id_text = []
target_id_text = []
# Length filtering found in a forum post. necessary?
# n1 = len(x[i])
# n2 = len(y[i])
# n = n1 if n1 < n2 else n2
# if abs(n1 - n2) <= 0.3 * n:
#     if n1 <= 17 and n2 <= 17:
for i in range(len(x)):
source_id_text.append(x[i])
target_id_text.append(y[i] + [target_vocab_to_int['<EOS>']])
return (source_id_text, target_id_text)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
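A toy run of this mapping (with small hypothetical vocabularies) makes the <EOS> handling concrete:

```python
source_vocab = {'<UNK>': 0, 'new': 1, 'jersey': 2, 'is': 3, 'cold': 4}
target_vocab = {'<UNK>': 0, '<EOS>': 1, 'new': 2, 'jersey': 3, 'est': 4, 'froid': 5}

source_ids = [source_vocab.get(w, source_vocab['<UNK>'])
              for w in 'new jersey is cold'.split()]
target_ids = [target_vocab.get(w, target_vocab['<UNK>'])
              for w in 'new jersey est froid'.split()]
target_ids.append(target_vocab['<EOS>'])  # only the target side gets <EOS>
```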
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input_text = tf.placeholder(tf.int32, [None, None], name="input")
target_text = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return input_text, target_text, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
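The same transformation on plain Python lists, just for intuition (the ids here are made up):

```python
GO = 1                                    # hypothetical <GO> id
batch = [[4, 5, 6, 3], [7, 8, 9, 3]]      # two target sequences ending in <EOS> = 3
# drop the last id of each sequence, then prepend <GO> -- the same thing
# tf.strided_slice + tf.concat do above, but on tensors
dec_input = [[GO] + seq[:-1] for seq in batch]
print(dec_input)  # [[1, 4, 5, 6], [1, 7, 8, 9]]
```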
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
_, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_dec_fm = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_logits_drop, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_dec_fm, \
dec_embed_input, sequence_length, scope=decoding_scope)
# apply dropout to the decoder outputs before the output layer
train_logits = output_fn(tf.nn.dropout(train_logits_drop, keep_prob))
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of the decoded sequence
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
# dropout is not applied at inference: keep_prob is fed as 1.0 when generating predictions
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# build the decoder cell and share its weights between training and inference,
# following the steps described in the explanation below
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
    # transform decoder outputs into class logits over the target vocabulary
    output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
    train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
                                        decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
    infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
                                        target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
                                        sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# encoder: embed the input word ids and run them through the RNN
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# decoder: prepend <GO>, embed the target ids, then decode from the encoded state
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
return decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size,
                      sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
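One plausible starting configuration, purely as an illustration (these are not tuned values from the original project):

```python
epochs = 10
batch_size = 256
rnn_size = 256
num_layers = 2
encoding_embedding_size = 128
decoding_embedding_size = 128
learning_rate = 0.001
keep_probability = 0.7
```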
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
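A quick numeric check of the gradient-clipping step used in the graph above — each gradient value is limited to the interval [-1, 1] (shown here with numpy rather than tf.clip_by_value):

```python
import numpy as np

grads = np.array([-3.0, -0.5, 0.2, 7.0])
clipped = np.clip(grads, -1.0, 1.0)   # same idea as tf.clip_by_value(grad, -1., 1.)
print(clipped)  # [-1.  -0.5  0.2  1. ]
```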
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target_batch,
[(0,0),(0,max_seq - target_batch.shape[1]), (0,0)],
'constant')
if max_seq - batch_train_logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
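For intuition about the padding inside get_accuracy above, here is the same np.pad call on a small array (shapes only — the real code pads the sequence axis of targets and logits so they can be compared element-wise):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
padded = np.pad(a, [(0, 0), (0, 2)], 'constant')  # pad 2 zero-columns on the right
print(padded.shape)  # (2, 4)
```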
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# lowercase the sentence, split on whitespace, and map unknown words to the <UNK> id
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
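A tiny illustration of the <UNK> fallback with dict.get (the vocabulary here is made up):

```python
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2}  # hypothetical vocabulary
sentence = 'He saw aliens .'
ids = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
print(ids)  # [1, 2, 0, 0] -- 'aliens' and '.' fall back to <UNK>
```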
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
Objects and Data Structures Assessment Test
Test your knowledge.
Answer the following questions
Write a brief description of all the following Object Types and Data Structures we've learned about
Step1: Explain what the cell below will produce and why. Can you change it so the answer is correct?
Step2: Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5
Step3: What is the type of the result of the expression 3 + 1.5 + 4?
What would you use to find a number’s square root, as well as its square?
Step4: Strings
Given the string 'hello' give an index command that returns 'e'. Use the code below
Step5: Reverse the string 'hello' using indexing
Step6: Given the string hello, give two methods of producing the letter 'o' using indexing.
Step7: Lists
Build this list [0,0,0] two separate ways.
Step8: Reassign 'hello' in this nested list to say 'goodbye' item in this list
Step9: Sort the list below
Step10: Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries
Step11: Can you sort a dictionary? Why or why not?
Dictionaries themselves are unordered mappings (in Python 2), so they cannot be sorted in place; you can, however, sort their keys or items with sorted(d) or sorted(d.items()).
Tuples
What is the major difference between tuples and lists?
How do you create a tuple?
Sets
What is unique about a set?
Use a set to find the unique values of the list below
Step12: Booleans
For the following quiz questions, we will get a preview of comparison operators
Step13: Final Question | Python Code:
print 10**2 * 100/100 + 5.75 - 5.5
Explanation: Objects and Data Structures Assessment Test
Test your knowledge.
Answer the following questions
Write a brief description of all the following Object Types and Data Structures we've learned about:
Numbers:
Strings:
Lists:
Tuples:
Dictionaries:
Numbers
Write an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.
Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
End of explanation
2.0/3
Explanation: Explain what the cell below will produce and why. Can you change it so the answer is correct?
End of explanation
print 4*(6+5)
print 4*6+5
print 4+6*5
print 3+1.5+4
Explanation: Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5
End of explanation
print 2**(0.5)
print 2**2
Explanation: What is the type of the result of the expression 3 + 1.5 + 4?
What would you use to find a number’s square root, as well as its square?
End of explanation
s = 'hello'
# Print out 'e' using indexing
print s[1]
# Code here
Explanation: Strings
Given the string 'hello' give an index command that returns 'e'. Use the code below:
End of explanation
s ='hello'
# Reverse the string using indexing
print s[::-1]
print s[:3:-1]
# Code here
Explanation: Reverse the string 'hello' using indexing:
End of explanation
s ='hello'
# Print out the
print s[4]
print s[-1]
# Code here
Explanation: Given the string hello, give two methods of producing the letter 'o' using indexing.
End of explanation
a = list([0,0,0])
print a
a = list([0,0])
print a
a.append(0)
print a
Explanation: Lists
Build this list [0,0,0] two separate ways.
End of explanation
l = [1,2,[3,4,'hello']]
l[2][2] = 'goodbye'
print l
Explanation: Reassign 'hello' in this nested list to say 'goodbye' item in this list:
End of explanation
l = [3,4,5,5,6,1]
print l
l.sort()
print l
l = [3,4,5,5,6,1]
print sorted(l)
print l
Explanation: Sort the list below:
End of explanation
d = {'simple_key':'hello'}
# Grab 'hello'
print d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
print d['k1']['k2']
# Getting a little tricker
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
# Grab hello
print d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
print d['k1'][2]['k2'][1]['tough'][2][0]
Explanation: Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries:
End of explanation
l = [1,2,2,33,4,4,11,22,3,3,2]
set(l)
Explanation: Can you sort a dictionary? Why or why not?
Dictionaries themselves are unordered mappings (in Python 2), so they cannot be sorted in place; you can, however, sort their keys or items with sorted(d) or sorted(d.items()).
Tuples
What is the major difference between tuples and lists?
How do you create a tuple?
Sets
What is unique about a set?
Use a set to find the unique values of the list below:
End of explanation
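An illustrative answer to the tuple questions above (not part of the original quiz): tuples are immutable while lists are mutable, and a tuple is created with parentheses or simply commas.

```python
t = (1, 2, 3)        # parentheses
also_t = 1, 2, 3     # a bare comma-separated sequence also makes a tuple
lst = [1, 2, 3]
lst[0] = 99          # lists support item assignment
try:
    t[0] = 99        # tuples do not
except TypeError:
    print('tuples are immutable')
```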
# Answer before running cell
2 > 3
# Answer before running cell
3 <= 2
# Answer before running cell
3 == 2.0
# Answer before running cell
3.0 == 3
# Answer before running cell
4**0.5 != 2
Explanation: Booleans
For the following quiz questions, we will get a preview of comparison operators:
<table class="table table-bordered">
<tr>
<th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th>
</tr>
<tr>
<td>==</td>
<td>If the values of two operands are equal, then the condition becomes true.</td>
<td> (a == b) is not true.</td>
</tr>
<tr>
<td>!=</td>
<td>If values of two operands are not equal, then condition becomes true.</td>
</tr>
<tr>
<td><></td>
<td>If values of two operands are not equal, then condition becomes true.</td>
<td> (a <> b) is true. This is similar to != operator.</td>
</tr>
<tr>
<td>></td>
<td>If the value of left operand is greater than the value of right operand, then condition becomes true.</td>
<td> (a > b) is not true.</td>
</tr>
<tr>
<td><</td>
<td>If the value of left operand is less than the value of right operand, then condition becomes true.</td>
<td> (a < b) is true.</td>
</tr>
<tr>
<td>>=</td>
<td>If the value of left operand is greater than or equal to the value of right operand, then condition becomes true.</td>
<td> (a >= b) is not true. </td>
</tr>
<tr>
<td><=</td>
<td>If the value of left operand is less than or equal to the value of right operand, then condition becomes true.</td>
<td> (a <= b) is true. </td>
</tr>
</table>
What will be the resulting Boolean of the following pieces of code (answer fist then check by typing it in!)
End of explanation
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
#True or False?
l_one[2][0] >= l_two[2]['k1']
Explanation: Final Question: What is the boolean output of the cell block below?
End of explanation |
Testing look-elsewhere effect by creating 2d chi-square random fields with a Gaussian Process
by Kyle Cranmer, Dec 7, 2015
The correction for 2d look-elsewhere effect presented in
Estimating the significance of a signal in a multi-dimensional search by Ofer Vitells and Eilam Gross http
Step1: The main trick we will use is that a chi-square distribution for one degree of freedom is the same as the distribution of $x^2$ if $x$ is normally distributed. Here's a quick demonstration of that
Step2: Ok, now to the Gaussian processes.
Step3: Now lets histogram the values of the random field.
Don't get confused here... if you pick a single point and histogram the value of over many instances, you expect a Gaussian. However, for a single instance, you don't expect the histogram for the value of the field to be Gaussian (because of the correlations). Thought experiments
Step4: Ok, now let's repeat that several times and test lee2d
Step5: Generate 25 realizations of the GP, calculate the Euler characteristic for two thresholds, and use the mean of those Euler characteristics to estimate $N_1$ and $N_2$
Step6: With estimates of $N_1$ and $N_2$ predict the global p-value vs. u
Step7: Generate 5000 instances of the Gaussian Process, find maximum local significance for each, and check the prediction for the LEE-corrected global p-value
Step8: Study statistical uncertainty
Outline | Python Code:
%pylab inline --no-import-all
Explanation: Testing look-elsewhere effect by creating 2d chi-square random fields with a Gaussian Process
by Kyle Cranmer, Dec 7, 2015
The correction for 2d look-elsewhere effect presented in
Estimating the significance of a signal in a multi-dimensional search by Ofer Vitells and Eilam Gross http://arxiv.org/pdf/1105.4355v1.pdf
is based on the fact that the test statistic
\begin{equation}
q(\nu_1, \nu_2) = -2 \log \frac{ \max_{\theta} L(\mu=0, \nu_1, \nu_2, \theta)}{ \max_{\mu, \theta} L(\mu, \nu_1, \nu_2, \theta)}
\end{equation}
is a chi-square random field (with 1 degree of freedom). That means that, for any point in $\nu_1, \nu_2$, the quantity $q(\nu_1, \nu_2)$ would have a chi-square distribution if you repeated the experiment many times.
That is what you expect if you have a background model $p_b(x|\theta)$ and you look for a signal on top of it with signal strength $\mu$. Creating that scan is somewhat time consuming, so here we make realizations of a chi-square random field by using a Gaussian Process.
The main trick we will use is that a chi-square distribution for one degree of freedom is the same as the distribution of $x^2$ if $x$ is normally distributed. As you might have guessed, a Gaussian Process (GP) is like a chi-square random field, but it is Gaussian-distributed at each point.
Note, the distributions are not independent at each point; there is some covariance. So if $q(\nu_1, \nu_2)$ is high at one point, you can expect it to be high nearby. We can control this behavior via the GP's kernel.
For more on the theory of Gaussian Processes, the best resource is available for free online: Rasmussen & Williams (2006). We will use george -- a nice python package for Gaussian Processes (GP).
End of explanation
from scipy.stats import chi2, norm
chi2_array = chi2.rvs(1, size=10000)
norm_array = norm.rvs(size=10000)
_ = plt.hist(chi2_array, bins=100, alpha=.5, label='chi-square')
_ = plt.hist(norm_array**2, bins=100, alpha=.5, color='r', label='x^2')
plt.yscale('log', nonposy='clip')
plt.legend(('chi-square', 'x^2'))
#plt.semilogy()
Explanation: The main trick we will use is that a chi-square distribution for one degree of freedom is the same as the distribution of $x^2$ if $x$ is normally distributed. Here's a quick demonstration of that:
End of explanation
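The same fact can be checked analytically instead of with histograms: for X ~ N(0,1), P(X² > u) = 2·P(X > √u), which is exactly the χ²₁ survival function:

```python
import numpy as np
from scipy.stats import chi2, norm

u = 4.0
p_from_normal = 2 * norm.sf(np.sqrt(u))  # P(|X| > sqrt(u)) for a standard normal
p_from_chi2 = chi2.sf(u, 1)              # P(chi2_1 > u)
print(abs(p_from_normal - p_from_chi2) < 1e-12)  # True
```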
import george
from george.kernels import ExpSquaredKernel
length_scale_of_correlation = 0.1
kernel = ExpSquaredKernel(length_scale_of_correlation, ndim=2)
# Create the Gaussian process
# gp = george.GP(kernel)
gp = george.GP(kernel, solver=george.HODLRSolver) #faster
n_scan_points=50
aspect_ratio = 10. # make excesses look like stripes
x_scan = np.arange(0,aspect_ratio,aspect_ratio/n_scan_points)
y_scan = np.arange(0,1,1./n_scan_points)
xx, yy = np.meshgrid(x_scan, y_scan)
# reformat the independent coordinates where we evaluate the GP
indep = np.vstack((np.hstack(xx),np.hstack(yy))).T
# illustration of what is being done here
np.vstack([[1,2],[3,4]]).T
# slow part: pre-compute internal stuff for the GP
gp.compute(indep)
# evaluate one realization of the GP
z = gp.sample(indep)
# reformat output for plotting
zz = z.reshape((n_scan_points,n_scan_points))
# plot the chi-square random field
plt.imshow(zz**2, cmap='gray')
plt.colorbar()
Explanation: Ok, now to the Gaussian processes.
End of explanation
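For intuition about the kernel, here is a hand-rolled version of the squared-exponential correlation (a sketch assuming k(r²) = exp(-r²/(2ℓ)), the usual convention; george's internal scaling may differ slightly):

```python
import numpy as np

def exp_squared(r2, length_scale):
    # correlation between two points separated by squared distance r2
    return np.exp(-0.5 * r2 / length_scale)

print(exp_squared(0.0, 0.1))                           # 1.0: perfectly correlated at zero separation
print(exp_squared(1.0, 0.1) < exp_squared(0.01, 0.1))  # True: correlation decays with distance
```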
# plot the gaussian distributed x and chi-square distributed x**2
plt.subplot(1,2,1)
count, edges, patches = plt.hist(np.hstack(zz), bins=100)
plt.xlabel('z')
plt.subplot(1,2,2)
count, edges, patches = plt.hist(np.hstack(zz)**2, bins=100)
plt.xlabel('q=z**2')
plt.yscale('log', nonposy='clip')
Explanation: Now let's histogram the values of the random field.
Don't get confused here... if you pick a single point and histogram the value of the field over many instances, you expect a Gaussian. However, for a single instance, you don't expect the histogram for the value of the field to be Gaussian (because of the correlations). Thought experiments: if you make length_scale_of_correlation very small, then each point is essentially independent and you do expect to see a Gaussian; however, if length_scale_of_correlation is very large then you expect the field to be nearly constant and the histogram below would be a delta function.
End of explanation
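The thought experiment can be checked numerically without a GP library: draw one realization of a strongly correlated Gaussian vector and see that its values barely spread (the covariance here is a toy stand-in for a very large length scale):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200
# nearly perfect correlation between all components ~ very large length scale
cov = 0.999 * np.ones((n, n)) + 0.001 * np.eye(n)
sample = rng.multivariate_normal(np.zeros(n), cov)
print(np.std(sample) < 0.5)  # True: one realization is nearly constant
```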
from lee2d import *
from scipy.ndimage import grey_closing, binary_closing
def fill_holes(array):
zero_array = array==0.
temp = grey_closing(array, size=2)*zero_array
return temp+array
Explanation: Ok, now let's repeat that several times and test lee2d
End of explanation
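For intuition about what calculate_euler_characteristic returns: the Euler characteristic of a binary excursion set is (number of connected components) − (number of holes). A minimal sketch with scipy.ndimage (assuming 8-connectivity for the set and 4-connectivity for its complement, the standard complementary pair):

```python
import numpy as np
from scipy import ndimage

ring = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]])
_, n_components = ndimage.label(ring, structure=np.ones((3, 3)))  # 8-connected foreground
_, n_bg_regions = ndimage.label(1 - ring)                         # 4-connected background
n_holes = n_bg_regions - 1            # subtract the outer background region
phi = n_components - n_holes
print(phi)  # 0: one component with one hole
```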
n_samples = 100
z_array = gp.sample(indep,n_samples)
q_max = np.zeros(n_samples)
phis = np.zeros((n_samples,2))
u1,u2 = 0.5, 1.
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
for scan_no, z in enumerate(z_array):
scan = z.reshape((n_scan_points,n_scan_points))**2
q_max[scan_no] = np.max(scan)
# fill holes from failures in original likelihood
scan = fill_holes(scan)
#get excursion sets above those two levels
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan>u2) + 0.
#print '\nu1,u2 = ', u1, u2
#print 'diff = ', np.sum(exc1), np.sum(exc2)
if scan_no < n_plots:
aspect = 1.
plt.subplot(n_plots,3,3*scan_no+1)
aspect = 1.*scan.shape[0]/scan.shape[1]
plt.imshow(scan.T, cmap='gray', aspect=aspect)
plt.subplot(n_plots,3,3*scan_no+2)
plt.imshow(exc1.T, cmap='gray', aspect=aspect, interpolation='none')
plt.subplot(n_plots,3,3*scan_no+3)
plt.imshow(exc2.T, cmap='gray', aspect=aspect, interpolation='none')
phi1 = calculate_euler_characteristic(exc1)
phi2 = calculate_euler_characteristic(exc2)
#print 'phi1, phi2 = ', phi1, phi2
#print 'q_max = ', np.max(scan)
phis[scan_no] = [phi1, phi2]
plt.savefig('chi-square-random-fields.png')
exp_phi_1, exp_phi_2 = np.mean(phis[:,0]), np.mean(phis[:,1])
exp_phi_1, exp_phi_2
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=exp_phi_1, exp_phi_2=exp_phi_2)
print n1, n2
Explanation: Generate 25 realizations of the GP, calculate the Euler characteristic for two thresholds, and use the mean of those Euler characteristics to estimate $N_1$ and $N_2$
End of explanation
u = np.linspace(5,25,100)
global_p = global_pvalue(u,n1,n2)
Explanation: With estimates of $N_1$ and $N_2$ predict the global p-value vs. u
End of explanation
n_samples = 5000
z_array = gp.sample(indep,n_samples)
q_max = np.zeros(n_samples)
for scan_no, z in enumerate(z_array):
scan = z.reshape((n_scan_points,n_scan_points))**2
q_max[scan_no] = np.max(scan)
counts, edges, patches = plt.hist(q_max, bins=30)  # plt.hist returns (counts, bin edges, patches)
icdf = 1.-np.cumsum(counts/n_samples)
icdf = np.hstack((1.,icdf))
icdf_error = np.sqrt(np.cumsum(counts))/n_samples
icdf_error = np.hstack((0.,icdf_error))
plt.xlabel('q_max')
plt.ylabel('counts / bin')
# plot the p-value
plt.subplot(121)
plt.plot(edges,icdf, c='r')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p)
plt.xlabel('u')
plt.ylabel('P(q_max >u)')
plt.xlim(0,25)
plt.subplot(122)
plt.plot(edges,icdf, c='r', label='toys')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p, label='prediction')
plt.xlabel('u')
plt.legend(('toys', 'prediction'))
#plt.ylabel('P(q>u)')
plt.ylim(1E-3,10)
plt.xlim(0,25)
plt.semilogy()
Explanation: Generate 5000 instances of the Gaussian Process, find maximum local significance for each, and check the prediction for the LEE-corrected global p-value
End of explanation
from scipy.stats import poisson
n_samples = 1000
z_array = gp.sample(indep,n_samples)
phis = np.zeros((n_samples,2))
for scan_no, z in enumerate(z_array):
scan = z.reshape((n_scan_points,n_scan_points))**2
#get excursion sets above those two levels
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan>u2) + 0.
phi1 = calculate_euler_characteristic(exc1)
phi2 = calculate_euler_characteristic(exc2)
phis[scan_no] = [phi1, phi2]
bins = np.arange(0,25)
counts, bins, patches = plt.hist(phis[:,0], bins=bins, normed=True, alpha=.3, color='b')
_ = plt.hist(phis[:,1], bins=bins, normed=True,alpha=.3, color='r')
plt.plot(bins,poisson.pmf(bins,np.mean(phis[:,0])), c='b')
plt.plot(bins,poisson.pmf(bins,np.mean(phis[:,1])), c='r')
plt.xlabel('phi_i')
plt.legend(('obs phi1', 'obs phi2', 'poisson(mean(phi1)', 'poisson(mean(phi2))'), loc='upper left')
print 'Check Poisson phi1', np.mean(phis[:,0]), np.std(phis[:,0]), np.sqrt(np.mean(phis[:,0]))
print 'Check Poisson phi2', np.mean(phis[:,1]), np.std(phis[:,1]), np.sqrt(np.mean(phis[:,1]))
print 'correlation coefficients:'
print np.corrcoef(phis[:,0], phis[:,1])
print 'covariance:'
print np.cov(phis[:,0], phis[:,1])
x, y = np.random.multivariate_normal([np.mean(phis[:,0]),np.mean(phis[:,1])], np.cov(phis[:,0], phis[:,1]), 5000).T
_ = plt.scatter(phis[:,0], phis[:,1], alpha=0.1)
plt.plot(x, y, 'x', alpha=0.1)
plt.axis('equal')
plt.xlabel('phi_0')
plt.ylabel('phi_1')
toy_n1, toy_n2 = np.zeros(x.size),np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
toy_n1[i] = n1
toy_n2[i] = n2
plt.scatter(toy_n1, toy_n2, alpha=.1)
plt.xlabel('n1')
plt.ylabel('n2')
# now propagate error exp_phi_1 and exp_phi_2 (by dividing cov matrix by n_samples) including correlations
x, y = np.random.multivariate_normal([np.mean(phis[:,0]),np.mean(phis[:,1])],
np.cov(phis[:,0], phis[:,1])/n_samples,
5000).T
'''
# check consistency with next cell by using diagonal covariance
dummy_cov = np.cov(phis[:,0], phis[:,1])/n_samples
dummy_cov[0,1]=0
dummy_cov[1,0]=0
print dummy_cov
x, y = np.random.multivariate_normal([np.mean(phis[:,0]),np.mean(phis[:,1])],
dummy_cov,
5000).T
'''
toy_global_p = np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
u = 16
#global_p = global_pvalue(u,n1,n2)
toy_global_p[i] = global_pvalue(u,n1,n2)
# now propagate error assuming uncorrelated but observed std. on phi_1 and phi_2 / sqrt(n_samples)
x = np.random.normal(np.mean(phis[:,0]), np.std(phis[:,0])/np.sqrt(n_samples), 5000)
y = np.random.normal(np.mean(phis[:,1]), np.std(phis[:,1])/np.sqrt(n_samples), 5000)
toy_global_p_uncor = np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
u = 16
#global_p = global_pvalue(u,n1,n2)
toy_global_p_uncor[i] = global_pvalue(u,n1,n2)
# now propagate error assuming uncorrelated Poisson stats on phi_1 and phi_2
x = np.random.normal(np.mean(phis[:,0]), np.sqrt(np.mean(phis[:,0]))/np.sqrt(n_samples), 5000)
y = np.random.normal(np.mean(phis[:,1]), np.sqrt(np.mean(phis[:,1]))/np.sqrt(n_samples), 5000)
toy_global_p_uncor_pois = np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
u = 16
#global_p = global_pvalue(u,n1,n2)
toy_global_p_uncor_pois[i] = global_pvalue(u,n1,n2)
counts, bins, patches = plt.hist(toy_global_p_uncor_pois, bins=50, normed=True, color='g', alpha=.3)
counts, bins, patches = plt.hist(toy_global_p_uncor, bins=bins, normed=True, color='r', alpha=.3)
counts, bins, patches = plt.hist(toy_global_p, bins=bins, normed=True, color='b', alpha=.3)
plt.xlabel('global p-value')
#plt.ylim(0,1.4*np.max(counts))
plt.legend(('uncorrelated Poisson approx from mean',
'uncorrelated Gaus. approx of observed dist',
'correlated Gaus. approx of observed dist'),
bbox_to_anchor=(1., 1.3))
Explanation: Study statistical uncertainty
Outline:
1. generate n_samples likelihood scans using the GP
1. make exclusion sets, calculate phi1, phi2 for levels u1, u2
1. look at histogram of phi1, phi2 (notice that they are narrower than Poisson)
1. look at 2-d scatter of phi1, phi2 (notice that they are positively correlated)
1. look at 2-d scatter of coefficients n1, n2 (notice that they are negatively correlated)
1. Compare three ways of propagating error to global p-value
1. Poisson, no correlations: estimate uncertainty on Exp[phi1] as sqrt(exp_phi_1)/sqrt(n_samples)
1. Gaus approx of observed, no correlations: estimate uncertainty on Exp[phi1] as std(exp_phi_1)/sqrt(n_samples)
1. Gaus approx of observed, with correlations: estimate covariance of (Exp[phi1], Exp[phi2]) with cov(phi1, phi2)/n_samples -- note since it's covariance we divide by n_samples not sqrt(n_samples)
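The cov/n_samples step is just the standard result that the covariance of sample means scales as 1/n. A quick self-contained check on synthetic draws (the mean and covariance values are arbitrary, chosen only for illustration):

```python
import numpy as np

# For i.i.d. scans, the covariance of the sample *means* is the sample
# covariance divided by n_samples (hence cov/n_samples, not cov/sqrt(n_samples)).
rng = np.random.default_rng(1)
n_samples = 4000
true_cov = np.array([[4.0, 1.5], [1.5, 2.0]])
phis = rng.multivariate_normal([8.0, 3.0], true_cov, size=n_samples)

mean_cov = np.cov(phis[:, 0], phis[:, 1]) / n_samples
print(mean_cov * n_samples)  # recovers roughly the true covariance
```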
Conclusions:
The number of islands (as quantified by the Euler characteristic) is not Poisson distributed.
Deviation from the Poisson distribution will depend on the properties of the underlying 2-d fit (equivalently, the Gaussian Process kernel). In this example, the deviation isn't that big. It is probably generic that the uncertainty in phi is smaller than Poisson because one can only fit in so many islands into the scan... so it's probably more like a Binomial.
Unsurprisingly there is also a positive correlation between the number of islands at levels u1 and u2.
This turns into an anti-correlation on the coefficients n1 and n2.
The two effects lead to the Poisson approximation overestimating the uncertainty on the global p-value.
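One concrete way to see "narrower than Poisson" is the Fano factor (the variance-to-mean ratio: 1 for Poisson counts, below 1 for under-dispersed counts). A toy sketch; the binomial sample here is an illustrative stand-in for under-dispersed island counts, not derived from the scan data:

```python
import numpy as np

def fano_factor(samples):
    """Variance-to-mean ratio: 1 for Poisson, below 1 for under-dispersed counts."""
    samples = np.asarray(samples, dtype=float)
    return samples.var() / samples.mean()

rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=4.0, size=100000)
binomial_counts = rng.binomial(n=8, p=0.5, size=100000)  # mean 4, variance 2

print(fano_factor(poisson_counts))   # close to 1
print(fano_factor(binomial_counts))  # close to 0.5
```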
End of explanation |
9,693 | Given the following text description, write Python code to implement the functionality described.
Description:
Write a function that returns true if the given number is the multiplication of 3 prime numbers
and false otherwise.
Knowing that (a) is less than 100.
Example:
is_multiply_prime(30) == True
30 = 2 * 3 * 5
| Python Code:
def is_multiply_prime(a):
def is_prime(n):
for j in range(2,n):
if n%j == 0:
return False
return True
for i in range(2,101):
if not is_prime(i): continue
for j in range(2,101):
if not is_prime(j): continue
for k in range(2,101):
if not is_prime(k): continue
if i*j*k == a: return True
return False |
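An equivalent check without the triple loop, counting prime factors with multiplicity by trial division (a sketch for comparison, not the reference solution above):

```python
def is_multiply_prime_alt(a):
    """True iff a is a product of exactly three primes (counted with
    multiplicity), matching the brute-force version above."""
    count, d, n = 0, 2, a
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1  # leftover factor is prime
    return count == 3

print(is_multiply_prime_alt(30))  # True  (2 * 3 * 5)
print(is_multiply_prime_alt(8))   # True  (2 * 2 * 2)
print(is_multiply_prime_alt(10))  # False (only two prime factors)
```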
9,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1.0):
"""Compute the 2d quantum well wave function."""
xcoord, ycoord = np.meshgrid(x,y)
xpor = np.sin((nx*np.pi*xcoord)/L)
ypor = np.sin((ny*np.pi*ycoord)/L)
psi = (2/L)*xpor*ypor
return psi
print(well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1))
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,10)
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
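As a sanity check on the formula, the $2/L$ prefactor makes the state normalised, so integrating $|\psi|^2$ over the well should return 1 for any quantum numbers. A sketch (well2d is restated so the cell is self-contained):

```python
import numpy as np

def well2d(x, y, nx, ny, L=1.0):
    """2d quantum-well wavefunction on a meshgrid of x and y."""
    xcoord, ycoord = np.meshgrid(x, y)
    return (2.0 / L) * np.sin(nx * np.pi * xcoord / L) * np.sin(ny * np.pi * ycoord / L)

def trapz2d(z, x, y):
    """Trapezoid-rule integral of z (indexed [y, x]) over the grid."""
    inner = np.sum((z[:, 1:] + z[:, :-1]) * np.diff(x) / 2.0, axis=1)
    return np.sum((inner[1:] + inner[:-1]) * np.diff(y) / 2.0)

x = np.linspace(0.0, 1.0, 201)
norm = trapz2d(well2d(x, x, 3, 2)**2, x, x)
print(norm)  # ~1.0
```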
End of explanation
psi = well2d(np.linspace(0,1,100),np.linspace(0,1,100),3,2,1.0)
plt.figure(figsize=(10,7))
plt.contourf(np.linspace(0,1,100),np.linspace(0,1,100),psi,cmap='gist_rainbow')
plt.xlim(0,1)
plt.ylim(0,1)
plt.colorbar()
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to you visualization.
First make a plot using one of the contour functions:
End of explanation
plt.figure(figsize=(10,7))
plt.pcolor(np.linspace(0,1,100),np.linspace(0,1,100),psi,cmap='rainbow')
plt.xlim(0,1)
plt.ylim(0,1)
plt.colorbar()
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
9,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2016.12.09 - work log - prelim_month - no single names
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Table-of-Contents" data-toc-modified-id="Table-of-Contents-1"><span class="toc-item-num">1 </span>Table of Contents</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.1"><span class="toc-item-num">2.1 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.2"><span class="toc-item-num">2.2 </span>Setup - Initialize Django</a></span></li></ul></li><li><span><a href="#Data-cleanup" data-toc-modified-id="Data-cleanup-3"><span class="toc-item-num">3 </span>Data cleanup</a></span><ul class="toc-item"><li><span><a href="#Remove-single-name-reliability-data" data-toc-modified-id="Remove-single-name-reliability-data-3.1"><span class="toc-item-num">3.1 </span>Remove single-name reliability data</a></span><ul class="toc-item"><li><span><a href="#Single-name-data-assessment" data-toc-modified-id="Single-name-data-assessment-3.1.1"><span class="toc-item-num">3.1.1 </span>Single-name data assessment</a></span></li><li><span><a href="#Delete-single-name-data" data-toc-modified-id="Delete-single-name-data-3.1.2"><span class="toc-item-num">3.1.2 </span>Delete single-name data</a></span></li></ul></li></ul></li><li><span><a href="#Coding-to-look-into" data-toc-modified-id="Coding-to-look-into-4"><span class="toc-item-num">4 </span>Coding to look into</a></span><ul class="toc-item"><li><span><a href="#Match-for-just-first-name?---TODO" data-toc-modified-id="Match-for-just-first-name?---TODO-4.1"><span class="toc-item-num">4.1 </span>Match for just first name? 
- TODO</a></span></li></ul></li><li><span><a href="#Debugging" data-toc-modified-id="Debugging-5"><span class="toc-item-num">5 </span>Debugging</a></span><ul class="toc-item"><li><span><a href="#No-mentions-in-Article_Data-view-page?---FIXED" data-toc-modified-id="No-mentions-in-Article_Data-view-page?---FIXED-5.1"><span class="toc-item-num">5.1 </span>No mentions in Article_Data view page? - FIXED</a></span></li></ul></li></ul></div>
Setup
Back to Table of Contents
Setup - Imports
Back to Table of Contents
Step1: Setup - Initialize Django
Back to Table of Contents
First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings.
Step2: Data cleanup
Back to Table of Contents
Remove single-name reliability data
Back to Table of Contents
Next, remove all reliability data that refers to a single name using the "View reliability name information" screen
Step3: Is there only one person with first name Kate?
Step4: So... If there is a single match in the database for a single name part (first name or last name), but the match contains more than just the first name, I don't want to call that a match unless there is some sort of associated ID that also matches.
Debugging
Back to Table of Contents
No mentions in Article_Data view page? - FIXED
Back to Table of Contents
For all subjects here | Python Code:
import datetime
print( "packages imported at " + str( datetime.datetime.now() ) )
%pwd
Explanation: 2016.12.09 - work log - prelim_month - no single names
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Table-of-Contents" data-toc-modified-id="Table-of-Contents-1"><span class="toc-item-num">1 </span>Table of Contents</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.1"><span class="toc-item-num">2.1 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.2"><span class="toc-item-num">2.2 </span>Setup - Initialize Django</a></span></li></ul></li><li><span><a href="#Data-cleanup" data-toc-modified-id="Data-cleanup-3"><span class="toc-item-num">3 </span>Data cleanup</a></span><ul class="toc-item"><li><span><a href="#Remove-single-name-reliability-data" data-toc-modified-id="Remove-single-name-reliability-data-3.1"><span class="toc-item-num">3.1 </span>Remove single-name reliability data</a></span><ul class="toc-item"><li><span><a href="#Single-name-data-assessment" data-toc-modified-id="Single-name-data-assessment-3.1.1"><span class="toc-item-num">3.1.1 </span>Single-name data assessment</a></span></li><li><span><a href="#Delete-single-name-data" data-toc-modified-id="Delete-single-name-data-3.1.2"><span class="toc-item-num">3.1.2 </span>Delete single-name data</a></span></li></ul></li></ul></li><li><span><a href="#Coding-to-look-into" data-toc-modified-id="Coding-to-look-into-4"><span class="toc-item-num">4 </span>Coding to look into</a></span><ul class="toc-item"><li><span><a href="#Match-for-just-first-name?---TODO" data-toc-modified-id="Match-for-just-first-name?---TODO-4.1"><span class="toc-item-num">4.1 </span>Match for just first name? 
- TODO</a></span></li></ul></li><li><span><a href="#Debugging" data-toc-modified-id="Debugging-5"><span class="toc-item-num">5 </span>Debugging</a></span><ul class="toc-item"><li><span><a href="#No-mentions-in-Article_Data-view-page?---FIXED" data-toc-modified-id="No-mentions-in-Article_Data-view-page?---FIXED-5.1"><span class="toc-item-num">5.1 </span>No mentions in Article_Data view page? - FIXED</a></span></li></ul></li></ul></div>
Setup
Back to Table of Contents
Setup - Imports
Back to Table of Contents
End of explanation
%run django_init.py
Explanation: Setup - Initialize Django
Back to Table of Contents
First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings.
End of explanation
# imports
from context_text.article_coding.manual_coding.manual_article_coder import ManualArticleCoder
from context_text.models import Article_Subject
# declare variables
my_coder = None
subject = None
person_name = ""
person_instance = None
person_match_list = None
# create ManualArticleCoder and Article_Subject instance
my_coder = ManualArticleCoder()
subject = Article_Subject()
# set up look up of "Kate"
person_name = "Kate"
# lookup person - returns person and confidence score inside
# Article_Person descendent instance.
subject = my_coder.lookup_person( subject,
person_name,
create_if_no_match_IN = False,
update_person_IN = False )
# retrieve information from Article_Person
person_instance = subject.person
person_match_list = subject.person_match_list # list of Person instances
if ( person_instance is not None ):
# Found person for "Kate":
print( "Found person for \"" + str( person_name ) + "\": " + str( person_instance ) )
else:
# no person instance found.
print( "No person instance found for \"" + str( person_name ) + "\"" )
#-- END check to see if person_instance --#
if ( ( person_match_list is not None ) and ( len( person_match_list ) > 0 ) ):
print( "match list:" )
for match_person in person_match_list:
# output each person for now.
print( "- " + str( match_person ) )
#-- END loop over person_match_list --#
else:
print( "match list is None or empty." )
#-- END check to see if there is a match list.
Explanation: Data cleanup
Back to Table of Contents
Remove single-name reliability data
Back to Table of Contents
Next, remove all reliability data that refers to a single name using the "View reliability name information" screen:
https://research.local/research/context/analysis/reliability/names/disagreement/view
To start, enter the following in fields there:
Label: - "prelim_month"
Coders to compare (1 through ==>): - 2
Reliability names filter type: - Select "Lookup"
[Lookup] - Person has first name, no other name parts. - CHECK the checkbox
You should see lots of entries where coders detected people who were mentioned only by their first name.
Single-name data assessment
Back to Table of Contents
Need to look at each instance where a person has a single name part.
Most are probably instances where the computer correctly detected the name part, but where you don't have enough name to match it to a person so the human coding protocol directed them to not capture the name fragment.
However, there might be some where a coder made a mistake and just captured a name part for a person whose full name was in the story. To check, click the "Article ID" in the column that has a link to article ID. It will take you to a view of the article where all the people who coded the article are included, with each detection of a mention or quotation displayed next to the paragraph where the person was originally first detected.
So for each instance of a single name part:
click on the article ID link in the row to go to the article and check to see if there is person whose name the fragment is a part of ( https://research.local/research/context/text/article/article_data/view_with_text/ ).
If there is a person with a full name to which the name fragment is a reference, check to see if the coder has data for the full person.
if not, merge:
go to the disagreement view page: https://research.local/research/context/analysis/reliability/names/disagreement/view
Configure:
Label: - "prelim_month"
Coders to compare (1 through ==>): - 2
Reliability names filter type: - Select "Lookup"
[Lookup] - Associated Article IDs (comma-delimited): - Enter the ID of the article the coding belonged to.
this will bring up all coding for the article whose ID you entered.
In the "select" column, click the checkbox in the row where there is a single name part that needs to be merged.
In the "merge INTO" column, click the checbox in the row with the full name for that person.
In "Reliability Names Action", choose "Merge Coding --> FROM 1 SELECTED / INTO 1"
Click "Do Action" button.
Remove the Reliability_Names row with the name fragment from reliability data.
Delete single-name data
Back to Table of Contents
To get rid of all matching in this list, click the checkbox in the "select" column next to each one you want to delete (sorry, no "select all" just yet), choose "Delete selected" from the "Reliability names action:" field at the top of the list, then click the "Do action" button.
Reliability_Names records Removed:
| ID | Article | Article_Data | Article_Subject |
|------|------|------|------|
| 8618 | Article 20739 | Article_Data 2980 | 11006 (AS) - Christopher ( id = 2776; capture_method = OpenCalais_REST_API_v2 ) (mentioned; individual) ==> name: Christopher |
| Article <AID> | Article_Data <ADID> | <str( Article_Subject )> |
Coding to look into
Back to Table of Contents
Coding decisions to look at more closely:
Match for just first name? - TODO
Back to Table of Contents
First name "Kate" was matched to "Kate Gosselin" but "Gosselin" is nowhere in the article.
Article Data 2980, article 20739 - 11003 (AS) - Gosselin, Kate ( id = 1608; capture_method = OpenCalais_REST_API_v2 ) (mentioned; individual) ==> name: Kate
Not sure where "Gosselin" came from - need to look into the lookup for "Kate".
article 20739 - https://research.local/research/context/text/article/article_data/view_with_text/?article_id=20739
article data 2980 - https://research.local/research/context/text/article/article_data/view/?article_id=20739&article_data_id_select=2980
End of explanation
# imports
from context_text.models import Person
# declare variables
name_string = ""
test_person_qs = None
test_person = None
test_person_count = -1
# do a lookup, filtering on first name of "Kate".
name_string = "Kate"
test_person_qs = Person.objects.filter( first_name = name_string )
# got anything at all?
if ( test_person_qs is not None ):
# process results - count...
test_person_count = test_person_qs.count()
print( "Found " + str( test_person_count ) + " matches:" )
# ...and loop.
for test_person in test_person_qs:
# output person
print( "- " + str( test_person ) )
#-- END loop over matching persons. --#
#-- END check to see if None --#
Explanation: Is there only one person with first name Kate?
End of explanation
from context_text.models import Article_Data
# lookup the article data in question.
article_data = Article_Data.objects.get( pk = 2980 )
# ha. So, I had a misnamed variable - didn't need to do any more debugging than this.
Explanation: So... If there is a single match in the database for a single name part (first name or last name), but the match contains more than just the first name, I don't want to call that a match unless there is some sort of associated ID that also matches.
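A sketch of that matching rule, using illustrative attribute names rather than the real context_text model API:

```python
def is_confident_match(lookup_name_parts, person, matched_ids=None):
    """Only accept a partial-name match (fewer name parts looked up than the
    stored person has) when a corroborating external ID also matches.
    `person` is a stand-in dict here, not the real Person model."""
    full_parts = [p for p in (person["first_name"], person["last_name"]) if p]
    if len(lookup_name_parts) >= len(full_parts):
        return True  # we looked up at least as much name as is stored
    # partial name: require a corroborating ID
    return bool(matched_ids) and bool(set(matched_ids) & set(person["external_ids"]))

kate = {"first_name": "Kate", "last_name": "Gosselin", "external_ids": {"OC-123"}}
print(is_confident_match(["Kate"], kate))               # False: name fragment only
print(is_confident_match(["Kate"], kate, {"OC-123"}))   # True: ID corroborates
print(is_confident_match(["Kate", "Gosselin"], kate))   # True: full name
```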
Debugging
Back to Table of Contents
No mentions in Article_Data view page? - FIXED
Back to Table of Contents
For all subjects here:
https://research.local/research/context/text/article/article_data/view/?article_id=20739&article_data_id_select=2980)
There are no mentions displayed, even though the counts next to each show there are mentions.
End of explanation |
9,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple (ie. no error checking or sensible engineering) notebook to extract the student answer data from an xml file.
I'm not 100% sure what we actually need for the moment, so I'm just going to extract the student answer data from a single file. That is, I'm not at first going to use the reference answer etc.
Step1: The reference answers are the third daughter node of the tree
Step2: Now iterate over the student answers to get the specific responses. For the moment, we'll just stick to the text and the accuracy. I'll also add an index term to make it a bit easier to convert to a dataframe.
Step3: Next, we need to carry out whatever analysis we want on the answers. In this case, we'll split on whitespace, convert to lower case, and strip punctuation. Feel free to redefine the to_tokens function to do whatever analysis you prefer.
Step4: So now we can apply the to_tokens function to each of the student responses
Step5: OK, good. So now let's see how big the vocabulary is for the complete set
Step6: Now we can set up a document frequency dict
Step7: Now add a tf.idf dict to each of the responses
Step8: Finally, convert the response data into a dataframe | Python Code:
filename='semeval2013-task7/semeval2013-Task7-5way/beetle/train/Core/FaultFinding-BULB_C_VOLTAGE_EXPLAIN_WHY1.xml'
import pandas as pd
from xml.etree import ElementTree as ET
tree=ET.parse(filename)
Explanation: A simple (ie. no error checking or sensible engineering) notebook to extract the student answer data from an xml file.
I'm not 100% sure what we actually need for the moment, so I'm just going to extract the student answer data from a single file. That is, I'm not at first going to use the reference answer etc.
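To make the structure concrete without the semeval files themselves, here is a tiny stand-in document parsed the same way; the tag and attribute names only loosely mirror the real beetle XML:

```python
import xml.etree.ElementTree as ET

# Toy stand-in for the beetle layout: question text, reference answers,
# then student answers as the third child (index 2).
sample = """
<question id="DEMO">
  <questionText>Why is bulb A on?</questionText>
  <referenceAnswers>
    <referenceAnswer id="r1">It is in a closed path.</referenceAnswer>
  </referenceAnswers>
  <studentAnswers>
    <studentAnswer accuracy="correct">the circuit is closed</studentAnswer>
    <studentAnswer accuracy="incorrect">dunno</studentAnswer>
  </studentAnswers>
</question>
"""
root = ET.fromstring(sample)
answers = [(a.attrib["accuracy"], a.text) for a in root[2]]
print(answers)
```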
End of explanation
r=tree.getroot()
r[2]
Explanation: The reference answers are the third daughter node of the tree:
End of explanation
responses_ls=[{'accuracy':a.attrib['accuracy'], 'text':a.text, 'idx':i} for (i, a) in enumerate(r[2])]
responses_ls
Explanation: Now iterate over the student answers to get the specific responses. For the moment, we'll just stick to the text and the accuracy. I'll also add an index term to make it a bit easier to convert to a dataframe.
End of explanation
from string import punctuation
def to_tokens(textIn):
'''Convert the input textIn to a list of tokens'''
tokens_ls=[t.lower().strip(punctuation) for t in textIn.split()]
# remove any empty tokens
return [t for t in tokens_ls if t]
str='"Help!" yelped the banana, who was obviously scared out of his skin.'
print(str)
print(to_tokens(str))
Explanation: Next, we need to carry out whatever analysis we want on the answers. In this case, we'll split on whitespace, convert to lower case, and strip punctuation. Feel free to redefine the to_tokens function to do whatever analysis you prefer.
End of explanation
for resp_dict in responses_ls:
resp_dict['tokens']=to_tokens(resp_dict['text'])
responses_ls
Explanation: So now we can apply the to_tokens function to each of the student responses:
End of explanation
vocab_set=set()
for resp_dict in responses_ls:
vocab_set=vocab_set.union(set(resp_dict['tokens']))
len(vocab_set)
Explanation: OK, good. So now let's see how big the vocabulary is for the complete set:
End of explanation
docFreq_dict={}
for t in vocab_set:
docFreq_dict[t]=len([resp_dict for resp_dict in responses_ls if t in resp_dict['tokens']])
docFreq_dict
Explanation: Now we can set up a document frequency dict:
End of explanation
for resp_dict in responses_ls:
resp_dict['tfidf']={t:resp_dict['tokens'].count(t)/docFreq_dict[t] for t in resp_dict['tokens']}
responses_ls[6]
Explanation: Now add a tf.idf dict to each of the responses:
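Note the weighting above is raw term count divided by document frequency. The more conventional tf.idf uses a log factor; a sketch with made-up frequencies, shown only for comparison:

```python
import math

def tf_idf(term_counts, doc_freq, n_docs):
    """Classic tf-idf: count * log(N / df). The notebook's version above uses
    the simpler count / df; both down-weight terms common across documents."""
    return {t: c * math.log(n_docs / doc_freq[t]) for t, c in term_counts.items()}

doc_freq = {"the": 10, "bulb": 3, "voltage": 1}  # made-up document frequencies
counts = {"the": 3, "bulb": 2, "voltage": 1}
weights = tf_idf(counts, doc_freq, n_docs=10)
print(weights["the"])  # 0.0: a term in every document carries no weight
```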
End of explanation
out_df=pd.DataFrame(index=docFreq_dict.keys())
for resp_dict in responses_ls:
out_df[resp_dict['idx']]=pd.Series(resp_dict['tfidf'], index=out_df.index)
out_df=out_df.fillna(0).T
out_df.head()
accuracy_ss=pd.Series({r['idx']:r['accuracy'] for r in responses_ls})
accuracy_ss.head()
Explanation: Finally, convert the response data into a dataframe:
End of explanation |
9,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An example of GEANT4 in IPython
Version 0.1, released 18/11/2014, alpha
This is a rough example of what can be done within IPython notebook with the GEANT4 python environment. Currently this "linac head" is made of just a vacuum and two scattering foils. The beam distribution currently is just a purely mono-energetic 6 MeV beam projecting out from a point. This is very much a non-physical simulation. It is designed for example purposes only.
As a starting place I recommend fiddling with the geometry class.
If you want to see an advanced example of Geant4 with its python bindings Christopher Poole's linac is great.
Importing GEANT4
Importing the GEANT4 module is as simple as writing
Step1: Setting the requirements for a simulation
Now, before we start up our simulation we need to define a few classes. Geant4 absolutely requires the following three classes
Step2: Now that you have made your geometry class, time to load it up
Step3: The physics list
There are a range of physics options available, generally one would define their own physics class. Just for ease at the moment I have used a standard one (and because I am still learning myself). If someone really wanted to use this for realistic results you would need to customise this yourself, especially a complicated concept called cuts. I have found in one case that the default option gave awkward PDDs.
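For reference, production cuts are typically adjusted through UI commands once the run manager is set up; a minimal macro sketch (the values, and the per-particle variant, are illustrative assumptions, not tuned settings):

```
# tighten the default production range cut
/run/setCut 0.1 mm
# or per particle species
/run/setCutForAGivenParticle e- 0.05 mm
```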
Step4: Generating the beam
Given here is a very simplistic beam. I have made it so that mono-energetic electrons are shot all in the same direction, all from a single point. This of course is not realistic.
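For a slightly less artificial source one could sample a finite focal spot and a small angular spread; a NumPy sketch with illustrative numbers (not measured linac parameters):

```python
import numpy as np

# Gaussian focal-spot positions plus a small forward-peaked angular spread.
rng = np.random.default_rng(42)
n = 1000
x0, y0 = rng.normal(0.0, 1.0, size=(2, n))           # ~1 mm focal spot
theta = np.abs(rng.normal(0.0, np.deg2rad(0.5), n))  # ~0.5 deg divergence
phi = rng.uniform(0.0, 2 * np.pi, n)
direction = np.stack([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      -np.cos(theta)])               # beam travels towards -z
print(direction.shape)  # (3, 1000) unit vectors
```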
Step5: And this is now loading up the generator we have just made
Step6: Initialise the simulation
Now that we have set all of the requirements for our simulation we can initialise it
Step7: Seeing the geometry
To see the beautiful geometry we have made we can use a raytracer macro. This first cell creates the macro file. The second cell runs the file and then displays the created png image.
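A raytracer macro along these lines would do the job (command names are from the standard Geant4 visualisation categories; the notebook's actual macro may differ):

```
/vis/open RayTracer
/vis/viewer/set/viewpointThetaPhi 90. 0.
/vis/drawVolume
/vis/viewer/flush
```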
Step8: Seeing the particle tracks
We can use a dawn macro in order to print out the particle tracks that a given number of generated particles produced.
Step9: Once we have defined how we want to see the tracks we can beam on our pretend linac with 50 electrons.
Step10: The beam on created a prim file which needs to be converted to png for viewing.
Step11: And here is our wonderful simulation
Step12: Versions | Python Code:
%pylab inline
from Geant4 import *
from IPython.display import Image
Explanation: An example of GEANT4 in IPython
Version 0.1, released 18/11/2014, alpha
This is a rough example of what can be done within IPython notebook with the GEANT4 python environment. Currently this "linac head" is made of just a vacuum and two scattering foils. The beam distribution currently is just a purely mono-energetic 6 MeV beam projecting out from a point. This is very much a non-physical simulation. It is designed for example purposes only.
As a starting place I recommend fiddling with the geometry class.
If you want to see an advanced example of Geant4 with its python bindings Christopher Poole's linac is great.
Importing GEANT4
Importing the GEANT4 module is as simple as writing:
from Geant4 import *
The rest of what is written here you might get an idea about from this lecture
End of explanation
class MyDetectorConstruction(G4VUserDetectorConstruction):
"My Detector Construction"
def __init__(self):
G4VUserDetectorConstruction.__init__(self)
self.solid = {}
self.logical = {}
self.physical = {}
self.create_world(side = 4000,
material = "G4_AIR")
self.create_cylinder(name = "vacuum",
radius = 200,
length = 320,
translation = [0,0,900],
material = "G4_Galactic",
colour = [1.,1.,1.,0.1],
mother = 'world')
self.create_cylinder(name = "upper_scatter",
radius = 10,
length = 0.01,
translation = [0,0,60],
material = "G4_Ta",
colour = [1.,1.,1.,0.7],
mother = 'vacuum')
self.create_cylinder(name = "lower_scatter",
radius = 30,
length = 0.01,
translation = [0,0,20],
material = "G4_Al",
colour = [1.,1.,1.,0.7],
mother = 'vacuum')
self.create_applicator_aperture(name = "apature_1",
inner_side = 142,
outer_side = 182,
thickness = 6,
translation = [0,0,449],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "apature_2",
inner_side = 130,
outer_side = 220,
thickness = 12,
translation = [0,0,269],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "apature_3",
inner_side = 110,
outer_side = 180,
thickness = 12,
translation = [0,0,140],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "apature_4",
inner_side = 100,
outer_side = 140,
thickness = 12,
translation = [0,0,59],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_applicator_aperture(name = "cutout",
inner_side = 100,
outer_side = 120,
thickness = 6,
translation = [0,0,50],
material = "G4_Fe",
colour = [1,1,1,0.7],
mother = 'world')
self.create_cube(name = "phantom",
side = 500,
translation = [0,0,-250],
material = "G4_WATER",
colour = [0,0,1,0.4],
mother = 'world')
def create_world(self, **kwargs):
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
side = kwargs['side']
self.solid['world'] = G4Box("world", side/2., side/2., side/2.)
self.logical['world'] = G4LogicalVolume(self.solid['world'],
material,
"world")
self.physical['world'] = G4PVPlacement(G4Transform3D(),
self.logical['world'],
"world", None, False, 0)
visual = G4VisAttributes()
visual.SetVisibility(False)
self.logical['world'].SetVisAttributes(visual)
def create_cylinder(self, **kwargs):
name = kwargs['name']
radius = kwargs['radius']
length = kwargs['length']
translation = G4ThreeVector(*kwargs['translation'])
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
visual = G4VisAttributes(G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
self.solid[name] = G4Tubs(name, 0., radius, length/2., 0., 2*pi)
self.logical[name] = G4LogicalVolume(self.solid[name],
material,
name)
self.physical[name] = G4PVPlacement(None, translation,
name,
self.logical[name],
mother, False, 0)
self.logical[name].SetVisAttributes(visual)
def create_cube(self, **kwargs):
name = kwargs['name']
side = kwargs['side']
translation = G4ThreeVector(*kwargs['translation'])
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
visual = G4VisAttributes(G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
self.solid[name] = G4Box(name, side/2., side/2., side/2.)
self.logical[name] = G4LogicalVolume(self.solid[name],
material,
name)
self.physical[name] = G4PVPlacement(None, translation,
name,
self.logical[name],
mother, False, 0)
self.logical[name].SetVisAttributes(visual)
def create_applicator_aperture(self, **kwargs):
name = kwargs['name']
inner_side = kwargs['inner_side']
outer_side = kwargs['outer_side']
thickness = kwargs['thickness']
translation = G4ThreeVector(*kwargs['translation'])
material = gNistManager.FindOrBuildMaterial(kwargs['material'])
visual = G4VisAttributes(G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
inner_box = G4Box("inner", inner_side/2., inner_side/2., thickness/2. + 1)
outer_box = G4Box("outer", outer_side/2., outer_side/2., thickness/2.)
self.solid[name] = G4SubtractionSolid(name,
outer_box,
inner_box)
self.logical[name] = G4LogicalVolume(self.solid[name],
material,
name)
self.physical[name] = G4PVPlacement(None,
translation,
name,
self.logical[name],
mother, False, 0)
self.logical[name].SetVisAttributes(visual)
# -----------------------------------------------------------------
def Construct(self): # return the world volume
return self.physical['world']
Explanation: Setting the requirements for a simulation
Now, before we start up our simulation we need to define a few classes. Geant4 absolutely requires the following three classes:
A detector geometry, which defines where everything is and what it is made of
A physics list, which specifies which particles to use and how to simulate them
and a primary generator, which defines how you generate your particles: their type, energy, direction, position, etc.
This small example only includes those three classes; however, things such as recording where energy was deposited require further elements in the simulation, such as scoring.
A good overview of the base Geant4 simulation requirements is found in the Geant4 documentation.
Creating the geometry class
Everything written in the next cell defines what exists in your world; to get an idea of what is going on, this is the place to start. A list of Geant4 materials will be useful for knowing what you can quickly make your world out of. My recommendation would be to have a fiddle around and see what happens. That's the way I tend to learn best. Once you desire to extend yourself you might want to have a look at some of the Geant4 documentation:
learn how Geant4 handles "solids", "logicals", and "physicals"
then delve deeper into various available Geant4 solids
End of explanation
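The applicator aperture above is built as a subtraction solid: an outer box with an inner box punched out (the inner box's half-thickness is padded with `+ 1` so the subtraction has no coincident surfaces). A plain-Python sketch of the same membership logic — not Geant4 code, with arbitrary dimensions and origin-centred boxes — can make the construction concrete:

```python
# Sketch only (plain Python, not Geant4): membership test for the
# "outer box minus inner box" subtraction solid used for the aperture.
# Boxes are axis-aligned and centred on the origin; units are arbitrary.

def in_box(point, half_x, half_y, half_z):
    """True if point=(x, y, z) lies within an origin-centred box."""
    x, y, z = point
    return abs(x) <= half_x and abs(y) <= half_y and abs(z) <= half_z

def in_aperture(point, inner_side, outer_side, thickness):
    """Mimic G4SubtractionSolid(outer_box, inner_box): in outer, not in inner."""
    outer = in_box(point, outer_side / 2, outer_side / 2, thickness / 2)
    inner = in_box(point, inner_side / 2, inner_side / 2, thickness / 2)
    return outer and not inner

print(in_aperture((40, 0, 0), inner_side=60, outer_side=100, thickness=10))  # True: in the frame
print(in_aperture((10, 0, 0), inner_side=60, outer_side=100, thickness=10))  # False: in the opening
```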
# set geometry
detector = MyDetectorConstruction()
gRunManager.SetUserInitialization(detector)
Explanation: Now that you have made your geometry class, time to load it up
End of explanation
# set physics list
physics_list = FTFP_BERT()
gRunManager.SetUserInitialization(physics_list)
Explanation: The physics list
There is a range of physics options available; generally one would define their own physics class. Just for ease I have used a standard one here (and because I am still learning myself). If you really wanted to use this for realistic results you would need to customise the physics list yourself, especially the production cuts, which are a complicated concept. I have found in one case that the default option gave awkward PDDs (percentage depth dose curves).
End of explanation
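For reference, production cuts are controlled through UI commands such as `/run/setCut`, which you could put in a macro or run with `gApplyUICommand` after initialisation. A fragment of that kind — the 0.1 mm value is purely illustrative, not a recommendation:

```
# illustrative value only -- appropriate cuts depend on your geometry and physics
/run/setCut 0.1 mm
```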
class MyPrimaryGeneratorAction(G4VUserPrimaryGeneratorAction):
"My Primary Generator Action"
def __init__(self):
G4VUserPrimaryGeneratorAction.__init__(self)
particle_table = G4ParticleTable.GetParticleTable()
electron = particle_table.FindParticle(G4String("e-"))
positron = particle_table.FindParticle(G4String("e+"))
gamma = particle_table.FindParticle(G4String("gamma"))
beam = G4ParticleGun()
beam.SetParticleEnergy(6*MeV)
beam.SetParticleMomentumDirection(G4ThreeVector(0,0,-1))
beam.SetParticleDefinition(electron)
beam.SetParticlePosition(G4ThreeVector(0,0,1005))
self.particleGun = beam
def GeneratePrimaries(self, event):
self.particleGun.GeneratePrimaryVertex(event)
Explanation: Generating the beam
Given here is a very simplistic beam. I have made it so that mono-energetic electrons are shot all in the same direction, all from a single point. This of course is not realistic.
End of explanation
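A slightly more realistic generator might smear the starting position over a Gaussian beam spot instead of using a single point. Here is a numpy sketch of just the sampling step (the 2 mm sigma is a made-up value); in a real generator the sampled (x, y) would be passed to `SetParticlePosition` on each event:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigma = 2.0  # hypothetical beam-spot width in mm

def sample_spot(n):
    """Sample n (x, y) starting positions from a circular Gaussian spot."""
    return rng.normal(loc=0.0, scale=sigma, size=(n, 2))

positions = sample_spot(10_000)
print(positions.shape)        # (10000, 2)
print(positions.std(axis=0))  # each entry close to sigma
```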
primary_generator_action = MyPrimaryGeneratorAction()
gRunManager.SetUserAction(primary_generator_action)
Explanation: And this is now loading up the generator we have just made
End of explanation
# Initialise
gRunManager.Initialize()
Explanation: Initialise the simulation
Now that we have set all of the requirements for our simulation we can initialise it
End of explanation
%%file macros/raytrace.mac
/vis/open RayTracer
/vis/rayTracer/headAngle 340.
/vis/rayTracer/eyePosition 200 200 250 cm
/vis/rayTracer/trace images/world.jpg
gUImanager.ExecuteMacroFile('macros/raytrace.mac')
# Show image
Image(filename="images/world.jpg")
Explanation: Seeing the geometry
To see the beautiful geometry we have made we can use a raytracer macro. This first cell creates the macro file. The second cell runs the file and then displays the created png image.
End of explanation
%%file macros/dawn.mac
/vis/open DAWNFILE
/vis/scene/create
/vis/scene/add/volume
/vis/scene/add/trajectories smooth
/vis/modeling/trajectories/create/drawByCharge
/vis/modeling/trajectories/drawByCharge-0/default/setDrawStepPts true
/vis/modeling/trajectories/drawByCharge-0/default/setStepPtsSize 2
/vis/scene/endOfEventAction accumulate 1000
/vis/scene/add/hits
/vis/sceneHandler/attach
#/vis/scene/add/axes 0. 0. 0. 10. cm
/vis/viewer/set/targetPoint 0.0 0.0 300.0 mm
/vis/viewer/set/viewpointThetaPhi 90 0
/vis/viewer/zoom 1
gUImanager.ExecuteMacroFile('macros/dawn.mac')
Explanation: Seeing the particle tracks
We can use a DAWN macro in order to print out the particle tracks that a given number of generated particles produced.
End of explanation
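The `%%file` magic hard-codes the macro contents. If you want to vary a setting, such as the number of accumulated events, it can be convenient to generate the macro from Python instead. A minimal sketch — the file name and the subset of commands are arbitrary:

```python
from pathlib import Path

def write_dawn_macro(path, accumulate=1000):
    """Write a DAWNFILE visualisation macro with a configurable event count."""
    lines = [
        "/vis/open DAWNFILE",
        "/vis/scene/create",
        "/vis/scene/add/volume",
        "/vis/scene/add/trajectories smooth",
        f"/vis/scene/endOfEventAction accumulate {accumulate}",
        "/vis/sceneHandler/attach",
    ]
    Path(path).write_text("\n".join(lines) + "\n")
    return path

write_dawn_macro("dawn_sketch.mac", accumulate=50)
print(Path("dawn_sketch.mac").read_text().splitlines()[0])  # /vis/open DAWNFILE
```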
gRunManager.BeamOn(50)
!mv g4_00.prim images/world.prim
Explanation: Once we have defined how we want to see the tracks we can beam on our pretend linac with 50 electrons.
End of explanation
!dawn -d images/world.prim
!convert images/world.eps images/world.png
Explanation: The beam on created a prim file which needs to be converted to png for viewing.
End of explanation
Image("images/world.png")
# use %env (not !VAR=...) so the variables persist for the kernel process;
# a ! command runs in a throwaway subshell and the assignment is lost
%env G4VRML_DEST_DIR=.
%env G4VRMLFILE_MAX_FILE_NUM=1
%env G4VRMLFILE_VIEWER=echo
gApplyUICommand("/vis/open VRML2FILE")
gRunManager.BeamOn(1)
!mv g4_00.wrl images/world.wrl
Explanation: And here is our wonderful simulation
End of explanation
%load_ext version_information
%version_information matplotlib, numpy
Explanation: Versions
End of explanation |
9,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_raw_objects
Step1: Continuous data is stored in objects of type
Step2: Information about the channels contained in the
Step3: You can also pass an index directly to the
Step4: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
Step5: Notice the different scalings of these types
Step6: You can restrict the data to a specific time range
Step7: And drop channels by name
Step8: Concatenating | Python Code:
from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
Explanation: .. _tut_raw_objects:
The :class:Raw <mne.io.Raw> data structure: continuous data
End of explanation
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.RawFIF(data_path, preload=True, verbose=False)
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
Explanation: Continuous data is stored in objects of type :class:Raw <mne.io.RawFIF>.
The core data structure is simply a 2D numpy array (channels × samples,
._data) combined with an :class:Info <mne.Info> object
(.info) (:ref:tut_info_objects).
The most common way to load continuous data is from a .fif file. For more
information, see :ref:loading data from other formats <ch_convert>, or
:ref:creating it from scratch <tut_creating_data_structures>.
Loading continuous data
End of explanation
print('Shape of data array:', raw._data.shape)
array_data = raw._data[0, :1000]
_ = plt.plot(array_data)
Explanation: Information about the channels contained in the :class:Raw <mne.io.RawFIF>
object is contained in the :class:Info <mne.Info> attribute.
This is essentially a dictionary with a number of relevant fields (see
:ref:tut_info_objects).
Indexing data
There are two ways to access the data stored within :class:Raw
<mne.io.RawFIF> objects. One is by accessing the underlying data array, and
the other is to index the :class:Raw <mne.io.RawFIF> object directly.
To access the data array of :class:Raw <mne.io.Raw> objects, use the
_data attribute. Note that this is only present if preload==True.
End of explanation
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
Explanation: You can also pass an index directly to the :class:Raw <mne.io.RawFIF>
object. This will return an array of times, as well as the data representing
those timepoints. This may be used even if the data is not preloaded:
End of explanation
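Note that the start/stop indices used in the slice above are just seconds multiplied by the sampling rate. A small numpy sketch of that conversion, using a made-up 100 Hz rate rather than the sample dataset's own:

```python
import numpy as np

sfreq = 100.0                                 # hypothetical sampling rate in Hz
start, stop = int(sfreq * 1), int(sfreq * 3)  # 1 s .. 3 s as sample indices
times = np.arange(start, stop) / sfreq        # seconds for each sample

print(start, stop)          # 100 300
print(times[0], times[-1])  # 1.0 2.99
```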
# Pull all MEG gradiometer channels:
# Make sure to use .copy() or it will overwrite the data
meg_only = raw.copy().pick_types(meg=True)
eeg_only = raw.copy().pick_types(meg=False, eeg=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.copy().pick_types(meg='grad')
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.copy().pick_channels(pick_chans)
print(meg_only, eeg_only, grad_only, specific_chans, sep='\n')
Explanation: Selecting subsets of channels and samples
It is possible to use more intelligent indexing to extract data, using
channel names, types or time ranges.
End of explanation
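Under the hood, picking channels by name amounts to mapping names onto row indices of the (channels × samples) array. A plain-Python sketch of that lookup (the channel names here are invented):

```python
ch_names = ["MEG 0111", "MEG 0112", "EEG 001", "EEG 002"]  # invented names
wanted = ["EEG 002", "MEG 0112"]

# map each requested name to its row index in the (channels x samples) array
picks = [ch_names.index(name) for name in wanted]
print(picks)  # [3, 1]
```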
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
del eeg, meg, meg_only, grad_only, eeg_only, data, specific_chans
Explanation: Notice the different scalings of these types
End of explanation
raw = raw.crop(0, 50) # in seconds
print('New time range from', raw.times.min(), 's to', raw.times.max(), 's')
Explanation: You can restrict the data to a specific time range
End of explanation
nchan = raw.info['nchan']
raw = raw.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', nchan, 'to', raw.info['nchan'])
Explanation: And drop channels by name
End of explanation
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 40)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
Explanation: Concatenating :class:Raw <mne.io.RawFIF> objects
:class:Raw <mne.io.RawFIF> objects can be concatenated in time by using the
:func:append <mne.io.RawFIF.append> function. For this to work, they must
have the same number of channels and their :class:Info
<mne.Info> structures should be compatible.
End of explanation |
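The same-number-of-channels requirement mirrors plain array concatenation along the time axis, which is roughly what happens to the underlying data. A numpy sketch (shapes invented):

```python
import numpy as np

# two (channels x samples) segments with the same number of channels
seg1 = np.zeros((5, 100))
seg2 = np.ones((5, 250))

combined = np.concatenate([seg1, seg2], axis=1)  # stack along time
print(combined.shape)  # (5, 350)

# a segment with a different channel count cannot be appended
bad = np.zeros((4, 100))
try:
    np.concatenate([seg1, bad], axis=1)
except ValueError as err:
    print("incompatible:", type(err).__name__)
```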
9,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color
Step1: Simulate data under a specified model
We are interested in estimating the Ne and divergence time values on this tree.
Step2: True measurements that will inform our priors
In BPP we can set a prior on the percent divergence $\tau$ (pronounced Tau) between the root and tips of the tree. When combined with an estimate of the per-site-per-generation mutation rate ($\mu$) and generation time (g) this can be converted into units of years. In our ipcoal simulation we actually have access to the root ancestral sequence as well as the tip sequences, so we can measure the true percent sequence divergence.
Step3: In addition to setting a prior on Tau we must also set a prior on $\theta$ (pronounced theta), the population mutation rate ($\theta$=4Ne$\mu$). Here we know the Ne and $\mu$ used in the simulation scenario, so again we can calculate it exactly. For real data we will not know these prior values precisely, and so we will want to put a wide prior to incorporate our uncertainty.
Step4: BPP analysis setup
Step5: The parameters of the object
Step6: Set priors and plot priors of transformed values
Here we will enter our expectations on the generation time and mutation rate of our simulated organisms so that we can toggle the theta and tau priors until the resulting prior distributions on Ne and T seem reasonable.
In our simulation the root divergence time was 3M generations. Assuming a generation time of 1 and prior on root tau of ~3% (as we measured above) yields a prior on the root divergence time with 95% CI from about 0.5-6.5 Ma. So this is a pretty good but not highly informative prior.
Similarly, our simulation used Ne=200K or 500K on different lineages. Assuming a wide prior on the per-site per-generation mutation rate ($\mu$) with highest density at 1e-8 (as used in our simulations) and a prior on $\theta$ with highest density at about 0.01 (we calculated 0.008 from the simulated data above) yields a 95% CI on Ne that is again accurate but not highly informative.
These priors incorporate estimated knowledge about our organisms, by describing gentime and mutrate as distributions, and puts informative but wide priors on the parameters we hope to infer (theta and tau).
Step7: Run inference
Distribute jobs in parallel and run replicate jobs that start MCMC from different random seeds.
Step8: Re-load results
You can load your results anytime later after your runs have finished by providing the name and workdir for the job to ipa.bpp. This allows you to mess around with plotting and comparing distributions later. You also have the option here to combine the posteriors from multiple replicate runs (that used the same data and priors), or to analyze each separately.
Step9: The "00" tables result
The main result of the "00" algorithm is a dataframe with the inferred parameters of the multispecies coalescent model.
Step10: Converting units from the table results
The results are often difficult to interpret without converting the units into a more easily interpretable type. For example, the theta parameters represent the population effective mutation rate (4*Ne*u), but we are typically more interested in just knowing Ne. Similarly, the tau parameters are in units of percent sequence divergence (u/gen) but we would instead like to know the divergence time estimates in units of years, or millions of years.
To convert these units we need an estimate of the per-site per-generation mutation-rate (u) and the generation time of our organism (g). Rather than offering a point estimate it is more appropriate in the Bayesian context to provide these values a statistical distribution. This is similar to how we described our priors using a gamma or inverse-gamma distribution earlier, and how we used a distribution of u and g to check the reasonable conversion of the units.
In the example below the converted units can be compared with the true simulation scenario at the beginning of this notebook where we set the divergence times and Ne values on the tree. Among the tip-level taxa we can see that mean Ne estimates are around 2e5 or ~6e5, which is pretty accurate, and the root divergence time is at 3e6, which is correct.
Step11: Plot posteriors
You can plot several posterior distributions on a shared axis using the .draw_posteriors() function. This takes a list of tuples as input, where each tuple is the (mean, variance) of a parameter estimate. You can draw the posteriors from a single analysis using this function or combine results from several different analyses if you wish. | Python Code:
# conda install ipyrad -c conda-forge -c bioconda
# conda install ipcoal -c conda-forge
# conda install bpp -c conda-forge -c eaton-lab
import ipyrad.analysis as ipa
import pandas as pd
import numpy as np
import toytree
import toyplot
import ipcoal
Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> bpp (alg 00)
The ipyrad-analysis bpp tool w/ algorithm 00 can be used to infer parameters on a fixed species tree. Please see the full BPP documentation for details about BPP analyses. The ipyrad-analysis wrapper is intended to make it easy to automate many BPP analyses on large RAD-seq datasets.
Load packages
End of explanation
# generate a tree with divergence times in generations
TREE = toytree.rtree.unittree(ntips=5, treeheight=3e6, seed=123)
# set larger Ne on one lineage than the other
TREE = TREE.set_node_values("Ne", {i: 5e5 for i in (0, 1, 2, 5, 6)}, default=2e5)
# draw in 'p' style to show Ne
TREE.draw(ts='p', node_hover=True);
# create a simulation model for this tree/network with 3 diploids per spp.
model = ipcoal.Model(tree=TREE, nsamples=6, mut=1e-8)
# simulate N loci
model.sim_loci(nloci=1000, nsites=150)
# write result to a database file
model.write_loci_to_hdf5(name="test-bpp", outdir="/tmp", diploid=True)
Explanation: Simulate data under a specified model
We are interested in estimating the Ne and divergence time values on this tree.
End of explanation
# concatenate seqs into single alignment
concat = np.concatenate(model.seqs, axis=1)
# get ancestral root node sequence
rootseq = np.concatenate(model.ancestral_seq)
# what is sequence divergence relative to root?
np.sum(rootseq != concat[0]) / rootseq.size
Explanation: True measurements that will inform our priors
In BPP we can set a prior on the percent divergence $\tau$ (pronounced Tau) between the root and tips of the tree. When combined with an estimate of the per-site-per-generation mutation rate ($\mu$) and generation time (g) this can be converted into units of years. In our ipcoal simulation we actually have access to the root ancestral sequence as well as the tip sequences, so we can measure the true percent sequence divergence.
End of explanation
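As a sanity check on the numbers above, the measured ~3% root-to-tip divergence back-converts to the simulated root age: the divergence time in generations is tau divided by the mutation rate, and multiplying by the generation time gives years. A quick sketch using this notebook's values:

```python
tau = 0.03      # root-to-tip sequence divergence (subs/site), measured above
mu = 1e-8       # per-site per-generation mutation rate used in the simulation
gentime = 1.0   # assumed generation time in years

t_generations = tau / mu            # expected divergence time in generations
t_years = t_generations * gentime   # and in years

print(t_generations)  # ~3e6 generations, matching treeheight=3e6
print(t_years)        # ~3e6 years with a 1-year generation time
```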
# theta in simulation
4 * TREE.treenode.Ne * 1e-8
Explanation: In addition to setting a prior on Tau we must also set a prior on $\theta$ (pronounced theta), the population mutation rate ($\theta$=4Ne$\mu$). Here we know the Ne and $\mu$ used in the simulation scenario, so again we can calculate it exactly. For real data we will not know these prior values precisely, and so we will want to put a wide prior to incorporate our uncertainty.
End of explanation
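Going the other way, an observed theta can be converted back into an effective population size with Ne = theta / (4 * mu). A quick sketch using the values from this notebook:

```python
mu = 1e-8              # per-site per-generation mutation rate
theta = 4 * 2e5 * mu   # theta for a lineage with Ne = 200K, as computed above

ne = theta / (4 * mu)  # invert theta = 4 * Ne * mu
print(theta, ne)       # 0.008 and Ne recovered ~= 200000
```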
SEQS = "/tmp/test-bpp.seqs.hdf5"
IMAP = {
"r0": ["r0-0", "r0-1", "r0-2"],
"r1": ["r1-0", "r1-1", "r1-2"],
"r2": ["r2-0", "r2-1", "r2-2"],
"r3": ["r3-0", "r3-1", "r3-2"],
"r4": ["r4-0", "r4-1", "r4-2"],
}
# create a bpp object to run algorithm 00
tool = ipa.bpp(
name="test-bpp",
workdir="/tmp",
data=SEQS,
guidetree=TREE,
imap=IMAP,
infer_sptree=0,
infer_delimit=0,
maxloci=1000,
burnin=2e3,
nsample=2e4,
sampfreq=5,
)
Explanation: BPP analysis setup
End of explanation
tool.kwargs
Explanation: The parameters of the object
End of explanation
# toggle these prior settings
tool.kwargs["thetaprior"] = (2.5, 0.004)
tool.kwargs["tauprior"] = (3, 0.01)
# draw the distributions when using our assumptions for transforming
tool.draw_priors(
gentime_min=0.99, gentime_max=1.01,
mutrate_min=5e-9, mutrate_max=2e-8,
);
Explanation: Set priors and plot priors of transformed values
Here we will enter our expectations on the generation time and mutation rate of our simulated organisms so that we can toggle the theta and tau priors until the resulting prior distributions on Ne and T seem reasonable.
In our simulation the root divergence time was 3M generations. Assuming a generation time of 1 and prior on root tau of ~3% (as we measured above) yields a prior on the root divergence time with 95% CI from about 0.5-6.5 Ma. So this is a pretty good but not highly informative prior.
Similarly, our simulation used Ne=200K or 500K on different lineages. Assuming a wide prior on the per-site per-generation mutation rate ($\mu$) with highest density at 1e-8 (as used in our simulations) and a prior on $\theta$ with highest density at about 0.01 (we calculated 0.008 from the simulated data above) yields a 95% CI on Ne that is again accurate but not highly informative.
These priors incorporate estimated knowledge about our organisms, by describing gentime and mutrate as distributions, and puts informative but wide priors on the parameters we hope to infer (theta and tau).
End of explanation
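To get a feel for what a tuple like `thetaprior=(2.5, 0.004)` implies, you can sample the prior directly. Assuming the (a, b) tuple parameterizes an inverse-gamma distribution, as in BPP v4, its mean is b/(a-1); a numpy sketch:

```python
import numpy as np

a, b = 2.5, 0.004  # thetaprior used above
rng = np.random.default_rng(seed=0)

# if Y ~ Gamma(shape=a, scale=1), then b / Y ~ InvGamma(a, b)
samples = b / rng.gamma(a, 1.0, size=400_000)

print(samples.mean())                        # close to the analytic mean b/(a-1) ~= 0.00267
print(np.quantile(samples, [0.025, 0.975]))  # rough 95% interval on theta
```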
tool.run(auto=True, nreps=2, force=True)
Explanation: Run inference
Distribute jobs in parallel and run replicate jobs that start MCMC from different random seeds.
End of explanation
import ipyrad.analysis as ipa
# reload results by creating object with a given name and workdir
tool = ipa.bpp(name="test-bpp", workdir="/tmp")
# call summarize on the tool
table, mcmc = tool.summarize_results("00", individual_results=False)
Explanation: Re-load results
You can load your results anytime later after your runs have finished by providing the name and workdir for the job to ipa.bpp. This allows you to mess around with plotting and comparing distributions later. You also have the option here to combine the posteriors from multiple replicate runs (that used the same data and priors), or to analyze each separately.
End of explanation
# tables is a summary of the posteriors
table
Explanation: The "00" tables result
The main result of the "00" algorithm is a dataframe with the inferred parameters of the multispecies coalescent model.
End of explanation
df = tool.transform(mcmc, 0.99, 1.01, 5e-9, 2e-8).T
df
Explanation: Converting units from the table results
The results are often difficult to interpret without converting the units into a more easily interpretable type. For example, the theta parameters represent the population effective mutation rate (4*Ne*u), but we are typically more interested in just knowing Ne. Similarly, the tau parameters are in units of percent sequence divergence (u/gen) but we would instead like to know the divergence time estimates in units of years, or millions of years.
To convert these units we need an estimate of the per-site per-generation mutation-rate (u) and the generation time of our organism (g). Rather than offering a point estimate it is more appropriate in the Bayesian context to provide these values a statistical distribution. This is similar to how we described our priors using a gamma or inverse-gamma distribution earlier, and how we used a distribution of u and g to check the reasonable conversion of the units.
In the example below the converted units can be compared with the true simulation scenario at the beginning of this notebook where we set the divergence times and Ne values on the tree. Among the tip-level taxa we can see that mean Ne estimates are around 2e5 or ~6e5, which is pretty accurate, and the root divergence time is at 3e6, which is correct.
End of explanation
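The transform above can also be sketched by hand: draw u and g from their assumed ranges and convert a theta and tau value through them, which propagates the rate uncertainty into Ne and divergence time. A numpy sketch using the ranges from this notebook (the theta and tau point values are illustrative stand-ins for posterior draws):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000

# assumed uncertainty on mutation rate and generation time (ranges from above)
u = rng.uniform(5e-9, 2e-8, size=n)
g = rng.uniform(0.99, 1.01, size=n)

theta = 0.008  # illustrative posterior theta
tau = 0.03     # illustrative posterior root tau

ne_draws = theta / (4 * u)  # effective population size
t_draws = (tau / u) * g     # divergence time in years

print(np.quantile(ne_draws, [0.025, 0.5, 0.975]))  # spread in Ne from rate uncertainty
print(np.quantile(t_draws, [0.025, 0.5, 0.975]))   # spread in divergence time (years)
```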
# get only the divergence time result
subd = df.loc[[i for i in df.index if "div_" in i], :]
# get results as tuple pairs with (mean, var)
tuples = [
(i, j) for (i, j) in
zip(subd["mean"], subd.loc[:, "S.D"] ** 2)
]
# draw the results
c, a, m = tool.draw_posteriors(
gamma_tuples=tuples,
labels=subd.index,
);
# style the axes
a.y.ticks.labels.angle = -90
Explanation: Plot posteriors
You can plot several posterior distributions on a shared axis using the .draw_posteriors() function. This takes a list of tuples as input, where each tuple is the (mean, variance) of a parameter estimate. You can draw the posteriors from a single analysis using this function or combine results from several different analyses if you wish.
End of explanation |