8,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine-tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images, then easily train a simple classifier on top of that. What we'll do is take the output of the first fully connected layer, which has 4096 units, after thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg.
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine-tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images, then easily train a simple classifier on top of that. What we'll do is take the output of the first fully connected layer, which has 4096 units, after thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. Download the parameter file using the next cell.
End of explanation
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'wb') as f:  # ndarray.tofile needs a binary-mode file
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:  # read back in binary mode to match tofile
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
labels_vecs = lb.fit_transform(labels)  # keep lb around: its classes_ attribute is used for plot labels later
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
from sklearn import model_selection
ss = model_selection.StratifiedShuffleSplit(n_splits=1, test_size=0.2)
splitter = ss.split(codes, labels)
split_i = next(splitter)
train_x, train_y = codes[split_i[0]], labels_vecs[split_i[0]]
val_x, val_y = codes[split_i[1][1::2]], labels_vecs[split_i[1][1::2]]
test_x, test_y = codes[split_i[1][::2]], labels_vecs[split_i[1][::2]]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
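To make the generator-of-indices pattern concrete, here's a toy stratified splitter sketched in plain numpy — the function and data below are illustrative stand-ins, not scikit-learn itself:

```python
import numpy as np

def stratified_split_indices(y, test_size=0.2, seed=0):
    """Toy stratified shuffle split: yield (train_idx, test_idx) index arrays
    with each class represented in the test set proportionally."""
    rng = np.random.RandomState(seed)
    y = np.asarray(y)
    train_idx, test_idx = [], []
    for cls in np.unique(y):
        cls_idx = rng.permutation(np.where(y == cls)[0])
        n_test = int(round(test_size * len(cls_idx)))
        test_idx.extend(cls_idx[:n_test])
        train_idx.extend(cls_idx[n_test:])
    yield np.array(train_idx), np.array(test_idx)

# 100 toy labels: 50 of class 0, 30 of class 1, 20 of class 2
y = np.array([0] * 50 + [1] * 30 + [2] * 20)
splitter = stratified_split_indices(y, test_size=0.2)
train_idx, test_idx = next(splitter)  # consume the generator, as with ss.split
```

Indexing arrays with `train_idx` and `test_idx` then gives splits whose class proportions match the full data set.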
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
output_size = labels_vecs.shape[1]
fc1 = tf.contrib.layers.fully_connected(inputs_, 1024)
logits = tf.contrib.layers.fully_connected(fc1, output_size)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of which is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
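To see what that cost computes, here's the same softmax cross entropy sketched in plain numpy on toy logits — illustrative only, since the notebook uses TensorFlow's built-in op:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_cost(logits, onehot_labels):
    probs = softmax(logits)
    return -np.mean(np.sum(onehot_labels * np.log(probs), axis=1))

# two toy samples, three classes, confidently correct logits
logits = np.array([[10.0, 0.0, 0.0],
                   [0.0, 10.0, 0.0]])
onehot = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
cost = cross_entropy_cost(logits, onehot)  # small, since the predictions are right
```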
End of explanation
def get_batches(x, y, n_batches=10):
"""Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
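As a self-contained check of the behavior described above — the last batch absorbing the leftover samples — here's the same batching logic applied to toy arrays:

```python
import numpy as np

def get_batches(x, y, n_batches=10):
    """Same batching logic: the last batch absorbs any leftover samples."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            X, Y = x[ii:], y[ii:]
        yield X, Y

x = np.arange(103)          # 103 samples don't divide evenly into 10 batches
y = np.arange(103) * 2
batches = list(get_batches(x, y, n_batches=10))
```

With 103 samples and 10 batches, the first nine batches hold 10 samples each and the last holds 13, so no data is dropped.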
End of explanation
epochs = 20
num_batches = 64
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
batch_i = 0
for x, y in get_batches(train_x, train_y, num_batches):
batch_i += 1
feed = {inputs_: x, labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}, Batch: {}, Training loss: {:.4f}".format(epoch + 1, epochs, batch_i, loss))
if batch_i % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(epoch + 1, epochs),
"Iteration: {}".format(batch_i),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
8,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collecting Tweets
This notebook shows how to collect tweets. For analyzing words you'll usually want to collect by search term, but collecting tweets from a specific user is also possible.
Step1: Collect Tweets by search term
Function
Step2: Collect Tweets from user
Function
Step3: If you want to sort the tweets by retweets or favorites, you'll need to convert the retweets and favorites columns from unicode into integers
Step4: For fun
Step5: Make word frequency dataframe
Step6: Now plot relative frequency results. We see from word_freq_df that the largest relative frequency terms are specialized things like "sotu" (state of the union) and specific policy-related words like "middle-class." We'll increase the requirement on background words to remove these policy-specific words and get at more general words that the president's twitter account nevertheless uses more often than usual
Step7: At least 1000 background occurrences
Step8: The month of January appears to carry special import with the president's twitter account.
At least 5000 background occurrences
Step9: And finally we'll look at the least presidential words on Barack Obama's twitter account | Python Code:
import sys
sys.path.append('..')
from twords.twords import Twords
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
# this pandas line makes the dataframe display all text in a line; useful for seeing entire tweets
pd.set_option('display.max_colwidth', -1)
twit_mars = Twords()
# set path to folder that contains jar files for twitter search
twit_mars.jar_folder_path = "../jar_files_and_background/"
Explanation: Collecting Tweets
This notebook shows how to collect tweets. For analyzing words you'll usually want to collect by search term, but collecting tweets from a specific user is also possible.
End of explanation
twit_mars.create_java_tweets(total_num_tweets=100, tweets_per_run=50, querysearch="mars rover",
final_until=None, output_folder="mars_rover",
decay_factor=4, all_tweets=True)
twit_mars.get_java_tweets_from_csv_list()
twit_mars.tweets_df.head(5)
Explanation: Collect Tweets by search term
Function: create_java_tweets
This function collects tweets and puts them into a single folder in the form needed to read them into a Twords object using get_java_tweets_from_csv_list.
For more information on the create_java_tweets arguments, see the source code in the twords.py file
total_num_tweets: (int) total number of tweets to collect
tweets_per_run: (int) number of tweets per call to java tweet collector; from experience best to keep around 10,000 for large runs (for runs less than 10,000 can just set tweets_per_run to same value as total_num_tweets)
querysearch: (string) search query - for example, "charisma" or "mars rover"; a space between words implies an "and" operator between them: only tweets with both terms will be returned
final_until: (string) the date to search backward in time from; has form '2015-07-31'; for example, if date is '2015-07-31', then tweets are collected backward in time from that date. If left as None, uses current date to search backward from
output_folder: (string) name of folder to put output files in
decay_factor: (int) how quickly to wind down tweet search if errors occur and no tweets are found in a run - a failed run will count as tweets_per_run/decay_factor tweets found, so the higher the factor the longer the program will try to search for tweets even if it gathers none in a run
all_tweets: (bool) whether to return "all tweets" (as defined on twitter website) or "top tweets"; the details behind these designations are mysteries only Twitter knows, but from experiment on website "top tweets" appear to be subset of "all tweets" that Twitter considers interesting; there is no guarantee that this will return literally every tweet, and experiment suggests even "all tweets" does not return every single tweet that given search query may match
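The decay_factor accounting can be illustrated with a toy model — the function below is a hypothetical sketch of the wind-down behavior described above, not part of Twords:

```python
def runs_until_target(total_num_tweets, tweets_per_run, decay_factor, run_failed_flags):
    """Toy model of the decay_factor accounting: a failed run still credits
    tweets_per_run / decay_factor toward the total, so the search winds down
    even when nothing is found."""
    credited, runs = 0.0, 0
    for failed in run_failed_flags:
        runs += 1
        credited += tweets_per_run / decay_factor if failed else tweets_per_run
        if credited >= total_num_tweets:
            break
    return runs

# With decay_factor=4, each failed run of 50 credits 12.5 tweets,
# so a 100-tweet target is abandoned after 8 consecutive failures.
all_failed = runs_until_target(100, 50, 4, [True] * 20)
all_succeeded = runs_until_target(100, 50, 4, [False] * 20)
```

A higher decay_factor credits less per failed run, so the collector keeps trying longer before giving up.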
Try collecting tweets about the mars rover:
End of explanation
twit = Twords()
twit.jar_folder_path = "../jar_files_and_background/"
twit.get_all_user_tweets("barackobama", tweets_per_run=500)
twit = Twords()
twit.data_path = "barackobama"
twit.get_java_tweets_from_csv_list()
twit.convert_tweet_dates_to_standard()
Explanation: Collect Tweets from user
Function: get_all_user_tweets
This function collects all of a user's tweets that are available from the Twitter website by scrolling. As an example, a run of this function collected about 87% of the tweets from user barackobama.
To avoid problems with scrolling on the website (which is what the java tweet collector does programmatically), it's best if tweets_per_run is set to around 500.
This function may sometimes return multiple copies of the same tweet, which can be removed in the resulting pandas dataframe once the data is read into Twords.
user: (string) twitter handle of user to gather tweets from
tweets_per_run: (int) number of tweets to collect in a single call to java tweet collector; some experimentation is required to see which number ends up dropping the fewest tweets - 500 seems to be a decent value
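Duplicates like the ones mentioned above are easy to drop once the tweets are in a dataframe — a toy sketch using pandas drop_duplicates:

```python
import pandas as pd

# toy frame with one duplicated tweet row
df = pd.DataFrame({"id": ["1", "2", "2", "3"],
                   "text": ["a", "b", "b", "c"]})
deduped = df.drop_duplicates()
```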
End of explanation
twit.tweets_df["retweets"] = twit.tweets_df["retweets"].map(int)
twit.tweets_df["favorites"] = twit.tweets_df["favorites"].map(int)
twit.tweets_df.sort_values("favorites", ascending=False)[:5]
twit.tweets_df.sort_values("retweets", ascending=False)[:5]
Explanation: If you want to sort the tweets by retweets or favorites, you'll need to convert the retweets and favorites columns from unicode into integers:
End of explanation
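The same convert-then-sort pattern on a toy frame, to show why the .map(int) step matters for ordering (string counts would sort lexicographically):

```python
import pandas as pd

# toy tweets_df where the counts were parsed as strings
df = pd.DataFrame({"text": ["a", "b", "c"],
                   "retweets": ["10", "2", "33"]})
df["retweets"] = df["retweets"].map(int)  # same .map(int) conversion as above
top = df.sort_values("retweets", ascending=False)
```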
twit.background_path = '../jar_files_and_background/freq_table_72319443_total_words_twitter_corpus.csv'
twit.create_Background_dict()
twit.create_Stop_words()
twit.keep_column_of_original_tweets()
twit.lower_tweets()
twit.keep_only_unicode_tweet_text()
twit.remove_urls_from_tweets()
twit.remove_punctuation_from_tweets()
twit.drop_non_ascii_characters_from_tweets()
twit.drop_duplicate_tweets()
twit.convert_tweet_dates_to_standard()
twit.sort_tweets_by_date()
Explanation: For fun: A look at Barack Obama's tweets
The Twords word frequency analysis can also be applied to these tweets. In this case there was no search term.
End of explanation
twit.create_word_bag()
twit.make_nltk_object_from_word_bag()
twit.create_word_freq_df(10000)
twit.word_freq_df.sort_values("log relative frequency", ascending = False, inplace = True)
twit.word_freq_df.head(20)
twit.tweets_containing("sotu")[:10]
Explanation: Make word frequency dataframe:
End of explanation
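The "log relative frequency" used below compares a word's rate in the tweets against its rate in a large background corpus. A toy sketch of that core computation — the counts here are made up, not the real corpus:

```python
import math
from collections import Counter

# toy counts standing in for the tweets and the large background corpus
tweet_counts = Counter({"sotu": 50, "the": 400})
background_counts = Counter({"sotu": 100, "the": 4000000})
tweet_total = sum(tweet_counts.values())
background_total = sum(background_counts.values())

def log_relative_frequency(word):
    f_tweets = tweet_counts[word] / tweet_total
    f_background = background_counts[word] / background_total
    return math.log(f_tweets / f_background)

# "sotu" is heavily over-represented in the tweets; "the" is not
```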
num_words_to_plot = 32
background_cutoff = 100
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: Now plot relative frequency results. We see from word_freq_df that the largest relative frequency terms are specialized things like "sotu" (state of the union) and specific policy-related words like "middle-class." We'll increase the requirement on background words to remove these policy-specific words and get at more general words that the president's twitter account nevertheless uses more often than usual:
At least 100 background occurrences:
End of explanation
num_words_to_plot = 50
background_cutoff = 1000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: At least 1000 background occurrences:
End of explanation
num_words_to_plot = 32
background_cutoff = 5000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: The month of January appears to carry special import with the president's twitter account.
At least 5000 background occurrences:
End of explanation
num_words_to_plot = 32
background_cutoff = 5000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=False).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: And finally we'll look at the least presidential words on Barack Obama's twitter account:
End of explanation |
8,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonequilibrium Switching Free Energy Analysis
This notebook is useful for examining the results of nonequilibrium switching calculations performed with the Perses tool. It enables examination of work traces as well as trajectories.
Import required libraries
Step1: Set the appropriate variables to the directories where the output files are stored
Step3: Now we can plot the forward and reverse work traces
These are the cumulative work going from lambda=0 to lambda=1, and the reverse
Step4: What should I be looking for in the work traces?
Is there a place in the trajectory where the work suddenly becomes anomalously high? This might indicate either a bad protocol or a bug in the code
In general, these plots illustrate how fast the system is changing as lambda is changed. The faster that change is, the larger (and more unfavorable) the variance of the final free energy estimate
Let's take a closer look at the trajectories
See something weird? Sometimes it helps to just visualize the trajectory of the atoms. We can do that fairly straightforwardly | Python Code:
import numpy as np
import nglview
from bokeh.plotting import figure, output_notebook, show
from bokeh.layouts import row, column
import simtk.unit as unit
import mdtraj as md
import plotting_tools
output_notebook()
Explanation: Nonequilibrium Switching Free Energy Analysis
This notebook is useful for examining the results of nonequilibrium switching calculations performed with the Perses tool. It enables examination of work traces as well as trajectories.
Import required libraries
End of explanation
trajectory_directory = "/Users/grinawap/solvent_test_4"
trajectory_prefix = "cdk02" #these were set by the yaml file
data_reader = plotting_tools.NonequilibriumSwitchingAnalysis(trajectory_directory, trajectory_prefix)
cum_work_shape = np.shape(data_reader.cumulative_work)
n_steps_per_protocol = cum_work_shape[2]
protocol_steps = np.linspace(0, 1, n_steps_per_protocol)
n_protocols = cum_work_shape[1]
Explanation: Set the appropriate variables to the directories where the output files are stored
End of explanation
#create a convenience function to add lines to a bokeh plot
def plot_line(fig, x, y):
fig.line(x, y)
def plot_work_values(protocol_steps, data_reader, n_protocols):
"""Convenience function to plot work values from nonequilibrium switching calculations.
Arguments
---------
protocol_steps : [n_steps] array
Indexes the lambda value at each point in the corresponding work trajectory
data_reader : NonequilibriumSwitchingAnalysis object
object that contains the switching data
n_protocols : int
The total number of protocols in each direction (should be symmetric)
"""
x_axis_label = "lambda value"
y_axis_label = "Work (kT)"
work_figure = figure(title="Nonequilibrium Switching Work", x_axis_label=x_axis_label,
y_axis_label=y_axis_label)
for protocol_index in range(n_protocols):
work_figure.line(protocol_steps, data_reader.cumulative_work[0, protocol_index, :], line_color="green")
work_figure.line(protocol_steps, data_reader.cumulative_work[1, protocol_index, :], line_color="red")
show(work_figure)
plot_work_values(protocol_steps, data_reader, n_protocols)
Explanation: Now we can plot the forward and reverse work traces
These are the cumulative work going from lambda=0 to lambda=1, and the reverse
End of explanation
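The cumulative_work array plotted above holds running sums over the protocol; the relationship between per-step work increments and the plotted traces can be sketched with toy values:

```python
import numpy as np

# toy per-step work increments for one protocol, in units of kT
work_increments = np.array([0.1, 0.3, -0.05, 0.2])
cumulative_work = np.cumsum(work_increments)  # what the green/red traces plot
total_work = cumulative_work[-1]              # work for the full switch
```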
nonequilibrium_trajectory = data_reader.get_nonequilibrium_trajectory("forward", 0)
view = nglview.NGLWidget()
view.add_trajectory(nonequilibrium_trajectory)
view.add_ball_and_stick()
view
Explanation: What should I be looking for in the work traces?
Is there a place in the trajectory where the work suddenly becomes anomalously high? This might indicate either a bad protocol or a bug in the code
In general, these plots illustrate how fast the system is changing as lambda is changed. The faster that change is, the larger (and more unfavorable) the variance of the final free energy estimate
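One way to see why high-variance work is unfavorable: free energy estimates built from nonequilibrium work, such as the exponential-average (Jarzynski) estimator, are dominated by rare low-work samples. A toy numpy sketch (work values in kT, illustrative only):

```python
import numpy as np

def exp_estimator(works_kT):
    """Free energy estimate (kT) from forward work via -ln<exp(-W)>,
    computed with a log-sum-exp shift for numerical stability."""
    works_kT = np.asarray(works_kT, dtype=float)
    m = (-works_kT).max()
    return -(m + np.log(np.mean(np.exp(-works_kT - m))))

slow_switching = np.array([2.0, 2.0, 2.0])  # low-variance work values
fast_switching = np.array([1.0, 2.0, 9.0])  # high-variance work values
```

With identical work values the estimate simply equals that work; with scattered values the estimate falls below the mean work, and its statistical error grows with the spread.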
Let's take a closer look at the trajectories
See something weird? Sometimes it helps to just visualize the trajectory of the atoms. We can do that fairly straightforwardly:
End of explanation |
8,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel Density Estimation for outlier detection
Motivation
A common assumption in pattern recognition is that all classes are known. This assumption does not hold in many real-world applications, where classes which are not currently part of the model do exist.
In the context of kernel densities, we hypothesize that a novel class is one in which samples in the feature space occur in regions of low density relative to each known class, but in high density relative to the overall population.
Step1: Load sample glass data.
Step2: Read SDSS data, preprocessed by colour indices and reddenning correction
Step3: Use the same features as reported in Alasdair Tran's Honours thesis 2015.
Step4: Bandwidth Selection
We use a low percentile (here the 20th) of the pairwise Euclidean distances as a heuristic to determine the bandwidth size.
Step5: (TODO) Define the training, validation, and test sets, and select appropriate Gaussian kernel bandwidth. Use sklearn's grid search to find a good bandwidth.
Step6: Estimate a kernel density estimator on the training set
Step7: Use the fitted density to estimate the log density for all items in the test set
Step8: Choose an appropriate threshold for identifying outliers
Step9: Identify the outliers in the dataset. (TODO) Export or visualise appropriately for getting feedback from the astronomers.
Step10: Calculate class-specific densities
Step11: Discussion
We need to check for outliers on the whole dataset. This can be done by cross validation (an outer loop). However, we also need to check that the various estimated densities from each split are similar.
We have not used the labels. Consider the difference between regions of low density in the unlabelled case and regions of low density when using only data from a single class.
The original downloaded SDSS data also contains uncertainties about the magnitude, which can be used to cross check whether outliers are noisy measurements.
A set of sanity checks are usually used for cleaning up the data before analysis. Cross check that the outliers mostly fail the sanity checks | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.neighbors import KernelDensity
%matplotlib inline
Explanation: Kernel Density Estimation for outlier detection
Motivation
A common assumption in pattern recognition is that all classes are known. This assumption does not hold in many real-world applications, where classes which are not currently part of the model do exist.
In the context of kernel densities, we hypothesize that a novel class is one in which samples in the feature space occur in regions of low density relative to each known class, but in high density relative to the overall population.
End of explanation
data = pd.read_csv("../data/glass.csv", index_col=False,names=["class"] + list(range(8)))
data_features = [x for x in range(8)]
classes = np.unique(data["class"])
data.head()
Explanation: Load sample glass data.
End of explanation
# data = pd.read_hdf('../data/sdss.h5', 'sdss')
# data.head()
Explanation: Read SDSS data, preprocessed by colour indices and reddenning correction
End of explanation
# target_col = 'class'
# data_features = ['psfMag_r_w14', 'psf_u_g_w14', 'psf_g_r_w14', 'psf_r_i_w14',
# 'psf_i_z_w14', 'petroMag_r_w14', 'petro_u_g_w14', 'petro_g_r_w14',
# 'petro_r_i_w14', 'petro_i_z_w14', 'petroRad_r']
Explanation: Use the same features as reported in Alasdair Tran's Honours thesis 2015.
End of explanation
#h = 1/np.sqrt(0.02) # Bandwidth coming from Alasdair's SVM experiments
def percentile_pairwise_distance(X, Y=None):
if Y is None: Y = X
distances = metrics.pairwise_distances(X, Y)
return np.percentile(distances, 20)
h = percentile_pairwise_distance(data[data_features].values)
print("Bandwidth:", h)
Explanation: Bandwidth Selection
We use a low percentile (here the 20th) of the pairwise Euclidean distances as a heuristic to determine the bandwidth size.
End of explanation
num_data = len(data)
idx_all = np.random.permutation(num_data)
num_train = int(np.floor(0.7*num_data))
idx_train = idx_all[:num_train]
idx_test = idx_all[num_train:]
Explanation: (TODO) Define the training, validation, and test sets, and select appropriate Gaussian kernel bandwidth. Use sklearn's grid search to find a good bandwidth.
End of explanation
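The grid search mentioned in the TODO above can be sketched with sklearn's GridSearchCV, which works with KernelDensity because its score method returns the held-out log-likelihood. The candidate grid and the synthetic stand-in data below are placeholders:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))  # stand-in for the training features

# Each candidate bandwidth is scored by the mean held-out log-likelihood.
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.logspace(-1, 1, 10)},
                    cv=5)
grid.fit(X)
best_bandwidth = grid.best_params_['bandwidth']
print(best_bandwidth)
```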
kde = KernelDensity(kernel='gaussian', bandwidth=h, rtol=1e-5)
Xtrain = data[data_features].iloc[idx_train]
kde.fit(Xtrain)
Explanation: Estimate a kernel density estimator on the training set
End of explanation
Xtest = data[data_features].iloc[idx_test]
pred = kde.score_samples(Xtest)
Explanation: Use the fitted density to estimate the log density for all items in the test set
End of explanation
_ = plt.hist(pred, bins=50)
idx_sort = np.argsort(pred)
pred[idx_sort[:10]]
Explanation: Choose an appropriate threshold for identifying outliers
End of explanation
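Rather than picking a fixed cutoff by eye from the histogram, one data-driven option is to flag the lowest few percent of log-densities. A small sketch (the 1% level and the synthetic scores are arbitrary choices):

```python
import numpy as np

rng = np.random.RandomState(0)
log_density = rng.normal(loc=-4.0, scale=1.0, size=1000)  # stand-in for KDE scores

# Flag the lowest 1% of log-densities as outlier candidates.
threshold = np.percentile(log_density, 1)
outlier_mask = log_density < threshold

print(threshold, outlier_mask.sum())
```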
idx_outlier = idx_test[pred < -7]
data.iloc[idx_outlier]
Explanation: Identify the outliers in the dataset. (TODO) Export or visualise appropriately for getting feedback from the astronomers.
End of explanation
densities = {}
for cl in classes:
Xtrain_cl = Xtrain[data["class"]==cl]
densities[cl] = KernelDensity(kernel='gaussian', bandwidth=h, rtol=1e-5)
densities[cl].fit(Xtrain_cl)
Explanation: Calculate class-specific densities
End of explanation
class_pred = {}
for cl in classes:
class_pred[cl] = densities[cl].score_samples(Xtest)
class_pred[cl] -= pred
fig = plt.figure(figsize=(16,10))
ax = fig.add_subplot(231)
_ = ax.hist(class_pred[1], 30)
ax = fig.add_subplot(232)
_ = ax.hist(class_pred[2], 30)
ax = fig.add_subplot(233)
_ = ax.hist(class_pred[3], 30)
ax = fig.add_subplot(234)
_ = ax.hist(class_pred[5], 30)
ax = fig.add_subplot(235)
_ = ax.hist(class_pred[6], 30)
ax = fig.add_subplot(236)
_ = ax.hist(class_pred[7], 30)
Explanation: Discussion
We need to check for outliers on the whole dataset. This can be done by cross validation (an outer loop). However, we also need to check that the various estimated densities from each split are similar.
We have not used the labels. Consider the difference between regions of low density in the unlabelled case and regions of low density when using only data from a single class.
The original downloaded SDSS data also contains uncertainties about the magnitude, which can be used to cross check whether outliers are noisy measurements.
A set of sanity checks is usually used for cleaning up the data before analysis. Cross-check that the outliers mostly fail these sanity checks.
End of explanation |
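The outer cross-validation loop suggested in the discussion can be sketched with KFold, so that every point is scored by a density it did not help fit. The bandwidth and the synthetic stand-in data below are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 2))  # stand-in for the feature matrix

oof_log_density = np.empty(len(X))
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    kde = KernelDensity(kernel='gaussian', bandwidth=1.0)
    kde.fit(X[train_idx])
    # Each held-out point is scored by a density it did not influence.
    oof_log_density[test_idx] = kde.score_samples(X[test_idx])

print(oof_log_density[:5])
```

Comparing the per-fold fitted densities (e.g. their scores on a common grid) would also address the check that the estimates from each split are similar.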
8,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Getting Started with gensim
The goal of this tutorial is to get a new user up-and-running with gensim. This notebook covers the following objectives.
## Objectives
Installing gensim.
Accessing the gensim Jupyter notebook tutorials.
Presenting the core concepts behind the library.
Installing gensim
Before we can start using gensim for natural language processing (NLP), you will need to install Python along with gensim and its dependencies. It is suggested that a new user install a prepackaged Python distribution; a number of popular distributions are listed below.
Anaconda
EPD
WinPython
Once Python is installed, we will use pip to install the gensim library. First, we will make sure that Python is installed and accessible from the command line. From the command line, execute the following command
Step1: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, a list of all Wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenize our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
Step2: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
Step3: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common.
Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to
Step4: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts
Step5: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors
Step6: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
Model
Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim, documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space, where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors" | Python Code:
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
Explanation: # Getting Started with gensim
The goal of this tutorial is to get a new user up-and-running with gensim. This notebook covers the following objectives.
## Objectives
Installing gensim.
Accessing the gensim Jupyter notebook tutorials.
Presenting the core concepts behind the library.
Installing gensim
Before we can start using gensim for natural language processing (NLP), you will need to install Python along with gensim and its dependencies. It is suggested that a new user install a prepackaged Python distribution; a number of popular distributions are listed below.
Anaconda
EPD
WinPython
Once Python is installed, we will use pip to install the gensim library. First, we will make sure that Python is installed and accessible from the command line. From the command line, execute the following command:
which python
The resulting address should correspond to the Python distribution that you installed above. Now that we have verified that we are using the correct version of Python, we can install gensim from the command line as follows:
pip install -U gensim
To verify that gensim was installed correctly, you can activate Python from the command line and execute import gensim
$ python
Python 3.5.1 |Anaconda custom (x86_64)| (default, Jun 15 2016, 16:14:02)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import gensim
>>> # No error is a good thing
>>> exit()
Note: Windows users who are following along should either use Windows Subsystem for Linux or another bash implementation for Windows, such as Git Bash.
Accessing the gensim Jupyter notebooks
All of the gensim tutorials (including this document) are stored in Jupyter notebooks. These notebooks allow the user to run the code locally while working through the material. If you would like to run a tutorial locally, first clone the GitHub repository for the project.
bash
$ git clone https://github.com/RaRe-Technologies/gensim.git
Next, start a Jupyter notebook server. This is accomplished using the following bash commands (or starting the notebook server from the GUI application).
bash
$ cd gensim
$ pwd
/Users/user1/home/gensim
$ cd docs/notebooks
$ jupyter notebook
After a few moments, Jupyter will open a web page in your browser and you can access each tutorial by clicking on the corresponding link.
<img src="jupyter_home.png">
This will open the corresponding notebook in a separate tab. The Python code in the notebook can be executed by selecting/clicking on a cell and pressing SHIFT + ENTER.
<img src="jupyter_execute_cell.png">
Note: The order of cell execution matters. Be sure to run all of the code cells in order from top to bottom, or you might encounter errors.
Core Concepts and Simple Example
This section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example. In particular, we will build a model that measures the importance of a particular word.
At a very high level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. The vector representation can then be used to train a model, an algorithm that creates a different, usually more semantic, representation of the data. These three concepts are key to understanding how gensim works, so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.
Corpus
A corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised.
For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.
End of explanation
# Create a set of frequent words
stoplist = set('for a of the and to in'.split())
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in raw_corpus]
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
Explanation: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, a list of all Wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenize our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
End of explanation
import logging
logging.basicConfig(level=logging.DEBUG)
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
Explanation: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
End of explanation
print(dictionary.token2id)
Explanation: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common.
Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:
End of explanation
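The coffee/milk example in the text can be made concrete with a few lines of plain Python (independent of gensim) that build the dense count vector by hand:

```python
vocabulary = ['coffee', 'milk', 'sugar', 'spoon']
document = "coffee milk coffee"

# Dense bag-of-words vector: one count per vocabulary entry; word order is ignored.
tokens = document.split()
bow_vector = [tokens.count(word) for word in vocabulary]

print(bow_vector)  # [2, 1, 0, 0]
```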
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
Explanation: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts:
End of explanation
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
Explanation: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors:
End of explanation
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
Explanation: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
Model
Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim, documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space, where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors":
End of explanation |
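To see roughly where these weights come from, here is a hand computation of tf-idf for "system" (document frequency 3) and "minors" (document frequency 2) over the 9 training documents, using a base-2 idf and L2 normalization as gensim's TfidfModel does by default. Treat this as an illustration, not a reference for gensim's exact implementation:

```python
import math

n_docs = 9       # documents in the training corpus
df_system = 3    # "system" appears in 3 documents
df_minors = 2    # "minors" appears in 2 documents

# tf is 1 for each word in the query "system minors".
w_system = 1 * math.log2(n_docs / df_system)
w_minors = 1 * math.log2(n_docs / df_minors)

# L2-normalize the weight vector.
norm = math.hypot(w_system, w_minors)
weights = (w_system / norm, w_minors / norm)

print(weights)  # "minors" is rarer in the corpus, so it gets the larger weight
```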
8,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    xmin = np.min(image_data)
    xmax = np.max(image_data)
    a, b = 0.1, 0.9
    x = a + (image_data - xmin) * (b - a) / (xmax - xmin)
    return x
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
        with open(pickle_file, 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
# features =
# labels
features = tf.placeholder(tf.float32,(None,features_count))
labels = tf.placeholder(tf.float32,(None,labels_count))
# TODO: Set the weights and biases tensors
# weights =
# biases =
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
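The linear-model math above — logits = WX + b, then softmax, then cross-entropy — can be sanity-checked in NumPy with toy shapes and hypothetical values:

```python
import numpy as np

def softmax(logits):
    # Subtract the row max first for numerical stability
    exps = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exps / exps.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.normal(size=(2, 784))            # 2 flattened 28x28 images
weights = rng.normal(scale=0.1, size=(784, 10))
biases = np.zeros(10)

logits = features @ weights + biases            # linear function WX + b
prediction = softmax(logits)
# one-hot labels for classes 3 and 7, shape (2, 10)
cross_entropy = -np.sum(np.eye(10)[[3, 7]] * np.log(prediction), axis=1)
```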
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
# epochs =
# learning_rate =
epochs = 10
learning_rate = 0.01
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
valid_acc_batch
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
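Why the learning-rate options behave so differently can be seen on a one-dimensional toy problem (minimizing x², not the real network):

```python
def gradient_descent_1d(lr, steps=20, x0=5.0):
    # Each update is x <- x - lr * f'(x), with f(x) = x^2 so f'(x) = 2x
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

# Small rates converge slowly, moderate rates quickly; rates past 1.0 diverge
results = {lr: gradient_descent_1d(lr) for lr in (0.01, 0.1, 0.5, 1.1)}
```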
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_,l = session.run([optimizer,loss], feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
test_accuracy
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
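The slicing inside the training loops above amounts to a simple batching helper; isolated, it looks like this:

```python
def batches(batch_size, features, labels):
    # Yield successive (features, labels) mini-batches, mirroring
    # train_features[batch_start:batch_start + batch_size] in the loop above
    assert len(features) == len(labels)
    for start in range(0, len(features), batch_size):
        yield features[start:start + batch_size], labels[start:start + batch_size]

X = list(range(10))
y = [i * 10 for i in range(10)]
out = list(batches(3, X, y))  # 3 full batches plus one partial batch
```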
8,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scraping the web is fun
Step1: Let's scrape the daily crime log from Cambridge, Massachusetts!
The most powerful web-scraping Python library is lxml...
Step2: Wow that's ugly. Wait, can't pandas do this? Yes!
Step3: Let's get rid of the first 2 rows.
Step4: And now let's parse the text of the 1st column, and pull it all together. | Python Code:
import requests
r = requests.get("http://berkeley.edu")
r
dir(r)
r.encoding
Explanation: Scraping the web is fun:
Do-goodery
Making much money
Amusing much people
End of explanation
import lxml.html as LH
url = "http://www.cambridgema.gov/cpd/newsandalerts/Archives/detail.aspx?path=%2fsitecore%2fcontent%2fhome%2fcpd%2fnewsandalerts%2fArchives%2f2015%2f10%2f10092015"
tree = LH.parse(url)
table = [td.text_content() for td in tree.xpath('//td')]
table[:10]
Explanation: Let's scrape the daily crime log from Cambridge, Massachusetts!
The most powerful web-scraping Python library is lxml...
End of explanation
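If lxml isn't available, the same "text of every `<td>`" extraction can be sketched with the standard-library parser — a simplified stand-in for `tree.xpath('//td')`, not a replacement for lxml:

```python
from html.parser import HTMLParser

class TdText(HTMLParser):
    # Collect the text content of every <td> cell
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
            self.cells.append("")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.cells[-1] += data

p = TdText()
p.feed("<table><tr><td>10/9/2015</td><td>Larceny</td></tr></table>")
```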
import pandas
tables = pandas.read_html(url)
print(type(tables))
print(len(tables))
print(type(tables[0]))
tables[0]
df = tables[0]
Explanation: Wow that's ugly. Wait, can't pandas do this? Yes!
End of explanation
df = df.iloc[2:]
df
Explanation: Let's get rid of the first 2 rows.
End of explanation
def parse_crime(text):
words = text.split()
date, time, crime_type, id_code = words[:4]
description = " ".join(words[4:])
return pandas.Series([date, time, crime_type, id_code, description])
parsed = df[0].apply(parse_crime)
parsed["description"] = df[1]
parsed.columns = ["date", "time", "crime_type", "id_code", "summary", "description"]
parsed
Explanation: And now let's parse the text of the 1st column, and pull it all together.
End of explanation |
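The splitting logic in parse_crime can be checked offline on a single hypothetical row (a plain dict is used here to keep the sketch dependency-free):

```python
def parse_crime_fields(text):
    # Same idea as parse_crime above: the first four whitespace-separated
    # fields are fixed, and the remainder is rejoined as the summary
    words = text.split()
    date, time, crime_type, id_code = words[:4]
    return {
        "date": date,
        "time": time,
        "crime_type": crime_type,
        "id_code": id_code,
        "summary": " ".join(words[4:]),
    }

fields = parse_crime_fields("10/9/2015 09:30 Larceny 15-12345 Bicycle stolen from a porch")
```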
8,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graph attributes can be manipulated through the set_*() and get_*() methods.
Step1: Defaults can be set for nodes and edges.
Step2: Nodes, edges and subgraphs are added and deleted through the respective add_*() and del_*() methods. | Python Code:
graph.set_fontsize('12')
graph.get_fontsize()
Explanation: Graph attributes can be manipulated through the set_*() and get_*() methods.
End of explanation
graph.set_node_defaults(fillcolor='blue', style='filled')
graph.get_node_defaults()
Explanation: Defaults can be set for nodes and edges.
End of explanation
node1 = pydot.Node(name='node1', label='My first node', shape='box')
node2 = pydot.Node(name='node2', label='My second node', color='red')
edge = pydot.Edge(src=node1, dst=node2, label='My Edge', style='dotted')
graph.add_node(node1)
graph.add_node(node2)
graph.add_edge(edge)
graph.write('pydot.dot')
graph.write_png('pydot.png')
Explanation: Nodes, edges and subgraphs are added and deleted through the respective add_*() and del_*() methods.
End of explanation |
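What graph.write('pydot.dot') saves is plain DOT text; a rough hand-rolled emitter shows the shape of that output (an approximation, not pydot's exact formatting):

```python
def dot_source(name, nodes, edges):
    # Emit DOT text for a directed graph: nodes with attributes, then edges
    lines = [f"digraph {name} {{"]
    for node, attrs in nodes:
        attr_str = ", ".join(f'{k}="{v}"' for k, v in attrs.items())
        lines.append(f"  {node} [{attr_str}];")
    for src_node, dst_node, attrs in edges:
        attr_str = ", ".join(f'{k}="{v}"' for k, v in attrs.items())
        lines.append(f"  {src_node} -> {dst_node} [{attr_str}];")
    lines.append("}")
    return "\n".join(lines)

src = dot_source(
    "G",
    [("node1", {"label": "My first node", "shape": "box"}),
     ("node2", {"label": "My second node", "color": "red"})],
    [("node1", "node2", {"label": "My Edge", "style": "dotted"})],
)
```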
8,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training at scale with AI Platform Training Service
Learning Objectives
Step1: Change your project name and bucket name in the cell below if necessary.
Step2: Confirm below that the bucket is regional and that its region matches the specified region
Step3: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell
Step4: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
Step5: Make the validation dataset be 1/10 the size of the training dataset.
Step6: Export the tables as CSV files
Step7: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes
Step8: Move code into a Python package
The first thing to do is to convert your training code snippets into a regular Python package.
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
Create the package directory
Our package directory contains 3 files
Step10: Paste existing code into model.py
A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file.
In the cell below, we write the contents of the cell into model.py packaging the model we
developed in the previous labs so that we can deploy it to AI Platform Training Service.
Step12: Modify code to read data from and write checkpoint files to GCS
If you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.)
This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.
We specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.
Step13: Run trainer module package locally
Now we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step.
Step14: Run your training package on Cloud AI Platform
Once the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service
Step15: (Optional) Run your training package using Docker container
AI Platform Training also supports training in custom containers, allowing users to bring their own Docker containers with any pre-installed ML framework or algorithm to run on AI Platform Training.
In this last section, we'll see how to submit a Cloud training job using a customized Docker image.
Containerizing our ./taxifare/trainer package involves 3 steps
Step16: Remark | Python Code:
from google import api_core
from google.cloud import bigquery
Explanation: Training at scale with AI Platform Training Service
Learning Objectives:
1. Learn how to organize your training code into a Python package
1. Train your model using cloud infrastructure via Google Cloud AI Platform Training Service
1. (optional) Learn how to run your training package using Docker containers and push training Docker images to a Docker registry
Introduction
In this notebook we'll make the jump from training locally to training in the cloud. We'll take advantage of Google Cloud's AI Platform Training Service.
AI Platform Training Service is a managed service that allows the training and deployment of ML models without having to provision or maintain servers. The infrastructure is handled seamlessly by the managed service for us.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
OUTDIR = f"gs://{BUCKET}/taxifare/data"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env OUTDIR=$OUTDIR
%env TFVERSION=2.5
Explanation: Change your project name and bucket name in the cell below if necessary.
End of explanation
%%bash
gsutil ls -Lb gs://$BUCKET | grep "gs://\|Location"
echo $REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai_platform/region $REGION
Explanation: Confirm below that the bucket is regional and that its region matches the specified region:
End of explanation
bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except api_core.exceptions.Conflict:
print("Dataset already exists")
Explanation: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
End of explanation
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Explanation: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
End of explanation
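The FARM_FINGERPRINT(...) % 1000 = 1 filter in the query above is deterministic sampling: hash a key field, keep one residue class. The same idea in plain Python, with a stdlib hash standing in for FarmHash:

```python
import hashlib

def in_split(key, modulus, residue):
    # Hash the key deterministically and keep rows landing on `residue`
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % modulus == residue

# Hypothetical timestamp-like keys; same keys always land in the same bucket
keys = [f"2015-01-01 00:00:{s:02d} ride {i}" for i in range(20) for s in range(50)]
sample = [k for k in keys if in_split(k, 10, 1)]
```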
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Explanation: Make the validation dataset be 1/10 the size of the training dataset.
End of explanation
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
Explanation: Export the tables as CSV files
End of explanation
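The export writes numbered shards, which is why the rest of the notebook refers to them with wildcards; patterns like taxi-train-*.csv match every shard:

```python
import fnmatch

shards = [
    "taxi-train-000000000000.csv",
    "taxi-train-000000000001.csv",
    "taxi-valid-000000000000.csv",
]
train_files = fnmatch.filter(shards, "taxi-train-*.csv")
```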
!gsutil ls gs://$BUCKET/taxifare/data
Explanation: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
Upload data to Google Cloud Storage
Move code into a trainer Python package
Submit training job with gcloud to train on AI Platform
Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
To do this, run the notebook 0_export_data_from_bq_to_gcs.ipynb, which will export the taxifare data from BigQuery directly into a GCS bucket. If everything ran smoothly, you should be able to list the data bucket by running the following command:
End of explanation
ls ./taxifare/trainer/
Explanation: Move code into a Python package
The first thing to do is to convert your training code snippets into a regular Python package.
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
Create the package directory
Our package directory contains 3 files:
End of explanation
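A package in this sense can be scaffolded with nothing but empty files — an empty __init__.py is what marks the directory as importable. A scratch directory is used here; the real package lives at ./taxifare/trainer:

```python
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "taxifare" / "trainer"
pkg.mkdir(parents=True)
for name in ("__init__.py", "model.py", "task.py"):
    (pkg / name).touch()  # empty files are enough to form the package layout

contents = sorted(p.name for p in pkg.iterdir())
```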
%%writefile ./taxifare/trainer/model.py
"""Data prep, train and evaluate DNN model."""
import datetime
import logging
import os
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import activations, callbacks, layers, models
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
# inputs are all float except for pickup_datetime which is a string
STRING_COLS = ["pickup_datetime"]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
def features_and_labels(row_data):
for unwanted_col in ["key"]:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if not isinstance(s, str):
s = s.numpy().decode("utf-8")
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff * londiff + latdiff * latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, numeric_cols, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed["pickup_datetime"]
feature_columns = {
colname: fc.numeric_column(colname) for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ["pickup_longitude", "dropoff_longitude"]:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78) / 8.0, name=f"scale_{lon_col}"
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ["pickup_latitude", "dropoff_latitude"]:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37) / 8.0, name=f"scale_{lat_col}"
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed["euclidean"] = layers.Lambda(euclidean, name="euclidean")(
[
inputs["pickup_longitude"],
inputs["pickup_latitude"],
inputs["dropoff_longitude"],
inputs["dropoff_latitude"],
]
)
feature_columns["euclidean"] = fc.numeric_column("euclidean")
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed["hourofday"] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32
),
name="hourofday",
)(inputs["pickup_datetime"])
feature_columns["hourofday"] = fc.indicator_column(
fc.categorical_column_with_identity("hourofday", num_buckets=24)
)
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns["pickup_latitude"], latbuckets
)
b_dlat = fc.bucketized_column(
feature_columns["dropoff_latitude"], latbuckets
)
b_plon = fc.bucketized_column(
feature_columns["pickup_longitude"], lonbuckets
)
b_dlon = fc.bucketized_column(
feature_columns["dropoff_longitude"], lonbuckets
)
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns["pickup_and_dropoff"] = fc.embedding_column(pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr, string_cols):
numeric_cols = set(CSV_COLUMNS) - {LABEL_COLUMN, "key"} - set(string_cols)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in numeric_cols
}
inputs.update(
{
colname: layers.Input(name=colname, shape=(), dtype="string")
for colname in string_cols
}
)
# transforms
transformed, feature_columns = transform(inputs, numeric_cols, nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation="relu", name=f"h{layer}")(x)
output = layers.Dense(1, name="fare")(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss="mse", metrics=[rmse, "mse"])
return model
def train_and_evaluate(hparams):
batch_size = hparams["batch_size"]
nbuckets = hparams["nbuckets"]
lr = hparams["lr"]
nnsize = hparams["nnsize"]
eval_data_path = hparams["eval_data_path"]
num_evals = hparams["num_evals"]
num_examples_to_train_on = hparams["num_examples_to_train_on"]
output_dir = hparams["output_dir"]
train_data_path = hparams["train_data_path"]
timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
savedmodel_dir = os.path.join(output_dir, "export/savedmodel")
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, "checkpoints")
tensorboard_path = os.path.join(output_dir, "tensorboard")
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr, STRING_COLS)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path, save_weights_only=True, verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb],
)
# Exporting the model with default serving function.
model.save(model_export_path)
return history
Explanation: Paste existing code into model.py
A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file.
In the cell below, we write the contents of the cell into model.py packaging the model we
developed in the previous labs so that we can deploy it to AI Platform Training Service.
End of explanation
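The euclidean helper in model.py is ordinary coordinate geometry; the same formula checks out in plain Python:

```python
import math

def euclidean(lon1, lat1, lon2, lat2):
    # Mirrors model.py's euclidean(): straight-line distance in raw lon/lat units
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
    return math.sqrt(londiff * londiff + latdiff * latdiff)

d = euclidean(0.0, 0.0, 3.0, 4.0)  # the classic 3-4-5 triangle
```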
%%writefile taxifare/trainer/task.py
"""Argument definitions for model training code in `trainer.model`."""
import argparse
from trainer import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32,
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True,
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
nargs="+",
type=int,
default=[32, 8],
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10,
)
parser.add_argument(
"--lr", help="learning rate for optimizer", type=float, default=0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5,
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100,
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True,
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True,
)
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk",
)
args = parser.parse_args()
hparams = args.__dict__
hparams.pop("job-dir", None)
model.train_and_evaluate(hparams)
Explanation: Modify code to read data from and write checkpoint files to GCS
If you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.)
This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.
We specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.
End of explanation
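The argparse-to-dict handoff in task.py, reduced to its essentials with a simulated command line and two of the real flags:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--num_evals", type=int, default=5)

# parse_args normally reads sys.argv; passing a list simulates the CLI
args = parser.parse_args(["--batch_size", "64"])
hparams = vars(args)  # equivalent to args.__dict__ in task.py
```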
%%bash
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
test ${OUTPUT_DIR} && rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8
Explanation: Run trainer module package locally
Now we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step.
End of explanation
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%Y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=100
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
#TODO 2
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
Explanation: Run your training package on Cloud AI Platform
Once the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:
- jobid: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- region: Cloud region to train in. See here for supported AI Platform Training Service regions
The arguments before -- \ are for AI Platform Training Service.
The arguments after -- \ are sent to our task.py.
Because this is on the entire dataset, it will take a while. You can monitor the job from the GCP console in the Cloud AI Platform section.
End of explanation
%%writefile ./taxifare/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-5:latest
# TODO 3
COPY . /code
WORKDIR /code
ENTRYPOINT ["python3", "-m", "trainer.task"]
PROJECT_DIR = !cd ./taxifare &&pwd
PROJECT_DIR = PROJECT_DIR[0]
IMAGE_NAME = "taxifare_training_container"
DOCKERFILE = f"{PROJECT_DIR}/Dockerfile"
IMAGE_URI = f"gcr.io/{PROJECT}/{IMAGE_NAME}"
%env PROJECT_DIR=$PROJECT_DIR
%env IMAGE_NAME=$IMAGE_NAME
%env DOCKERFILE=$DOCKERFILE
%env IMAGE_URI=$IMAGE_URI
!docker build $PROJECT_DIR -f $DOCKERFILE -t $IMAGE_URI
!docker push $IMAGE_URI
Explanation: (Optional) Run your training package using Docker container
AI Platform Training also supports training in custom containers, allowing users to bring their own Docker containers with any pre-installed ML framework or algorithm to run on AI Platform Training.
In this last section, we'll see how to submit a Cloud training job using a customized Docker image.
Containerizing our ./taxifare/trainer package involves 3 steps:
Writing a Dockerfile in ./taxifare
Building the Docker image
Pushing it to the Google Cloud container registry in our GCP project
The Dockerfile specifies
1. How the container needs to be provisioned so that all the dependencies in our code are satisfied
2. Where to copy our trainer Package in the container
3. What command to run when the container is run (the ENTRYPOINT line)
End of explanation
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model
JOBID=taxifare_container_$(date -u +%Y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=100
NBUCKETS=10
NNSIZE="32 8"
# AI-Platform machines to use for training
MACHINE_TYPE=n1-standard-4
SCALE_TIER=CUSTOM
# GCS paths.
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
IMAGE_NAME=taxifare_training_container
IMAGE_URI=gcr.io/$PROJECT/$IMAGE_NAME
gcloud ai-platform jobs submit training $JOBID \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--master-machine-type=$MACHINE_TYPE \
--scale-tier=$SCALE_TIER \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--nnsize $NNSIZE
Explanation: Remark: If you prefer to build the container image from the command line, we have written a script for that ./taxifare/scripts/build.sh. This script reads its configuration from the file ./taxifare/scripts/env.sh. You can configure these arguments the way you want in that file. You can also simply type make build from within ./taxifare to build the image (which will invoke the build script). Similarly, we wrote the script ./taxifare/scripts/push.sh to push the Docker image, which you can also trigger by typing make push from within ./taxifare.
Train using a custom container on AI Platform
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:
- jobname: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- master-image-uri: The URI of the Docker image we pushed to the Google Cloud container registry
- region: Cloud region to train in. See here for supported AI Platform Training Service regions
The arguments before -- \ are for AI Platform Training Service.
The arguments after -- \ are sent to our task.py.
You can track your job and view logs using cloud console.
End of explanation |
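The flags after `-- \` are consumed by `trainer/task.py`, which is not shown in this notebook. Below is a minimal sketch of how such a module might parse them with `argparse`; the flag names are taken from the gcloud command above, but the types and defaults are assumptions:

```python
import argparse

def parse_args(argv):
    # Flags mirror those passed after `-- \` in the gcloud command above.
    parser = argparse.ArgumentParser()
    parser.add_argument('--train_data_path', required=True)
    parser.add_argument('--eval_data_path', required=True)
    parser.add_argument('--output_dir', required=True)
    parser.add_argument('--batch_size', type=int, default=32)
    parser.add_argument('--num_examples_to_train_on', type=int, default=100)
    parser.add_argument('--num_evals', type=int, default=10)
    parser.add_argument('--nbuckets', type=int, default=10)
    parser.add_argument('--lr', type=float, default=0.001)
    # NNSIZE="32 8" expands to two tokens in bash, hence nargs='+'
    parser.add_argument('--nnsize', nargs='+', type=int, default=[32, 8])
    return parser.parse_args(argv)

args = parse_args([
    '--train_data_path', 'gs://my-bucket/taxifare/data/taxi-train*',
    '--eval_data_path', 'gs://my-bucket/taxifare/data/taxi-valid*',
    '--output_dir', 'gs://my-bucket/taxifare/trained_model',
    '--batch_size', '50',
    '--nnsize', '32', '8',
])
```

The bucket paths above are placeholders, not real resources.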
8,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 24
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: In this chapter we model systems that involve rotating objects.
Rotation
Rotation is complicated
Step2: Rmin and Rmax are the initial and final values for the radius, r.
L is the total length of the paper. t_end is the length of the
simulation in time, and dt is the time step for the ODE solver.
We use the Params object to make a System object
Step3: The initial state contains three variables, theta, y, and r.
estimate_k computes the parameter, k, that relates theta and r.
Here's how it works
Step4: Ravg is the average radius, half way between Rmin and Rmax, so
Cavg is the circumference of the roll when r is Ravg.
revs is the total number of revolutions it would take to roll up
length L if r were constant at Ravg. And rads is just revs
converted to radians.
Finally, k is the change in r for each radian of revolution. For
these parameters, k is about 2.8e-5 m/rad.
Step5: Now we can use the differential equations from the previous section to
write a slope function
Step6: As usual, the slope function takes a State object, a time, and a
System object. The State object contains hypothetical values of
theta, y, and r at time t. The job of the slope function is to
compute the time derivatives of these values. The derivative of theta is angular velocity, which is often denoted omega.
And as usual, we'll test the slope function with the initial conditions.
Step7: We'd like to stop the simulation when the length of paper on the roll is L. We can do that with an event function that passes through 0 when y equals L
Step8: Now we can run the simulation like this
Step9: Here are the last few time steps.
Step10: At $\omega = 300$ rad/s, the time it takes to complete one roll is about 4.2 seconds, which is consistent with what we see in the video.
Step11: The final value of y is 47 meters, as expected.
Step12: The final value of radius is Rmax.
Step13: And the total number of rotations is close to 200, which seems plausible.
Step14: As an exercise, we'll see how fast the paper is moving. But first, let's take a closer look at the results.
Plotting
Here's what theta looks like over time.
Step15: theta grows linearly, as we should expect with constant angular velocity.
Here's what r looks like over time.
Step16: r also increases linearly.
But since the derivative of y depends on r, and r is increasing, y grows with increasing slope.
Step17: Because this system is so simple, it is almost silly to simulate it; as we'll see in the next section, it is easy enough to solve the
differential equations analytically.
However, it is often useful to start with simulation as a way of exploring and checking assumptions.
Analysis
The differential equations in Section xx are simple enough that we can just solve them. Since angular velocity is constant
Step18: In this case the estimate turns out to be exact.
Summary
Exercises
Exercise
Step19: With constant angular velocity, linear velocity is increasing, reaching its maximum at the end.
Step21: Now suppose this peak velocity is the limiting factor; that is, we can't move the paper any faster than that.
Nevertheless, we might be able to speed up the process by keeping the linear velocity at the maximum all the time.
Write a slope function that keeps the linear velocity, dydt, constant, and computes the angular velocity, omega, accordingly.
Run the simulation and see how much faster we could finish rolling the paper. | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 24
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from modsim import units
m = units.meter
rad = units.radian
s = units.second
from modsim import Params
params = Params(Rmin = 0.02 * m,
Rmax = 0.055 * m,
L = 47 * m,
theta_0 = 0 * rad,
y_0 = 0 * m,
omega = 300 * rad / s,
t_end = 130 * s
)
Explanation: In this chapter we model systems that involve rotating objects.
Rotation
Rotation is complicated: in three dimensions, objects can rotate around three axes; objects are often easier to spin around some
axes than others; and they may be stable when spinning around some axes but not others.
If the configuration of an object changes over time, it might become
easier or harder to spin, which explains the surprising dynamics of
gymnasts, divers, ice skaters, etc.
And when you apply a twisting force to a rotating object, the effect is often contrary to intuition. For an example, see this video on
gyroscopic precession http://modsimpy.com/precess.
In this chapter, we will not take on the physics of rotation in all its glory. Rather, we will focus on simple scenarios where all rotation and all twisting forces are around a single axis. In that case, we can treat some vector quantities as if they were scalars (in the same way that we sometimes treat velocity as a scalar with an implicit direction).
This approach makes it possible to simulate and analyze many interesting systems, but you will also encounter systems that would be better approached with the more general toolkit.
The fundamental ideas in this chapter and the next are angular
velocity, angular acceleration, torque, and moment of
inertia. If you are not already familiar with these concepts, I will
define them as we go along, and I will point to additional reading.
As a case study, you can use these tools to simulate the behavior of a yo-yo (see http://modsimpy.com/yoyo). But we'll work our way up to it gradually, starting with toilet paper.
The physics of toilet paper
As a simple example of a system with rotation, we'll simulate the
manufacture of a roll of toilet paper, as shown in this video https://youtu.be/Z74OfpUbeac?t=231. Starting with a cardboard tube at the center, we will roll up 47 m of paper, the typical length of a roll of toilet paper in the U.S. (see http://modsimpy.com/paper).
This figure shows a diagram of the system: $r$ represents
the radius of the roll at a point in time. Initially, $r$ is the radius of the cardboard core, $R_{min}$. When the roll is complete, $r$ is $R_{max}$.
I'll use $\theta$ to represent the total rotation of the roll in
radians. In the diagram, $d\theta$ represents a small increase in
$\theta$, which corresponds to a distance along the circumference of the roll of $r~d\theta$.
Finally, I'll use $y$ to represent the total length of paper that's been rolled. Initially, $\theta=0$ and $y=0$. For each small increase in $\theta$, there is a corresponding increase in $y$:
$$dy = r~d\theta$$
If we divide both sides by a small increase in time, $dt$, we get a
differential equation for $y$ as a function of time.
$$\frac{dy}{dt} = r \frac{d\theta}{dt}$$
As we roll up the paper, $r$ increases, too. Assuming that $r$ increases by a fixed amount per revolution, we can write
$$dr = k~d\theta$$
Where $k$ is an unknown constant we'll have to figure out. Again, we can divide both sides by $dt$ to get a differential equation in time:
$$\frac{dr}{dt} = k \frac{d\theta}{dt}$$
Finally, let's assume that $\theta$ increases at a constant rate of $\omega = 300$ rad/s (about 2900 revolutions per minute):
$$\frac{d\theta}{dt} = \omega$$
This rate of change is called an angular velocity. Now we have a system of three differential equations we can use to simulate the system.
Implementation
At this point we have a pretty standard process for writing simulations like this.
First we'll create a Params object with the parameters of the system:
End of explanation
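Before handing these equations to modsim, here is a dependency-free sanity check: a forward-Euler integration of the three differential equations, using the parameter values above. The closed-form expression for k anticipates the Analysis section below.

```python
# Forward-Euler sketch of the three coupled ODEs:
#   dtheta/dt = omega,  dy/dt = r * omega,  dr/dt = k * omega
Rmin, Rmax, L = 0.02, 0.055, 47.0   # meters
omega = 300.0                       # rad/s, constant angular velocity
k = (Rmax**2 - Rmin**2) / (2 * L)   # m/rad (derived in the Analysis section)

dt = 1e-4
theta = y = t = 0.0
r = Rmin
while y < L:
    theta += omega * dt
    y += r * omega * dt
    r += k * omega * dt
    t += dt

print(round(t, 2), round(r, 4))  # roughly 4.18 s, ending near r = Rmax
```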
from modsim import State, System
def make_system(params):
init = State(theta = params.theta_0,
y = params.y_0,
r = params.Rmin
)
k = estimate_k(params)
return System(params,
init=init,
k=k,
)
Explanation: Rmin and Rmax are the initial and final values for the radius, r.
L is the total length of the paper. t_end is the length of the
simulation in time, and dt is the time step for the ODE solver.
We use the Params object to make a System object:
End of explanation
from numpy import pi
def estimate_k(params):
Rmin, Rmax, L = params.Rmin, params.Rmax, params.L
Ravg = (Rmax + Rmin) / 2
Cavg = 2 * pi * Ravg
revs = L / Cavg
rads = 2 * pi * revs
k = (Rmax - Rmin) / rads
return k
Explanation: The initial state contains three variables, theta, y, and r.
estimate_k computes the parameter, k, that relates theta and r.
Here's how it works:
End of explanation
system = make_system(params)
system.init
system.k
Explanation: Ravg is the average radius, half way between Rmin and Rmax, so
Cavg is the circumference of the roll when r is Ravg.
revs is the total number of revolutions it would take to roll up
length L if r were constant at Ravg. And rads is just revs
converted to radians.
Finally, k is the change in r for each radian of revolution. For
these parameters, k is about 2.8e-5 m/rad.
End of explanation
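The same arithmetic can be redone without units, as a quick sanity check of the quoted figure of about 2.8e-5 m/rad:

```python
import math

Rmin, Rmax, L = 0.02, 0.055, 47.0   # same parameter values, without units
Ravg = (Rmax + Rmin) / 2            # 0.0375 m, the average radius
Cavg = 2 * math.pi * Ravg           # circumference at the average radius
revs = L / Cavg                     # about 199.5 revolutions
rads = 2 * math.pi * revs           # total rotation in radians
k = (Rmax - Rmin) / rads            # change in r per radian
print(k)                            # about 2.8e-5
```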
def slope_func(t, state, system):
theta, y, r = state
k, omega = system.k, system.omega
dydt = r * omega
drdt = k * omega
return omega, dydt, drdt
Explanation: Now we can use the differential equations from the previous section to
write a slope function:
End of explanation
slope_func(0, system.init, system)
Explanation: As usual, the slope function takes a State object, a time, and a
System object. The State object contains hypothetical values of
theta, y, and r at time t. The job of the slope function is to
compute the time derivatives of these values. The derivative of theta is angular velocity, which is often denoted omega.
And as usual, we'll test the slope function with the initial conditions.
End of explanation
def event_func(t, state, system):
theta, y, r = state
return y - system.L
event_func(0, system.init, system)
Explanation: We'd like to stop the simulation when the length of paper on the roll is L. We can do that with an event function that passes through 0 when y equals L:
End of explanation
from modsim import run_solve_ivp
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
Explanation: Now we can run the simulation like this:
End of explanation
results.tail()
Explanation: Here are the last few time steps.
End of explanation
results.index[-1]
Explanation: At $\omega = 300$ rad/s, the time it takes to complete one roll is about 4.2 seconds, which is consistent with what we see in the video.
End of explanation
final_state = results.iloc[-1]
print(final_state.y, params.L)
Explanation: The final value of y is 47 meters, as expected.
End of explanation
print(final_state.r, params.Rmax)
Explanation: The final value of radius is Rmax.
End of explanation
radians = final_state.theta
rotations = radians / 2 / pi
rotations
Explanation: And the total number of rotations is close to 200, which seems plausible.
End of explanation
from modsim import decorate
def plot_theta(results):
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
plot_theta(results)
Explanation: As an exercise, we'll see how fast the paper is moving. But first, let's take a closer look at the results.
Plotting
Here's what theta looks like over time.
End of explanation
def plot_r(results):
results.r.plot(color='C2', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (m)')
plot_r(results)
Explanation: theta grows linearly, as we should expect with constant angular velocity.
Here's what r looks like over time.
End of explanation
def plot_y(results):
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
plot_y(results)
Explanation: r also increases linearly.
But since the derivative of y depends on r, and r is increasing, y grows with increasing slope.
End of explanation
k = (params.Rmax**2 - params.Rmin**2) / (2 * params.L)
print(k, system.k)
Explanation: Because this system is so simple, it is almost silly to simulate it; as we'll see in the next section, it is easy enough to solve the
differential equations analytically.
However, it is often useful to start with simulation as a way of exploring and checking assumptions.
Analysis
The differential equations in Section xx are simple enough that we can just solve them. Since angular velocity is constant:
$$\frac{d\theta}{dt} = \omega$$
We can find $\theta$ as a function of time by integrating both sides:
$$\theta(t) = \omega t$$
Similarly, we can solve this equation
$$\frac{dr}{dt} = k \omega$$
to find
$$r(t) = k \omega t + R_{min}$$
Then we can plug the solution for $r$ into the equation for $y$:
$$\begin{aligned}
\frac{dy}{dt} & = r \omega \\
& = \left[ k \omega t + R_{min} \right] \omega \nonumber\end{aligned}$$
Integrating both sides yields:
$$y(t) = \left[ k \omega t^2 / 2 + R_{min} t \right] \omega$$
So $y$ is a parabola, as you might have guessed.
We can also use these equations to find the relationship between $y$ and $r$, independent of time, which we can use to compute $k$. Using a move we saw in Section xxx, I'll divide Equations 1 and
2, yielding
$$\frac{dr}{dy} = \frac{k}{r}$$
Separating variables yields
$$r~dr = k~dy$$
Integrating both sides yields
$$r^2 / 2 = k y + C$$
When $y=0$, $r=R_{min}$, so
$$C = \frac{1}{2} R_{min}^2$$
Solving for $y$, we have
$$y = \frac{1}{2k} (r^2 - R_{min}^2) \label{eqn3}$$
When $y=L$, $r=R_{max}$; substituting in those values yields
$$L = \frac{1}{2k} (R_{max}^2 - R_{min}^2)$$
Solving for $k$ yields
$$k = \frac{1}{2L} (R_{max}^2 - R_{min}^2) \label{eqn4}$$
Plugging in the values of the parameters yields 2.8e-5 m/rad, the same as the "estimate" we computed in Section xxx.
End of explanation
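The closed-form solutions can be cross-checked numerically without modsim: with $r(t) = k \omega t + R_{min}$, the expression for $y(t)$ and the $y$-$r$ relationship derived above must agree at every $t$ (parameter values as before):

```python
Rmin, Rmax, L = 0.02, 0.055, 47.0
omega = 300.0
k = (Rmax**2 - Rmin**2) / (2 * L)

t = 2.0                                           # any time during roll-up
r_t = k * omega * t + Rmin                        # r(t)
y_t = (k * omega * t**2 / 2 + Rmin * t) * omega   # y(t)
y_from_r = (r_t**2 - Rmin**2) / (2 * k)           # y as a function of r
print(y_t, y_from_r)                              # the two values agree
```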
# Solution
from modsim import gradient
dydt = gradient(results.y);
# Solution
dydt.plot(label='dydt')
decorate(xlabel='Time (s)',
ylabel='Linear velocity (m/s)')
Explanation: In this case the estimate turns out to be exact.
Summary
Exercises
Exercise: Since we keep omega constant, the linear velocity of the paper increases with radius. We can use gradient to estimate the derivative of results.y.
End of explanation
max_linear_velocity = dydt.iloc[-1]
max_linear_velocity
Explanation: With constant angular velocity, linear velocity is increasing, reaching its maximum at the end.
End of explanation
# Solution
def slope_func(t, state, system):
    """Computes the derivatives of the state variables.

    state: State object with theta, y, r
    t: time
    system: System object with k, omega and linear_velocity

    returns: sequence of derivatives
    """
theta, y, r = state
k, omega = system.k, system.omega
dydt = system.linear_velocity
omega = dydt / r
drdt = k * omega
return omega, dydt, drdt
# Solution
system.linear_velocity = max_linear_velocity
slope_func(0, system.init, system)
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
# Solution
t_final = results.index[-1]
t_final
# Solution
plot_theta(results)
# Solution
plot_r(results)
# Solution
plot_y(results)
Explanation: Now suppose this peak velocity is the limiting factor; that is, we can't move the paper any faster than that.
Nevertheless, we might be able to speed up the process by keeping the linear velocity at the maximum all the time.
Write a slope function that keeps the linear velocity, dydt, constant, and computes the angular velocity, omega, accordingly.
Run the simulation and see how much faster we could finish rolling the paper.
End of explanation |
8,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Options Widgets
Herein, we present the widgets that can be used as components in order to assemble higher level widgets, such as the ones presented in Menpo Widgets.ipynb and MenpoFit Widgets.ipynb. Those widgets live in menpowidgets.options and menpowidgets.menpofit.options. Specifically we split this notebook in the following subsections
Step1: <a name="sec
Step2: We can replace the render_function() with a new one as follows
Step3: The style of the widget can also be changed to a predefined theme
Step4: Finally, the options of the widget can be updated by using the `set_widget_state() function as
Step5: <a name="sec
Step6: The state of the widget can be updated for a new Image object as
Step7: The widget has the ability to remember the image categories that it has already seen by creating a key name based on the properties. The key has the format
Step8: When an unseen object is passed in, then the widget automatically assigns the following default options
Step9: <a name="sec
Step10: The options for a new object can be defined as
Step11: Thus, until now the widget remembers the following objects
Step12: Note that if I pass in an object of the same category as the first one, then it gets the options we had selected and the memory dict does not chage
Step13: The default options that get assigned to an unseen object are
Step14: <a name="sec
Step15: Let's replace the render function
Step16: Now, let's assume we have a new LandmarkManager object
Step17: Once again, the objects that the widget has seen until now can be retrieved as
Step18: whereas the default options that an unseen object gets assigned are
Step19: The predefined style of the widget can be changed at any time as
Step20: <a name="sec
Step21: Let's now create and display the widget
Step22: The selected_values dictionary has the following keys
Step23: and there is a dict with options that corresponds to each key. Additionally, in case there are more than one labels, the user can define a different colour per label (marker_face_colour, marker_edge_colour and line_colour) which are returned in a list. The rest of the options are common for all labels.
The render function can be easily replaced as
Step24: Finally, the state of the widget can be updated with a new object as follows
Step25: <a name="sec
Step26: <a name="sec
Step27: Note that the visibility of the animation buttons and the variance button can be controlled by the animation_visible and plot_variance_visible arguments, respectively.
Let's now update the state of the widget
Step28: Finally, let's create an instance of the widget with a single slider (mode = 'single').
Step29: <a name="sec
Step30: Now, let's update the widget state with a new Result object that does not have the initial shape and the image object
Step31: <a name="sec
Step32: Let's now update the state of the widget with a Result object that has no iterations. Note that the Iterations tab is now empty.
Step33: <a name="sec
Step34: Of course the widget text can be updated as
Step35: <a name="sec
Step36: The actaul features function and options are stored in
Step37: <a name="sec
Step38: Then the widget can be called as | Python Code:
from menpowidgets.options import (AnimationOptionsWidget, ChannelOptionsWidget, PatchOptionsWidget,
LandmarkOptionsWidget, RendererOptionsWidget, PlotOptionsWidget,
LinearModelParametersWidget, TextPrintWidget, FeatureOptionsWidget,
SaveFigureOptionsWidget)
from menpowidgets.menpofit.options import ResultOptionsWidget, IterativeResultOptionsWidget
from menpo.visualize import print_dynamic
Explanation: Options Widgets
Herein, we present the widgets that can be used as components in order to assemble higher level widgets, such as the ones presented in Menpo Widgets.ipynb and MenpoFit Widgets.ipynb. Those widgets live in menpowidgets.options and menpowidgets.menpofit.options. Specifically we split this notebook in the following subsections:
1. Basics
2. Widgets with Memory
3. Animation Options
4. Channels Options
5. Patches Options
6. Landmarks Options
7. Renderer Options
8. Plot Options
9. Linear Model Parameters
10. Result Options
11. Iterative Result Options
12. Text Print
13. Feature Options
14. Save Figure Options
<a name="sec:basics"></a>1. Basics
As explained in the Introduction.ipynb notebook and similar to the Widgets Tools.ipynb, all the widgets presented here are subclasses of menpo.abstract.MenpoWidget, thus they follow the same rules, which are:
They expect as input the rendering callback function.
They implement add_render_function(), remove_render_function(), replace_render_function() and call_render_function().
They implement set_widget_state(), which updates the widget state with the properties of a new object.
They implement style() which takes a set of options that change the style of the widget, such as font-related options, border-related options, etc.
The only difference from the widgets in menpowidgets.tools (explained in Widgets Tools.ipynb) is that these widgets also implement:
* predefined_style() which sets a predefined theme on the widget. Possible themes are 'minimal', 'success', 'info', 'warning' and 'danger'.
<a name="sec:basics"></a>2. Widgets with Memory
All the widgets of this notebook have memory. Specifically, they have the ability to recognize objects with the same properties and use the same options. This becomes more clear in the Main Widgets.ipynb notebook.
However, in order to make this more clear, assume the following simplistic scenario:
Assume that we have a set of images to render. We are using a widget that allows us to browse through the image objects, one at a time (e.g. AnimationOptionsWidget), as well as a widget for selecting options regarding their channels (i.e. ChannelOptionsWidget). Every time we get a new object, we need to use some rendering options. However, it is not possible to use the options from the previous object, because they may not apply to the current one due to different properties. Therefore, the widget is smart enough to encode the image objects based on their properties (i.e. n_channels, is_masked); if the current object category has been seen before, then the corresponding selected options are applied. Otherwise, if the current object has not been seen before, it gets assigned some default options.
The above description means that the widgets remember the options that correspond to object categories and augment their memory as more new objects come in.
Before presenting each widget separately, let's first import the things that are required.
End of explanation
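The memory mechanism described above can be sketched in a few lines of plain Python. This is an illustration of the idea only, not the actual menpowidgets implementation; the class and method names below are made up:

```python
class OptionsMemory:
    """Toy version of the widgets-with-memory idea (illustrative only)."""

    def __init__(self, default_factory):
        self.default_factory = default_factory  # builds defaults for unseen keys
        self.default_options = {}               # key -> currently selected options

    def options_for(self, n_channels, image_is_masked):
        # Objects are encoded by their defining properties, e.g. '3_True'
        key = '{}_{}'.format(n_channels, image_is_masked)
        if key not in self.default_options:
            # Unseen object category: assign default options
            self.default_options[key] = self.default_factory(n_channels,
                                                             image_is_masked)
        return self.default_options[key]

memory = OptionsMemory(lambda n, masked: {'channels': [0],
                                          'masked_enabled': masked})
opts = memory.options_for(3, True)     # unseen category -> gets defaults
opts['channels'] = [0, 1, 2]           # the user changes an option
again = memory.options_for(3, True)    # seen again -> remembered options
```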
# Initial options
index = {'min': 0,
'max': 100,
'step': 1,
'index': 10}
# Render function
def render_function(change):
print_dynamic('{}'.format(change['new']))
# Create widget
anim_wid = AnimationOptionsWidget(index,
index_style='buttons',
render_function=render_function,
style='info')
# Display widget
anim_wid
Explanation: <a name="sec:animation"></a>3. Animation Options
The aim of this widget is to allow the user to browse through a set of objects (e.g. images, shapes, etc.). Thus, it provides the ability to select an index by some controllers (e.g. slider or buttons). It also provides the ability to play an animation of the objects. The functionality of the buttons is the following:
<i class="fa fa-plus"></i> Next object.<br>
<i class="fa fa-minus"></i> Previous object.<br>
<i class="fa fa-play"></i> Start the animation playback.<br>
<i class="fa fa-stop"></i> Stop the animation playback.<br>
<i class="fa fa-fast-forward"></i> Increase the animation's speed.<br>
<i class="fa fa-fast-backward"></i> Decrease the animation's speed.<br>
<i class="fa fa-repeat"></i> Repeat mode is enabled.<br>
<i class="fa fa-long-arrow-right"></i> Repeat mode is disabled.
The initial options are defined as a dict. We also define a render_function() that prints the selected options.
End of explanation
def new_render_function(change):
print_dynamic('This is the new function. Index = {}'.format(anim_wid.selected_values))
anim_wid.replace_render_function(new_render_function)
Explanation: We can replace the render_function() with a new one as follows:
End of explanation
anim_wid.predefined_style('warning')
Explanation: The style of the widget can also be changed to a predefined theme
End of explanation
anim_wid.set_widget_state({'min': 0, 'max': 20, 'step': 2, 'index': 16},
allow_callback=False)
Explanation: Finally, the options of the widget can be updated by using the `set_widget_state()` function as
End of explanation
# Render function
def render_function(change):
    s = "Channels: {}. Glyph: {}. Masked: {}".format(change['new']['channels'],
change['new']['glyph_enabled'],
change['new']['masked_enabled'])
print_dynamic(s)
# Create widget
chan_wid = ChannelOptionsWidget(n_channels=3, image_is_masked=True,
render_function=render_function, style='danger')
# Display widget
chan_wid
Explanation: <a name="sec:channels"></a>4. Channels Options
The aim of this widget is to allow the user to select options related to the channels of an Image.
It is assumed that an Image object is uniquely described by the following properties:
1. n_channels: The Image's number of channels.
2. image_is_masked: True if the object is a MaskedImage.
Let us define a render_function() that prints the selected channels to be visualized along with the masked_enabled and glyph_enabled flags and create the widget.
End of explanation
chan_wid.set_widget_state(n_channels=36, image_is_masked=True, allow_callback=True)
Explanation: The state of the widget can be updated for a new Image object as
End of explanation
chan_wid.default_options
Explanation: The widget has the ability to remember the image categories that it has already seen by creating a key name based on the properties. The key has the format:
python
'{}_{}'.format(n_channels, image_is_masked)
Consequently, until now, the objects that the widget has seen with their corresponding options are:
End of explanation
chan_wid.get_default_options(n_channels=100, image_is_masked=False)
Explanation: When an unseen object is passed in, then the widget automatically assigns the following default options:
End of explanation
# Render function
def render_function(change):
s = "Patches: {}. Offset: {}. Background: {}. BBoxes: {}. Centers: {}".format(
pat_wid.selected_values['patches_indices'], pat_wid.selected_values['offset_index'],
pat_wid.selected_values['background'], pat_wid.selected_values['render_patches_bboxes'],
pat_wid.selected_values['render_centers'])
print_dynamic(s)
# Create widget
pat_wid = PatchOptionsWidget(n_patches=68, n_offsets=3, render_function=render_function, style='info')
pat_wid
Explanation: <a name="sec:patches"></a>5. Patches Options
The PatchOptionsWidget allows the selection of patches-related options, e.g. patches slicing, bounding boxes rendering, black/white background colour etc. It assumes that a patch-based image is uniquely defined by the following properties:
* n_patches: The number of patches.
* n_offsets: The number of offsets per patch.
Similar to the ChannelOptionsWidget, it has memory of the objects it has seen by assigning them a key of the following format:
python
'{}_{}'.format(n_patches, n_offsets)
For example
End of explanation
pat_wid.set_widget_state(n_patches=49, n_offsets=1, allow_callback=True)
Explanation: The options for a new object can be defined as:
End of explanation
print(pat_wid.default_options)
Explanation: Thus, until now the widget remembers the following objects:
End of explanation
print('Objects in memory (before): {}'.format(len(pat_wid.default_options)))
pat_wid.set_widget_state(n_patches=68, n_offsets=3)
print('\nObjects in memory (after): {}'.format(len(pat_wid.default_options)))
Explanation: Note that if I pass in an object of the same category as the first one, then it gets the options we had selected and the memory dict does not change:
End of explanation
pat_wid.get_default_options(n_patches=2, n_offsets=20)
Explanation: The default options that get assigned to an unseen object are:
End of explanation
# Render function
def render_function(change):
s = "Group: {}. Labels: {}.".format(land_wid.selected_values['group'], land_wid.selected_values['with_labels'])
print_dynamic(s)
# Initial object's properties
group_keys = ['PTS', 'ibug_face_68']
labels_keys = [['all'], ['jaw', 'eye']]
# Create widget
land_wid = LandmarkOptionsWidget(group_keys, labels_keys, render_function=render_function, style='success')
# Display widget
land_wid
Explanation: <a name="sec:landmarks"></a>6. Landmarks Options
The LandmarkOptionsWidget allows the selection of landmarks-related options, e.g. group, labels etc. It assumes that an object with landmarks (LandmarkManager) is uniquely defined by the following properties:
* group_keys: The list of LandmarkGroup names.
* labels_keys: A list with the list of labels per LandmarkGroup.
Of course, it has memory of the objects it has seen by assigning them a key of the following format:
python
"{}_{}".format(group_keys, labels_keys)
Let us define a rendering callback function and create the widget:
End of explanation
def render_function(change):
s = "Render: {}. Group: {}. Labels: {}.".format(land_wid.selected_values['render_landmarks'],
land_wid.selected_values['group'],
land_wid.selected_values['with_labels'])
print_dynamic(s)
land_wid.replace_render_function(render_function)
Explanation: Let's replace the render function:
End of explanation
land_wid.set_widget_state(group_keys=['PTS', 'other'], labels_keys=[['all'], ['land', 'marks']],
allow_callback=True)
Explanation: Now, let's assume we have a new LandmarkManager object:
End of explanation
land_wid.default_options
Explanation: Once again, the objects that the widget has seen until now can be retrieved as
End of explanation
land_wid.get_default_options(group_keys=['new'], labels_keys=[['object']])
Explanation: whereas the default options that an unseen object gets assigned are:
End of explanation
land_wid.predefined_style('warning')
Explanation: The predefined style of the widget can be changed at any time as:
End of explanation
# Widget's tabs
options_tabs = ['lines', 'markers', 'numbering', 'zoom_one', 'axes']
# Initial object's labels
labels = ['hello', 'world']
# Render function
def render_function(change):
print(change['new'])
Explanation: <a name="sec:renderer"></a>7. Renderer Options
The RendererOptionsWidget allows the selection of generic rendering options related to lines, markers, axes, legend, etc. It is a very powerful and flexible widget and it makes it very easy to select its parts. Its constructor requires two arguments:
options_tabs: It is a list that defines the nature as well as the ordering of the tabs of the widget. It can get the following values:
| Value | Returned widget | Description |
| --------------- |:------------------------ | :---------------- |
| 'lines' | LineOptionsWidget | Lines options |
| 'markers' | MarkerOptionsWidget | Markers options |
| 'numbering' | NumberingOptionsWidget | Numbering options |
| 'zoom_one' | ZoomOneScaleWidget | Single Zoom |
| 'zoom_two' | ZoomTwoScalesWidget | Zoom per axis |
| 'legend' | LegendOptionsWidget | Legend options |
| 'grid' | GridOptionsWidget | Grid options |
| 'image' | ImageOptionsWidget | Image options |
| 'axes' | AxesOptionsWidget | Axes options |
labels: This is a list that uniquely defines each new object. Thus, the unique keys that define the object categories have the format:
python
"{}".format(labels)
Let us define the options_tabs, as well as the labels parameters. The render function will be printing all the selected options.
End of explanation
rend_wid = RendererOptionsWidget(options_tabs, labels,
render_function=render_function, style='info', tabs_style='warning')
rend_wid
Explanation: Let's now create and display the widget:
End of explanation
print(rend_wid.selected_values.keys())
Explanation: The selected_values dictionary has the following keys:
End of explanation
def render_function(change):
print_dynamic("marker face colour: {}, line colour: {}, zoom: {:.1f}".format(
rend_wid.selected_values['markers']['marker_face_colour'][0],
rend_wid.selected_values['lines']['line_colour'][0], rend_wid.selected_values['zoom_one']))
rend_wid.replace_render_function(render_function)
Explanation: and there is a dict with options that corresponds to each key. Additionally, in case there are more than one labels, the user can define a different colour per label (marker_face_colour, marker_edge_colour and line_colour) which are returned in a list. The rest of the options are common for all labels.
The render function can be easily replaced as:
End of explanation
rend_wid.set_widget_state(labels=None, allow_callback=True)
Explanation: Finally, the state of the widget can be updated with a new object as follows:
End of explanation
def render_function(change):
s = "Marker face colour: {}, Line width: {}".format(
plot_wid.selected_values['marker_face_colour'],
plot_wid.selected_values['line_width'])
print_dynamic(s)
plot_wid = PlotOptionsWidget(legend_entries=['menpo', 'project'], render_function=render_function,
style='danger', tabs_style='info')
plot_wid
Explanation: <a name="sec:plot"></a>8. Plot Options
The aim of this widget is to allow the user to select options related to plotting a graph with various curves. It can accommodate options for different curves that are related to markers and lines. It also has options regarding the legend, axes, grid, zoom and figure properties.
The concept behind this widget is very similar to RendererOptionsWidget. The two main differences are:
1. The subwidgets are not selected; they are predefined.
2. In case there is more than one curve, the user can select different line (line_colour, line_style, line_width) and marker (marker_face_colour, marker_edge_colour, marker_size, marker_style, marker_edge_width) options per curve; not only different colours as in the case of RendererOptionsWidget.
Let's define a rendering function that prints the marker_face_colour and line_width and create the widget assuming that we have two curves:
End of explanation
def render_function(change):
print_dynamic("Selected parameters: {}".format(change['new']))
def variance_function(name):
print_dynamic('PLOT VARIANCE')
param_wid = LinearModelParametersWidget(n_parameters=5, render_function=render_function,
params_str='Parameter ', mode='multiple',
params_bounds=(-3., 3.), plot_variance_visible=True,
plot_variance_function=variance_function, style='info')
param_wid
Explanation: <a name="sec:parameters"></a>9. Linear Model Parameters
The aim of this widget is to tweak the parameters of a linear model to generate new instances. The user can select the number of parameters (n_parameters) and between two possible mode options:
* 'multiple': In this case there will be a different slider per parameter.
* 'single': In this case there will be a single slider and a dropdown menu to select the parameter we wish to change.
Also, the widget is able to animate itself, by linearly changing the value of each parameter from zero to minimum to maximum and then back to zero. The functionality of each button is as follows:
<i class="fa fa-play"></i> Start the animation playback.<br>
<i class="fa fa-stop"></i> Stop the animation playback.<br>
<i class="fa fa-fast-forward"></i> Increase the animation's speed.<br>
<i class="fa fa-fast-backward"></i> Decrease the animation's speed.<br>
<i class="fa fa-repeat"></i> Repeat mode is enabled.<br>
<i class="fa fa-long-arrow-right"></i> Repeat mode is disabled.<br>
Reset Reset the values of all parameters to 0.<br>
Variance Plot the variance of the model.
Let's define a render function that prints the selected parameter values, a toy variance plotting function and create an instance of the widget:
End of explanation
param_wid.set_widget_state(n_parameters=10, params_str='', params_step=0.1, params_bounds=(-10, 10),
plot_variance_visible=False, allow_callback=False)
Explanation: Note that the visibility of the animation buttons and the variance button can be controlled by the animation_visible and plot_variance_visible arguments, respectively.
Let's now update the state of the widget:
End of explanation
param_wid = LinearModelParametersWidget(n_parameters=15, render_function=render_function,
mode='single', plot_variance_function=variance_function,
style='warning')
param_wid
Explanation: Finally, let's create an instance of the widget with a single slider (mode = 'single').
End of explanation
def render_function(change):
print_dynamic("Final: {}, Initial: {}, GT: {}, Image: {}, Subplots: {}".format(
res_wid.selected_values['render_final_shape'],
res_wid.selected_values['render_initial_shape'],
res_wid.selected_values['render_gt_shape'],
res_wid.selected_values['render_image'],
res_wid.selected_values['subplots_enabled']))
res_wid = ResultOptionsWidget(has_gt_shape=True, has_initial_shape=True, has_image=True,
render_function=render_function, style='info')
res_wid
Explanation: <a name="sec:result"></a>10. Result Options
The aim of this widget is to provide options for visualising a menpofit.result.Result object. This means that the user can render the final fitting, initial shape as well as ground truth shape with or without the image. These shapes can be viewed either on separate or on the same figure. Note that the widget is "smart" enough to adjust in case there is not an initial shape, ground truth shape or image in the Result object.
Let's create a rendering function and a widget instance:
End of explanation
res_wid.set_widget_state(has_gt_shape=True, has_initial_shape=False, has_image=False, allow_callback=True)
Explanation: Now, let's update the widget state with a new Result object that does not have the initial shape and the image object:
End of explanation
def plot_function(name):
print_dynamic(name.description)
def render_function(change):
print(res_wid.selected_values)
def update(change):
print('Update')
res_wid = IterativeResultOptionsWidget(has_gt_shape=True, has_initial_shape=True, has_image=True, n_shapes=None,
has_costs=False, render_function=render_function,
tab_update_function=update, style='info', tabs_style='danger',
displacements_function=plot_function, errors_function=plot_function,
costs_function=plot_function)
res_wid
Explanation: <a name="sec:iterative_result"></a>11. Iterative Result Options
This widget is a more advanced version of ResultOptionsWidget. It provides options for both a simple result object (i.e. Result in menpofit.result) as well as an iterative result object (i.e. MultiScaleParametricIterativeResult and MultiScaleNonParametricIterativeResult).
It has two tabs:
* Final: It has the same functionalities as ResultOptionsWidget. Its purpose is to visualise the final result of the fitting procedure.
* Iterations: This visualises the iterations of the fitting procedure either as an animation or in static figures.
Moreover, the widget has a tab_update_function argument that expects a function that gets called when the tab selection changes. The purpose is to update a potential rendering options widget, because the same options do not apply to visualising the final result and to visualising the iterations of a fitting process.
Let's create an instance of the widget:
End of explanation
res_wid.set_widget_state(has_gt_shape=False, has_initial_shape=False, has_image=True, n_shapes=None,
has_costs=False, allow_callback=True)
Explanation: Let's now update the state of the widget with a Result object that has no iterations. Note that the Iterations tab is now empty.
End of explanation
text_per_line = ['> This is the', '> Text Print widget!', '> :-)']
txt_wid = TextPrintWidget(text_per_line, style='danger')
txt_wid
Explanation: <a name="sec:text"></a>12. Text Print
The aim of this widget is to allow the user to print text within the widget area. For example:
End of explanation
txt_wid.set_widget_state(['M', 'E', 'N', 'P', 'O'])
Explanation: Of course the widget text can be updated as:
End of explanation
feat_wid = FeatureOptionsWidget(style='danger')
feat_wid
Explanation: <a name="sec:features"></a>13. Feature Options
This widget is very simple and is designed to be used by the features_selection() widget. It doesn't get any input options.
End of explanation
print(feat_wid.features_function)
print(feat_wid.features_options)
Explanation: The actual features function and options are stored in
End of explanation
%matplotlib inline
import menpo.io as mio
im = mio.import_builtin_asset.lenna_png()
renderer = im.view_landmarks(figure_size=(6, 4))
Explanation: <a name="sec:save"></a>14. Save Figure Options
The aim of this widget is to allow the user to save a figure to file. It expects as input the renderer object that was used to render a figure (class Renderer).
Let's first generate such a renderer by visualizing an image
End of explanation
save_wid = SaveFigureOptionsWidget(renderer, style='warning')
save_wid
Explanation: Then the widget can be called as
End of explanation |
8,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg" alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
Step1: <center><img src="https://upload.wikimedia.org/wikipedia/commons/c/c3/Python-logo-notext.svg"></center>
Step2: By convention, you'll find that most people in the SciPy/PyData world will import NumPy using np as an alias
Step3: Throughout this chapter, and indeed the rest of the book, you'll find that this is the way we will import and use NumPy.
Understanding Data Types in Python
Effective data-driven science and computation requires understanding how data is stored and manipulated.
Here we outline and contrast how arrays of data are handled in the Python language itself, and how NumPy improves on this.
Python offers several different options for storing data in efficient, fixed-type data buffers.
The built-in array module can be used to create dense arrays of a uniform type
Step4: Here 'i' is a type code indicating the contents are integers.
Much more useful, however, is the ndarray object of the NumPy package.
While Python's array object provides efficient storage of array-based data, NumPy adds to this efficient operations on that data.
Creating Arrays from Python Lists
First, we can use np.array to create arrays from Python lists
Step5: Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type.
If types do not match, NumPy will upcast if possible (here, integers are up-cast to floating point)
Step6: If we want to explicitly set the data type of the resulting array, we can use the dtype keyword
Step7: Creating Arrays from Scratch
Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy
Step8: NumPy Standard Data Types
NumPy arrays contain values of a single type, so have a look at those types and their bounds
Step9: Each array has attributes ndim (the number of dimensions), shape (the size of each dimension), size (the total size of the array) and dtype (the data type of the array)
Step10: Array Indexing
Step11: In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices
Step12: Values can also be modified using any of the above index notation
Step13: Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.
Step14: Array Slicing
Step15: A potentially confusing case is when the step value is negative.
In this case, the defaults for start and stop are swapped.
This becomes a convenient way to reverse an array
Step16: Multi-dimensional subarrays
Multi-dimensional slices work in the same way, with multiple slices separated by commas
Step17: Accessing array rows and columns
One commonly needed routine is accessing of single rows or columns of an array
Step18: Subarrays as no-copy views
One important–and extremely useful–thing to know about array slices is that they return views rather than copies of the array data.
This is one area in which NumPy array slicing differs from Python list slicing
Step19: It is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the copy() method.
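A minimal sketch of the view-versus-copy behaviour described above (the array values are illustrative):

```python
import numpy as np

x = np.arange(10)
sub = x[3:6]              # a slice is a view into the same memory
sub[0] = 99               # modifying the view also modifies x
sub_copy = x[3:6].copy()  # an explicit copy is independent
sub_copy[0] = -1          # x is unaffected by this assignment
```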
Reshaping of Arrays
If you want to put the numbers 1 through 9 in a $3 \times 3$ grid
Step20: Concatenation of arrays
np.concatenate takes a tuple or list of arrays as its first argument
Step21: For working with arrays of mixed dimensions, it can be clearer to use the np.vstack (vertical stack) and np.hstack (horizontal stack) functions
Step22: Splitting of arrays
The opposite of concatenation is splitting: we can pass a list of indices giving the split points
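A small sketch of the split-point behaviour just described, with illustrative values; N split points yield N + 1 subarrays:

```python
import numpy as np

grid = np.array([1, 2, 3, 99, 99, 3, 2, 1])
left, middle, right = np.split(grid, [3, 5])  # split before indices 3 and 5
```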
Step23: Computation on NumPy Arrays
Step24: If we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
Step25: It takes $2.63$ seconds to compute these million operations and to store the result.
It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop.
If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
Introducing UFuncs
For many types of operations, NumPy provides a convenient interface into just this kind of compiled routine.
This is known as a vectorized operation.
This can be accomplished by performing an operation on the array, which will then be applied to each element.
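The contrast between the loop and the vectorized form can be sketched as follows (the array of values is an illustrative assumption):

```python
import numpy as np

values = np.arange(1, 6)

# Loop version: one Python-level dispatch per element
loop_result = np.empty(len(values), dtype=float)
for i, v in enumerate(values):
    loop_result[i] = 1.0 / v

# Vectorized version: a single ufunc call over the whole array
vector_result = 1.0 / values
```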
Step26: Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations on values in NumPy arrays.
Ufuncs are extremely flexible – before we saw an operation between a scalar and an array, but we can also operate between two arrays
Step27: And ufunc operations are not limited to one-dimensional arrays–they can also act on multi-dimensional arrays as well
Step28: Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression.
Array arithmetic
NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators
Step29: Trigonometric functions
NumPy provides a large number of useful ufuncs, we'll start by defining an array of angles
Step30: Exponents and logarithms
Another common family of NumPy ufuncs is the exponentials (some of which are useful for maintaining precision with very small inputs)
Step31: Specifying output
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored
Step32: Outer products
Finally, any ufunc can compute the output of all pairs of two different inputs using the outer method
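The outer method can be sketched with a small multiplication table (the input values are illustrative):

```python
import numpy as np

x = np.arange(1, 4)
table = np.multiply.outer(x, x)  # table[i, j] == x[i] * x[j]
```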
Step33: Aggregations
Step34: Minimum and Maximum
Similarly, Python has built-in min and max functions
Step35: Multi dimensional aggregates
One common type of aggregation operation is an aggregate along a row or column
Step36: Other aggregation functions
Additionally, most aggregates have a NaN-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point NaN value
|Function Name | NaN-safe Version | Description |
|-------------------|---------------------|-----------------------------------------------|
| np.sum | np.nansum | Compute sum of elements |
| np.prod | np.nanprod | Compute product of elements |
| np.mean | np.nanmean | Compute mean of elements |
| np.std | np.nanstd | Compute standard deviation |
| np.var | np.nanvar | Compute variance |
| np.min | np.nanmin | Find minimum value |
| np.max | np.nanmax | Find maximum value |
| np.argmin | np.nanargmin | Find index of minimum value |
| np.argmax | np.nanargmax | Find index of maximum value |
| np.median | np.nanmedian | Compute median of elements |
| np.percentile | np.nanpercentile| Compute rank-based statistics of elements |
| np.any | N/A | Evaluate whether any elements are true |
| np.all | N/A | Evaluate whether all elements are true |
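A minimal sketch of the NaN-safe counterparts listed in the table above (illustrative values):

```python
import numpy as np

data = np.array([1.0, np.nan, 3.0])
safe_total = np.nansum(data)   # ignores the NaN
plain_total = np.sum(data)     # propagates the NaN
```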
Computation on Arrays
Step37: Broadcasting allows these types of binary operations to be performed on arrays of different sizes
Step38: We can think of this as an operation that stretches or duplicates the value 5 into the array [5, 5, 5], and adds the results; the advantage of NumPy's broadcasting is that this duplication of values does not actually take place.
We can similarly extend this to arrays of higher dimensions
Step39: Here the one-dimensional array a is stretched, or broadcast across the second dimension in order to match the shape of M.
More complicated cases can involve broadcasting of both arrays
Step40: Rules of Broadcasting
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays
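A sketch of the shape matching these rules imply, using illustrative shapes:

```python
import numpy as np

a = np.arange(3).reshape((3, 1))  # shape (3, 1)
b = np.arange(3)                  # shape (3,)
result = a + b                    # both operands broadcast to shape (3, 3)
```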
Step41: Plotting a two-dimensional function
One place that broadcasting is very useful is in displaying images based on two-dimensional functions.
If we want to define a function $z = f(x, y)$, broadcasting can be used to compute the function across the grid
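A minimal sketch of evaluating a function over a grid via broadcasting; the particular function and grid sizes here are illustrative assumptions:

```python
import numpy as np

x = np.linspace(0, 1, 3)                 # shape (3,)
y = np.linspace(0, 1, 4)[:, np.newaxis]  # shape (4, 1)
z = x + y ** 2                           # broadcasts to shape (4, 3)
```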
Step42: Comparisons, Masks, and Boolean Logic
Masking comes up when you want to extract, modify, count, or otherwise manipulate values in an array based on some criterion
Step43: Just as in the case of arithmetic ufuncs, these will work on arrays of any size and shape
Step44: Counting entries
To count the number of True entries in a Boolean array, np.count_nonzero is useful
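A small sketch (illustrative values):

```python
import numpy as np

x = np.array([1, 8, 2, 9, 0])
n_small = np.count_nonzero(x < 5)  # each True entry counts as 1
```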
Step45: Boolean Arrays as Masks
A more powerful pattern is to use Boolean arrays as masks, to select particular subsets of the data themselves
Step46: What is returned is a one-dimensional array filled with all the values that meet this condition; in other words, all the values in positions at which the mask array is True.
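The masking pattern can be sketched as follows (illustrative values):

```python
import numpy as np

x = np.array([5, 0, 3, 3, 7, 9])
mask = x < 5        # Boolean array with the same shape as x
selected = x[mask]  # 1-D array of the values where the mask is True
```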
Fancy Indexing
We saw how to access and modify portions of arrays using simple indices (e.g., arr[0]) and slices (e.g., arr[:5]).
Step47: When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed
Step48: Fancy indexing also works in multiple dimensions
Step49: Like with standard indexing, the first index refers to the row, and the second to the column
Step50: The pairing of indices in fancy indexing follows all the broadcasting rules that we've already seen
Step51: each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations
Step52: Remember
Step53: Example
Step54: Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array
Step55: Now to see which points were selected, let's over-plot large circles at the locations of the selected points
Step56: Modifying Values with Fancy Indexing
Fancy indexing can also be used to modify parts of an array
Step57: Notice, though, that repeated indices with these operations can cause some potentially unexpected results
Step58: Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6.
The result, of course, is that x[0] contains the value 6.
Step59: You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because x[i] += 1 is meant as a shorthand of x[i] = x[i] + 1. x[i] + 1 is evaluated, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
Step60: The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1).
Another method that is similar in spirit is the reduceat() method of ufuncs, which you can read about in the NumPy documentation.
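A minimal sketch of at() with repeated indices, contrasting with the buffered augmented assignment discussed above (values are illustrative):

```python
import numpy as np

x = np.zeros(10)
i = [2, 3, 3, 4, 4, 4]
np.add.at(x, i, 1)  # unbuffered: every occurrence of an index adds 1
```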
Example
Step61: Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the np.histogram source code (you can do this in IPython by typing np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large...
Step62: What this comparison shows is that algorithmic efficiency is almost never a simple question. An algorithm efficient for large datasets will not always be the best choice for small datasets, and vice versa.
The key to efficiently using Python in data-intensive applications is knowing about general convenience routines like np.histogram and when they're appropriate, but also knowing how to make use of lower-level functionality when you need more pointed behavior.
Sorting Arrays
Up to this point we have been concerned mainly with tools to access and operate on array data with NumPy.
This section covers algorithms related to sorting values in NumPy arrays.
Fast Sorting in NumPy
Step63: A related function is argsort, which instead returns the indices of the sorted elements
Step64: The first element of this result gives the index of the smallest element, the second value gives the index of the second smallest, and so on.
These indices can then be used (via fancy indexing) to construct the sorted array if desired
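A small sketch of reconstructing the sorted array from the argsort indices (illustrative values):

```python
import numpy as np

x = np.array([2, 1, 4, 3, 5])
i = np.argsort(x)   # indices that would sort x
sorted_x = x[i]     # fancy indexing yields the sorted array
```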
Step65: Sorting along rows or columns
Step66: Keep in mind that this treats each row or column as an independent array, and any relationships between the row or column values will be lost!
Partial Sorts
Step67: Note that the first three values in the resulting array are the three smallest in the array, and the remaining array positions contain the remaining values.
Within the two partitions, the elements have arbitrary order.
Similarly to sorting, we can partition along an arbitrary axis of a multidimensional array
Step68: The result is an array where the first two slots in each row contain the smallest values from that row, with the remaining values filling the remaining slots.
Finally, just as there is a np.argsort that computes indices of the sort, there is a np.argpartition that computes indices of the partition.
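A minimal sketch of partition and its index-returning counterpart (illustrative values):

```python
import numpy as np

x = np.array([7, 2, 3, 1, 6, 5, 4])
part = np.partition(x, 3)    # the 3 smallest values end up left of position 3
idx = np.argpartition(x, 3)  # the same partition, expressed as indices
```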
Example
Step69: With the pairwise square-distances converted, we can now use np.argsort to sort along each row.
The leftmost columns will then give the indices of the nearest neighbors
Step70: Notice that the first column is in order because each point's closest neighbor is itself.
If we're simply interested in the nearest $k$ neighbors, all we need is to partition each row so that the smallest $k + 1$ squared distances come first, with larger distances filling the remaining positions of the array | Python Code:
__AUTHORS__ = {'am': ("Andrea Marino",
"andrea.marino@unifi.it",),
'mn': ("Massimo Nocentini",
"massimo.nocentini@unifi.it",
"https://github.com/massimo-nocentini/",)}
__KEYWORDS__ = ['Python', 'numpy', 'numerical', 'data',]
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
<small>
Massimo Nocentini, PhD.
<br><br>
February 26, 2020: init
</small>
</div>
</p>
<br>
<br>
<div align="center">
<b>Abstract</b><br>
These slides outline techniques for effectively loading, storing, and manipulating in-memory data in Python.
</div>
End of explanation
import numpy
numpy.__version__
Explanation: <center><img src="https://upload.wikimedia.org/wikipedia/commons/c/c3/Python-logo-notext.svg"></center>
Introduction to NumPy
The topic is very broad: datasets can come from a wide range of sources and a wide range of formats, including be collections of documents, collections of images, collections of sound clips, collections of numerical measurements, or nearly anything else.
Despite this apparent heterogeneity, it will help us to think of all data fundamentally as arrays of numbers.
For this reason, efficient storage and manipulation of numerical arrays is absolutely fundamental to the process of doing data science.
NumPy (short for Numerical Python) provides an efficient interface to store and operate on dense data buffers.
In some ways, NumPy arrays are like Python's built-in list type, but NumPy arrays provide much more efficient storage and data operations as the arrays grow larger in size.
NumPy arrays form the core of nearly the entire ecosystem of data science tools in Python, so time spent learning to use NumPy effectively will be valuable no matter what aspect of data science interests you.
End of explanation
import numpy as np
Explanation: By convention, you'll find that most people in the SciPy/PyData world will import NumPy using np as an alias:
End of explanation
import array
L = list(range(10))
A = array.array('i', L)
A
Explanation: Throughout this chapter, and indeed the rest of the book, you'll find that this is the way we will import and use NumPy.
Understanding Data Types in Python
Effective data-driven science and computation requires understanding how data is stored and manipulated.
Here we outline and contrast how arrays of data are handled in the Python language itself, and how NumPy improves on this.
Python offers several different options for storing data in efficient, fixed-type data buffers.
The built-in array module can be used to create dense arrays of a uniform type:
End of explanation
np.array([1, 4, 2, 5, 3])
Explanation: Here 'i' is a type code indicating the contents are integers.
Much more useful, however, is the ndarray object of the NumPy package.
While Python's array object provides efficient storage of array-based data, NumPy adds to this efficient operations on that data.
Creating Arrays from Python Lists
First, we can use np.array to create arrays from Python lists:
End of explanation
np.array([3.14, 4, 2, 3])
Explanation: Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type.
If types do not match, NumPy will upcast if possible (here, integers are up-cast to floating point):
End of explanation
np.array([1, 2, 3, 4], dtype='float32')
Explanation: If we want to explicitly set the data type of the resulting array, we can use the dtype keyword:
End of explanation
np.zeros(10, dtype=int)
np.ones((3, 5), dtype=float)
np.full((3, 5), 3.14)
np.arange(0, 20, 2)
np.linspace(0, 1, 5)
np.random.random((3, 3))
np.random.normal(0, 1, (3, 3))
np.eye(3)
Explanation: Creating Arrays from Scratch
Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy:
End of explanation
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
Explanation: NumPy Standard Data Types
NumPy arrays contain values of a single type, so have a look at those types and their bounds:
| Data type | Description |
|---------------|-------------|
| bool_ | Boolean (True or False) stored as a byte |
| int_ | Default integer type (same as C long; normally either int64 or int32)|
| intc | Identical to C int (normally int32 or int64)|
| intp | Integer used for indexing (same as C ssize_t; normally either int32 or int64)|
| int8 | Byte (-128 to 127)|
| int16 | Integer (-32768 to 32767)|
| int32 | Integer (-2147483648 to 2147483647)|
| int64 | Integer (-9223372036854775808 to 9223372036854775807)|
| uint8 | Unsigned integer (0 to 255)|
| uint16 | Unsigned integer (0 to 65535)|
| uint32 | Unsigned integer (0 to 4294967295)|
| uint64 | Unsigned integer (0 to 18446744073709551615)|
| float_ | Shorthand for float64.|
| float16 | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa|
| float32 | Single precision float: sign bit, 8 bits exponent, 23 bits mantissa|
| float64 | Double precision float: sign bit, 11 bits exponent, 52 bits mantissa|
| complex_ | Shorthand for complex128.|
| complex64 | Complex number, represented by two 32-bit floats|
| complex128| Complex number, represented by two 64-bit floats|
The Basics of NumPy Arrays
Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas are built around the NumPy array.
Attributes of arrays: Determining the size, shape, memory consumption, and data types of arrays
Indexing of arrays: Getting and setting the value of individual array elements
Slicing of arrays: Getting and setting smaller subarrays within a larger array
Reshaping of arrays: Changing the shape of a given array
Joining and splitting of arrays: Combining multiple arrays into one, and splitting one array into many
NumPy Array Attributes
First let's discuss some useful array attributes.
We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array:
End of explanation
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
print("dtype:", x3.dtype)
Explanation: Each array has attributes ndim (the number of dimensions), shape (the size of each dimension), size (the total size of the array) and dtype (the data type of the array):
End of explanation
x1
x1[0]
x1[-1] # To index from the end of the array, you can use negative indices.
Explanation: Array Indexing: Accessing Single Elements
In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
End of explanation
x2
x2[0, 0]
x2[2, -1]
Explanation: In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
End of explanation
x2[0, 0] = 12
x2
Explanation: Values can also be modified using any of the above index notation:
End of explanation
x1[0] = 3.14159 # this will be truncated!
x1
Explanation: Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.
End of explanation
x = np.arange(10)
x
x[:5] # first five elements
x[5:] # elements after index 5
x[4:7] # middle sub-array
x[::2] # every other element
x[1::2] # every other element, starting at index 1
Explanation: Array Slicing: Accessing Subarrays
Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the slice notation, marked by the colon (:) character.
The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array x, use this:
python
x[start:stop:step]
If any of these are unspecified, they default to the values start=0, stop=size of dimension, step=1.
One-dimensional subarrays
End of explanation
x[::-1] # all elements, reversed
x[5::-2] # reversed every other from index 5
Explanation: A potentially confusing case is when the step value is negative.
In this case, the defaults for start and stop are swapped.
This becomes a convenient way to reverse an array:
End of explanation
x2
x2[:2, :3] # two rows, three columns
x2[:3, ::2] # all rows, every other column
x2[::-1, ::-1]
Explanation: Multi-dimensional subarrays
Multi-dimensional slices work in the same way, with multiple slices separated by commas:
End of explanation
print(x2[:, 0]) # first column of x2
print(x2[0, :]) # first row of x2
print(x2[0]) # equivalent to x2[0, :]
Explanation: Accessing array rows and columns
One commonly needed routine is accessing of single rows or columns of an array:
End of explanation
x2
x2_sub = x2[:2, :2]
x2_sub
x2_sub[0, 0] = 99 # if we modify this subarray, the original array is changed too
x2
Explanation: Subarrays as no-copy views
One important–and extremely useful–thing to know about array slices is that they return views rather than copies of the array data.
This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.
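As a quick supplementary sketch, the contrast between list slicing (copies) and array slicing (views) is easy to demonstrate:

```python
import numpy as np

lst = [1, 2, 3, 4]
lst_slice = lst[:2]
lst_slice[0] = 99         # the list slice is a copy...
print(lst)                # ...so the original list is unchanged

arr = np.array([1, 2, 3, 4])
arr_slice = arr[:2]
arr_slice[0] = 99         # the array slice is a view...
print(arr)                # ...so the write shows up in the original array
```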
End of explanation
np.arange(1, 10).reshape((3, 3))
x = np.array([1, 2, 3])
x.reshape((1, 3)) # row vector via reshape
x[np.newaxis, :] # row vector via newaxis
x.reshape((3, 1)) # column vector via reshape
x[:, np.newaxis] # column vector via newaxis
Explanation: It is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the copy() method.
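For instance, a small sketch contrasting a plain slice (a view) with an explicit copy():

```python
import numpy as np

arr = np.arange(5)
view = arr[1:4]           # a view: shares memory with arr
dup = arr[1:4].copy()     # an independent copy of the same data

view[0] = 99              # writes through to the original array
dup[0] = -1               # leaves the original untouched
print(arr)
```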
Reshaping of Arrays
If you want to put the numbers 1 through 9 in a $3 \times 3$ grid:
End of explanation
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
z = [99, 99, 99]
np.concatenate([x, y, z])
grid = np.array([[1, 2, 3],
[4, 5, 6]])
np.concatenate([grid, grid]) # concatenate along the first axis
np.concatenate([grid, grid], axis=1) # concatenate along the second axis (zero-indexed)
Explanation: Concatenation of arrays
np.concatenate takes a tuple or list of arrays as its first argument:
End of explanation
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
[6, 5, 4]])
np.vstack([x, grid]) # vertically stack the arrays
y = np.array([[99],
[99]])
np.hstack([grid, y]) # horizontally stack the arrays
Explanation: For working with arrays of mixed dimensions, it can be clearer to use the np.vstack (vertical stack) and np.hstack (horizontal stack) functions:
End of explanation
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])
print(x1, x2, x3)
grid = np.arange(16).reshape((4, 4))
grid
np.vsplit(grid, [2])
np.hsplit(grid, [2])
Explanation: Splitting of arrays
The opposite of concatenation is splitting; we can pass a list of indices giving the split points:
End of explanation
np.random.seed(0)
def compute_reciprocals(values):
output = np.empty(len(values))
for i in range(len(values)):
output[i] = 1.0 / values[i]
return output
values = np.random.randint(1, 10, size=5)
compute_reciprocals(values)
Explanation: Computation on NumPy Arrays: Universal Functions
NumPy provides an easy and flexible interface to optimized computation with arrays of data.
The key to making it fast is to use vectorized operations, generally implemented through NumPy's universal functions (ufuncs).
The Slowness of Loops
Python's default implementation (known as CPython) does some operations very slowly; this is in part due to the dynamic, interpreted nature of the language.
The relative sluggishness of Python generally manifests itself in situations where many small operations are being repeated – for instance looping over arrays to operate on each element.
For example, suppose we want to compute the reciprocal of each value contained in an array:
End of explanation
big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)
Explanation: If we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
End of explanation
%timeit (1.0 / big_array)
Explanation: It takes $2.63$ seconds to compute these million operations and to store the result.
It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop.
If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
Introducing UFuncs
For many types of operations, NumPy provides a convenient interface into just this kind of compiled routine.
This is known as a vectorized operation.
This can be accomplished by performing an operation on the array, which will then be applied to each element.
End of explanation
np.arange(5) / np.arange(1, 6)
Explanation: Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations on values in NumPy arrays.
Ufuncs are extremely flexible – before we saw an operation between a scalar and an array, but we can also operate between two arrays:
End of explanation
x = np.arange(9).reshape((3, 3))
2 ** x
Explanation: And ufunc operations are not limited to one-dimensional arrays–they can also act on multi-dimensional arrays as well:
End of explanation
x = np.arange(4)
print("x =", x)
print("x + 5 =", x + 5)
print("x - 5 =", x - 5)
print("x * 2 =", x * 2)
print("x / 2 =", x / 2)
print("x // 2 =", x // 2) # floor division
print("-x = ", -x)
print("x ** 2 = ", x ** 2)
print("x % 2 = ", x % 2)
-(0.5*x + 1) ** 2 # can be strung together also
Explanation: Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression.
Array arithmetic
NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators:
End of explanation
theta = np.linspace(0, np.pi, 3)
print("theta = ", theta)
print("sin(theta) = ", np.sin(theta))
print("cos(theta) = ", np.cos(theta))
print("tan(theta) = ", np.tan(theta))
Explanation: Trigonometric functions
NumPy provides a large number of useful ufuncs, we'll start by defining an array of angles:
End of explanation
x = [1, 2, 3]
print("x =", x)
print("e^x =", np.exp(x))
print("2^x =", np.exp2(x))
print("3^x =", np.power(3, x))
x = [1, 2, 4, 10]
print("x =", x)
print("ln(x) =", np.log(x))
print("log2(x) =", np.log2(x))
print("log10(x) =", np.log10(x))
Explanation: Exponents and logarithms
Other common NumPy ufuncs are the exponentials and logarithms (with specialized variants that are useful for maintaining precision with very small inputs):
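As a supplementary sketch, the specialized variants np.expm1 and np.log1p preserve precision for tiny arguments where the naive compositions lose it:

```python
import numpy as np

tiny = 1e-10
# composing exp and subtraction loses precision: exp(tiny) rounds near 1.0
naive = np.exp(tiny) - 1
accurate = np.expm1(tiny)   # specialized variant keeps the small value intact
print(naive, accurate)
print(np.log1p(tiny))       # log(1 + tiny), computed accurately
```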
End of explanation
x = np.arange(5)
y = np.empty(5)
np.multiply(x, 10, out=y)
print(y)
y = np.zeros(10)
np.power(2, x, out=y[::2])
print(y)
Explanation: Specifying output
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored:
End of explanation
x = np.arange(1, 6)
np.multiply.outer(x, x)
Explanation: Outer products
Finally, any ufunc can compute the output of all pairs of two different inputs using the outer method:
End of explanation
L = np.random.random(100)
sum(L)
np.sum(L)
big_array = np.random.rand(1000000)
%timeit sum(big_array)
%timeit np.sum(big_array)
Explanation: Aggregations: Min, Max, and Everything In Between
Summing the Values in an Array
As a quick example, consider computing the sum of all values in an array.
Python itself can do this using the built-in sum function:
End of explanation
min(big_array), max(big_array)
np.min(big_array), np.max(big_array)
%timeit min(big_array)
%timeit np.min(big_array)
big_array.min(), big_array.max(), big_array.sum()
Explanation: Minimum and Maximum
Similarly, Python has built-in min and max functions:
End of explanation
M = np.random.random((3, 4))
M
M.sum() # By default, each NumPy aggregation function works on the whole array
M.min(axis=0) # specifying the axis along which the aggregate is computed
M.max(axis=1) # find the maximum value within each row
Explanation: Multi dimensional aggregates
One common type of aggregation operation is an aggregate along a row or column:
End of explanation
a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
a + b
Explanation: Other aggregation functions
Additionally, most aggregates have a NaN-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point NaN value
|Function Name | NaN-safe Version | Description |
|-------------------|---------------------|-----------------------------------------------|
| np.sum | np.nansum | Compute sum of elements |
| np.prod | np.nanprod | Compute product of elements |
| np.mean | np.nanmean | Compute mean of elements |
| np.std | np.nanstd | Compute standard deviation |
| np.var | np.nanvar | Compute variance |
| np.min | np.nanmin | Find minimum value |
| np.max | np.nanmax | Find maximum value |
| np.argmin | np.nanargmin | Find index of minimum value |
| np.argmax | np.nanargmax | Find index of maximum value |
| np.median | np.nanmedian | Compute median of elements |
| np.percentile | np.nanpercentile| Compute rank-based statistics of elements |
| np.any | N/A | Evaluate whether any elements are true |
| np.all | N/A | Evaluate whether all elements are true |
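For example, a NaN-safe aggregate simply skips the missing values that would otherwise poison the result:

```python
import numpy as np

data = np.array([1.0, 2.0, np.nan, 4.0])
print(np.sum(data))      # nan: the missing value propagates
print(np.nansum(data))   # 7.0: the NaN-safe version ignores it
print(np.nanmean(data))  # mean of the three valid entries
```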
Computation on Arrays: Broadcasting
Another means of vectorizing operations is to use NumPy's broadcasting functionality.
Broadcasting is simply a set of rules for applying binary ufuncs (e.g., addition, subtraction, multiplication, etc.) on arrays of different sizes.
Introducing Broadcasting
Recall that for arrays of the same size, binary operations are performed on an element-by-element basis:
End of explanation
a + 5
Explanation: Broadcasting allows these types of binary operations to be performed on arrays of different sizes:
End of explanation
M = np.ones((3, 3))
M
M + a
Explanation: We can think of this as an operation that stretches or duplicates the value 5 into the array [5, 5, 5], and adds the results; the advantage of NumPy's broadcasting is that this duplication of values does not actually take place.
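As a supplementary illustration, np.broadcast_to makes this conceptual stretching explicit while confirming that no data is actually duplicated:

```python
import numpy as np

a = np.array([0, 1, 2])
stretched = np.broadcast_to(a, (3, 3))  # read-only view: no data is copied
print(stretched)
print(stretched.strides)  # the row stride is 0, so every row aliases `a`
```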
We can similarly extend this to arrays of higher dimensions:
End of explanation
a = np.arange(3)
b = np.arange(3)[:, np.newaxis]
a, b
a + b
Explanation: Here the one-dimensional array a is stretched, or broadcast across the second dimension in order to match the shape of M.
More complicated cases can involve broadcasting of both arrays:
End of explanation
X = np.random.random((10, 3))
Xmean = X.mean(0)
Xmean
X_centered = X - Xmean
X_centered.mean(0) # To double-check, we can check that the centered array has near 0 means.
Explanation: Rules of Broadcasting
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays:
Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is padded with ones on its leading (left) side.
Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
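A small sketch of the rules in action (shapes chosen only for illustration):

```python
import numpy as np

M = np.ones((3, 1))
a = np.arange(3)               # shape (3,)

# Rule 1: a's shape is padded on the left  -> (1, 3)
# Rule 2: the size-1 dimensions stretch    -> both become (3, 3)
result = M + a
print(result.shape)

# Rule 3: shapes (3, 2) and (3,) disagree (2 vs 3, neither is 1) -> error
try:
    np.ones((3, 2)) + a
except ValueError as err:
    print("broadcast failed:", err)
```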
Centering an array
Imagine you have an array of 10 observations, each of which consists of 3 values, we'll store this in a $10 \times 3$ array:
End of explanation
steps = 500
x = np.linspace(0, 5, steps)  # x and y have 500 steps from 0 to 5
y = np.linspace(0, 5, steps)[:, np.newaxis]
z = np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(z, origin='lower', extent=[0, 5, 0, 5], cmap='viridis')
plt.colorbar();
Explanation: Plotting a two-dimensional function
One place that broadcasting is very useful is in displaying images based on two-dimensional functions.
If we want to define a function $z = f(x, y)$, broadcasting can be used to compute the function across the grid:
End of explanation
x = np.array([1, 2, 3, 4, 5])
x < 3 # less than
x > 3 # greater than
x != 3 # not equal
(2 * x) == (x ** 2)
Explanation: Comparisons, Masks, and Boolean Logic
Masking comes up when you want to extract, modify, count, or otherwise manipulate values in an array based on some criterion: for example, you might wish to count all values greater than a certain value, or perhaps remove all outliers that are above some threshold.
In NumPy, Boolean masking is often the most efficient way to accomplish these types of tasks.
Comparison Operators as ufuncs
End of explanation
rng = np.random.RandomState(0)
x = rng.randint(10, size=(3, 4))
x
x < 6
Explanation: Just as in the case of arithmetic ufuncs, these will work on arrays of any size and shape:
End of explanation
np.count_nonzero(x < 6) # how many values less than 6?
np.sum(x < 6)
np.sum(x < 6, axis=1) # how many values less than 6 in each row?
np.any(x > 8) # are there any values greater than 8?
np.any(x < 0) # are there any values less than zero?
np.all(x < 10) # are all values less than 10?
np.all(x < 8, axis=1) # are all values in each row less than 8?
Explanation: Counting entries
To count the number of True entries in a Boolean array, np.count_nonzero is useful:
End of explanation
x
x < 5
x[x < 5]
Explanation: Boolean Arrays as Masks
A more powerful pattern is to use Boolean arrays as masks, to select particular subsets of the data themselves:
End of explanation
rand = np.random.RandomState(42)
x = rand.randint(100, size=10)
x
[x[3], x[7], x[2]] # Suppose we want to access three different elements.
ind = [3, 7, 4]
x[ind] # Alternatively, we can pass a single list or array of indices
Explanation: What is returned is a one-dimensional array filled with all the values that meet this condition; in other words, all the values in positions at which the mask array is True.
Fancy Indexing
We saw how to access and modify portions of arrays using simple indices (e.g., arr[0]), slices (e.g., arr[:5]), and Boolean masks (e.g., arr[arr > 0]).
We'll look at another style of array indexing, known as fancy indexing, that is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars.
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once:
End of explanation
ind = np.array([[3, 7],
[4, 5]])
x[ind]
Explanation: When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
End of explanation
X = np.arange(12).reshape((3, 4))
X
Explanation: Fancy indexing also works in multiple dimensions:
End of explanation
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
X[row, col]
Explanation: Like with standard indexing, the first index refers to the row, and the second to the column:
End of explanation
X[row[:, np.newaxis], col]
Explanation: The pairing of indices in fancy indexing follows all the broadcasting rules that we've already seen:
End of explanation
row[:, np.newaxis] * col
Explanation: each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations
End of explanation
X
X[2, [2, 0, 1]] # combine fancy and simple indices
X[1:, [2, 0, 1]] # combine fancy indexing with slicing
mask = np.array([1, 0, 1, 0], dtype=bool)
X[row[:, np.newaxis], mask] # combine fancy indexing with masking
Explanation: Remember: with fancy indexing that the return value reflects the broadcasted shape of the indices, rather than the shape of the array being indexed.
Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
End of explanation
mean = [0, 0]
cov = [[1, 2],
[2, 5]]
X = rand.multivariate_normal(mean, cov, 100)
X.shape
plt.scatter(X[:, 0], X[:, 1]);
Explanation: Example: Selecting Random Points
One common use of fancy indexing is the selection of subsets of rows from a matrix.
For example, we might have an $N$ by $D$ matrix representing $N$ points in $D$ dimensions, such as the following points drawn from a two-dimensional normal distribution:
End of explanation
indices = np.random.choice(X.shape[0], 20, replace=False)
indices
selection = X[indices] # fancy indexing here
selection.shape
Explanation: Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
End of explanation
plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
plt.scatter(selection[:, 0], selection[:, 1],
            facecolor='none', edgecolor='b', s=200);
Explanation: Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
End of explanation
x = np.arange(10)
i = np.array([2, 1, 8, 4])
x[i] = 99
x
x[i] -= 10 # use any assignment-type operator for this
x
Explanation: Modifying Values with Fancy Indexing
Fancy indexing it can also be used to modify parts of an array:
End of explanation
x = np.zeros(10)
x[[0, 0]] = [4, 6]
x
Explanation: Notice, though, that repeated indices with these operations can cause some potentially unexpected results:
End of explanation
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
x
Explanation: Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6.
The result, of course, is that x[0] contains the value 6.
End of explanation
x = np.zeros(10)
np.add.at(x, i, 1)
x
Explanation: You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because x[i] += 1 is meant as a shorthand of x[i] = x[i] + 1. x[i] + 1 is evaluated, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
End of explanation
np.random.seed(42)
x = np.random.randn(100)
# compute a histogram by hand
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)
# find the appropriate bin for each x
i = np.searchsorted(bins, x)
# add 1 to each of these bins
np.add.at(counts, i, 1)
# The counts now reflect the number of points
# within each bin–in other words, a histogram:
line, = plt.plot(bins, counts);
line.set_drawstyle("steps")
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
Explanation: The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1).
Another method that is similar in spirit is the reduceat() method of ufuncs, which you can read about in the NumPy documentation.
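As a brief supplementary example, reduceat applies the reduction over slices delimited by a list of start indices:

```python
import numpy as np

x = np.arange(8)
# sum the slices x[0:3], x[3:6], and x[6:] in a single vectorized call
sums = np.add.reduceat(x, [0, 3, 6])
print(sums)   # [ 3 12 13]
```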
Example: Binning Data
You can use these ideas to efficiently bin data to create a histogram by hand.
For example, imagine we have 1,000 values and would like to quickly find where they fall within an array of bins.
We could compute it using ufunc.at like this:
End of explanation
x = np.random.randn(1000000)
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
Explanation: Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the np.histogram source code (you can do this in IPython by typing np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large...
End of explanation
x = np.array([2, 1, 4, 3, 5])
np.sort(x)
x
Explanation: What this comparison shows is that algorithmic efficiency is almost never a simple question. An algorithm efficient for large datasets will not always be the best choice for small datasets, and vice versa.
The key to efficiently using Python in data-intensive applications is knowing about general convenience routines like np.histogram and when they're appropriate, but also knowing how to make use of lower-level functionality when you need more pointed behavior.
Sorting Arrays
Up to this point we have been concerned mainly with tools to access and operate on array data with NumPy.
This section covers algorithms related to sorting values in NumPy arrays.
Fast Sorting in NumPy: np.sort and np.argsort
Although Python has built-in sort and sorted functions to work with lists, NumPy's np.sort function turns out to be much more efficient and useful.
To return a sorted version of the array without modifying the input, you can use np.sort:
End of explanation
i = np.argsort(x)
i
Explanation: A related function is argsort, which instead returns the indices of the sorted elements:
End of explanation
x[i]
Explanation: The first element of this result gives the index of the smallest element, the second value gives the index of the second smallest, and so on.
These indices can then be used (via fancy indexing) to construct the sorted array if desired:
End of explanation
rand = np.random.RandomState(42)
X = rand.randint(0, 10, (4, 6))
X
np.sort(X, axis=0) # sort each column of X
np.sort(X, axis=1) # sort each row of X
Explanation: Sorting along rows or columns
End of explanation
x = np.array([7, 2, 3, 1, 6, 5, 4])
np.partition(x, 3)
Explanation: Keep in mind that this treats each row or column as an independent array, and any relationships between the row or column values will be lost!
Partial Sorts: Partitioning
Sometimes we're not interested in sorting the entire array, but simply want to find the k smallest values in the array. np.partition takes an array and a number K; the result is a new array with the smallest K values to the left of the partition, and the remaining values to the right, in arbitrary order:
End of explanation
np.partition(X, 2, axis=1)
Explanation: Note that the first three values in the resulting array are the three smallest in the array, and the remaining array positions contain the remaining values.
Within the two partitions, the elements have arbitrary order.
Similarly to sorting, we can partition along an arbitrary axis of a multidimensional array:
End of explanation
X = rand.rand(50, 2)
plt.scatter(X[:, 0], X[:, 1], s=100);
# compute the distance between each pair of points
dist_sq = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis=-1)
dist_sq.shape, np.all(dist_sq.diagonal() == 0)
Explanation: The result is an array where the first two slots in each row contain the smallest values from that row, with the remaining values filling the remaining slots.
Finally, just as there is a np.argsort that computes indices of the sort, there is a np.argpartition that computes indices of the partition.
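For instance, a supplementary sketch of argpartition on the array used above:

```python
import numpy as np

x = np.array([7, 2, 3, 1, 6, 5, 4])
idx = np.argpartition(x, 3)    # index array: positions 0-2 hold the 3 smallest
three_smallest = x[idx[:3]]    # the three smallest values, in arbitrary order
print(sorted(three_smallest))
print(x[idx[3]])               # the pivot position holds the 4th-smallest value
```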
Example: k-Nearest Neighbors
Let's quickly see how we might use this argsort function along multiple axes to find the nearest neighbors of each point in a set.
We'll start by creating a random set of 50 points on a two-dimensional plane:
End of explanation
nearest = np.argsort(dist_sq, axis=1)
nearest[:,0]
Explanation: With the pairwise squared distances computed, we can now use np.argsort to sort along each row.
The leftmost columns will then give the indices of the nearest neighbors:
End of explanation
K = 2
nearest_partition = np.argpartition(dist_sq, K + 1, axis=1)
plt.scatter(X[:, 0], X[:, 1], s=100)
K = 2 # draw lines from each point to its two nearest neighbors
for i in range(X.shape[0]):
for j in nearest_partition[i, :K+1]:
plt.plot(*zip(X[j], X[i]), color='black')
Explanation: Notice that the first column is in order: each point's closest neighbor is itself.
If we're simply interested in the nearest $k$ neighbors, all we need is to partition each row so that the smallest $k + 1$ squared distances come first, with larger distances filling the remaining positions of the array:
End of explanation |
8,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating Features from GeoTiff Files
From GeoTiff Files available for India over a period of more than 20 years, we want to generate features from those files for the problem of prediction of district wise crop yield in India.
Because of the gdal package, a separate conda environment had to be made, so install the packages for this notebook in that environment itself. From the Anaconda prompt, check the names of all the available environments and activate one
Step1: For Windows
python
base_ = "C
Step4: Prepare one ds variable here itself, for the transformation of the coordinate system below.
Step5: New features
12 months (Numbered 1 to 12)
10 TIF files (12 for SAT_8)
Mean & Variance
Step16: The value of n is going to be the same for all the 10 band files of a month, and hence the same for all the 20 features at a time.
We can reduce the number 480 here using this.
Step17: Overflow is happening because the pixel value is returned in a small int type; it needs to be converted to a wider int type.
Step18: Calculating the time per iteration of each code block in the huge loop above helped in recognizing the culprit
Step19: Observations
Step20: 641 Districts in India in total
Step27: DOUBT | Python Code:
from osgeo import ogr, osr, gdal
import fiona
from shapely.geometry import Point, shape
import numpy as np
import pandas as pd
import os
import sys
import tarfile
import timeit
Explanation: Generating Features from GeoTiff Files
From GeoTiff Files available for India over a period of more than 20 years, we want to generate features from those files for the problem of prediction of district wise crop yield in India.
Because of the gdal package, a separate conda environment had to be made, so install the packages for this notebook in that environment itself. From the Anaconda prompt, check the names of all the available environments and activate the right one:
shell
$ conda info --envs
$ activate env_name
End of explanation
# Change this for Win7,macOS
bases = r"C:\Users\deepak\Desktop\Repo\Maps\Districts\Census\Dist.shp"
# base_ = "/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011"
fc = fiona.open(bases)
def reverse_geocode(pt):
for feature in fc:
if shape(feature['geometry']).contains(pt):
return feature['properties']['DISTRICT']
return "NRI"
# base = "/Users/macbook/Documents/BTP/Satellite/Data/Sat" # macOS
base = r"G:\BTP\Satellite\Data\Test" # Win7
def extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(os.path.join(base,root)) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s' % (root, filename))
else:
print('Extracting data for %s' % root)
tar = tarfile.open(os.path.join(base,filename))
sys.stdout.flush()
tar.extractall(os.path.join(base,root))
tar.close()
# extracting all the tar files ... (if not extracted)
for directory, subdirList, fileList in os.walk(base):
for filename in fileList:
if filename.endswith(".tar.gz"):
d = extract(filename)
directories = [os.path.join(base, d) for d in sorted(os.listdir(base)) if os.path.isdir(os.path.join(base, d))]
# print directories
ds = gdal.Open(base + r"\LE07_L1TP_146039_20101223_20161211_01_T1\LE07_L1TP_146039_20101223_20161211_01_T1_B1.TIF")
Explanation: For Windows
python
base_ = "C:\Users\deepak\Desktop\Repo\Maps\Districts\Census_2011"
For macOS
python
base_ = "/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011"
End of explanation
# get the existing coordinate system
old_cs= osr.SpatialReference()
old_cs.ImportFromWkt(ds.GetProjectionRef())
# create the new coordinate system
wgs84_wkt = """GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0,
        AUTHORITY["EPSG","8901"]],
    UNIT["degree",0.01745329251994328,
        AUTHORITY["EPSG","9122"]],
    AUTHORITY["EPSG","4326"]]"""
new_cs = osr.SpatialReference()
new_cs.ImportFromWkt(wgs84_wkt)
# create a transform object to convert between coordinate systems
transform = osr.CoordinateTransformation(old_cs,new_cs)
type(ds)
def pixel2coord(x, y, xoff, a, b, yoff, d, e):
    """Returns global coordinates from coordinates x, y of the pixel"""
    xp = a * x + b * y + xoff
    yp = d * x + e * y + yoff
    return (xp, yp)
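As an illustrative check of this affine mapping — the six coefficients below are hypothetical, not read from any of the scenes used here:

```python
def pixel2coord(x, y, xoff, a, b, yoff, d, e):
    # affine map from pixel indices to projected coordinates,
    # mirroring the function defined above
    return (a * x + b * y + xoff, d * x + e * y + yoff)

# hypothetical GeoTransform: 30 m square pixels, no rotation (b = d = 0);
# these coefficient values are made up purely for demonstration
xoff, a, b, yoff, d, e = 300000.0, 30.0, 0.0, 2000000.0, 0.0, -30.0
xp, yp = pixel2coord(10, 20, xoff, a, b, yoff, d, e)
print(xp, yp)   # (300300.0, 1999400.0)
```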
ricep = pd.read_csv(r"C:\Users\deepak\Desktop\Repo\BTP\Ricep.csv")
ricep = ricep.drop(["Unnamed: 0"],axis=1)
ricep["value"] = ricep["Production"]/ricep["Area"]
ricep.head()
Explanation: Prepare one ds variable here itself, for the transformation of the coordinate system below.
End of explanation
a = np.empty((ricep.shape[0],1))*np.NAN
Explanation: New features
12 months (Numbered 1 to 12)
10 TIF files (12 for SAT_8)
Mean & Variance
End of explanation
# 'features' contains the column indexes for the new features
# 'dictn' is the dictionary mapping each feature name to its column index
features = []
dictn = {}
k = 13
for i in range(1,13):
for j in range(1,11):
s = str(i) + "_B" + str(j) + "_"
features.append(s+"M")
features.append(s+"V")
dictn[s+"M"] = k
dictn[s+"V"] = k+1
k = k+2
for i in range(1,13):
for j in range(1,11):
s = str(i) + "_B" + str(j) + "_"
features.append(s+"Mn")
features.append(s+"Vn")
len(features)
tmp = pd.DataFrame(index=range(ricep.shape[0]),columns=features)
ricex = pd.concat([ricep,tmp], axis=1)
ricex.head()
k = 10
hits = 0
times = [0,0,0,0,0,0,0,0]
nums = [0,0,0,0,0,0,0,0]
bx = False
stx = timeit.default_timer()
for directory in directories:
if bx: continue
else: bx = True
dictx = {}
# Identifying Month, Year, Spacecraft ID
date = directory.split('\\')[-1].split('_')[3] # Change for Win7
satx = directory.split('\\')[-1][3]
month = date[4:6]
year = date[0:4]
# Visiting every GeoTIFF file
for _,_,files in os.walk(directory):
for filename in files:
# make sure not going into the extra folders
if filename.endswith(".TIF"):
if filename[-5] == '8': continue
#--------------------------------------------------------------------------------------
# Check for a single iteration. Which step takes the longest time.
# improve that step
# Do it all for B1.tif only.
# for others save the row indexes found in B1's iteration
# so dont have to search the dataframe again and again.
# but keep track of the pixels which were not found, so have to skip them too
# so that correct pixel value goes to the correct row in dataframe
# have to traverse the tiff file to get the pixel values
# what all steps are common for all the 10 tif files
# the district search from the pixel's lat,lon coordinates
# the row index search using district and the year
# the same n is read and wrote for all the ...
# If nothing works, maybe we can reduce the number of features, by just visiting
# First 5 TIF files for each scene.
#--------------------------------------------------------------------------------------
print(os.path.join(directory, filename))
ds = gdal.Open(os.path.join(directory,filename))
if ds == None: continue
col, row, _ = ds.RasterXSize, ds.RasterYSize, ds.RasterCount
xoff, a, b, yoff, d, e = ds.GetGeoTransform()
# Now go to each pixel, find its lat,lon and hence its district, and the pixel value.
# Find the row with the same (Year, District) in the crop dataset.
# Find the feature column using Month, Band, SATx.
# For this we have to maintain the mean & variance.
for i in range(0,col,col/k):
for j in range(0,row,row/k):
st = timeit.default_timer()
########### fetching the lat and lon coordinates
x,y = pixel2coord(i, j, xoff, a, b, yoff, d, e)
lonx, latx, z = transform.TransformPoint(x,y)
times[0] += timeit.default_timer() - st
nums[0] += 1
st = timeit.default_timer()
########### fetching the name of district
district = ""
#----------------------------------------------------------
if filename[-5] == '1':
point = Point(lonx,latx)
district = reverse_geocode(point)
dictx[str(lonx)+str(latx)] = district
else:
district = dictx[str(lonx)+str(latx)]
#----------------------------------------------------------
times[1] += timeit.default_timer() - st
nums[1] += 1
if district == "NRI": continue
st = timeit.default_timer()
########### Locating the row in DataFrame which we want to update
district = district.lower()
district = district.strip()
r = ricex.index[(ricex['ind_district'] == district) & (ricex['Crop_Year'] == int(year))].tolist()
times[3] += timeit.default_timer() - st
nums[3] += 1
if len(r) == 1:
st = timeit.default_timer()
########### The pixel value for that location
px,py = i,j
pix = ds.ReadAsArray(px,py,1,1)
pix = pix[0][0]
times[2] += timeit.default_timer() - st
nums[2] += 1
st = timeit.default_timer()
# Found the row, so now ...
# find the column index corresponding to Month and Band
hits = hits + 1
#print ("Hits: ", hits)
####### Band Number ########
band = filename.split("\\")[-1].split("_")[7:][0].split(".")[0][1]
bnd = band
if band == '6':
if filename.split("\\")[-1].split("_")[7:][2][0] == '1':
bnd = band
else:
bnd = '9'
elif band == 'Q':
bnd = '10'
sm = month + "_B" + bnd +"_M"
cm = dictn[sm]
r = r[0]
# cm is the column index for the mean
# r is the row index (r was collapsed to r[0] above)
times[4] += timeit.default_timer() - st
nums[4] += 1
##### Checking if values are null ...
valm = ricex.iloc[r,cm]
if pd.isnull(valm):
st = timeit.default_timer()
ricex.iloc[r,cm] = pix
ricex.iloc[r,cm+1] = pix*pix
ricex.iloc[r,cm+240] = 1
times[5] += timeit.default_timer() - st
nums[5] += 1
continue
st = timeit.default_timer()
##### if the values are not null ...
valv = ricex.iloc[r,cm+1]
n = ricex.iloc[r,cm+240]
n = n+1
times[6] += timeit.default_timer() - st
nums[6] += 1
st = timeit.default_timer()
# Mean & Variance update
ricex.iloc[r,cm] = valm + (pix-valm)/n
ricex.iloc[r,cm+1] = ((n-2)/(n-1))*valv + (pix-valm)*(pix-valm)/n
ricex.iloc[r,cm+240] = n
times[7] += timeit.default_timer() - st
nums[7] += 1
#print ("No match for the district " + district + " for the year " + year)
elapsed = timeit.default_timer() - stx
print (elapsed)
print("Seconds")
Explanation: The value of n is going to be the same for all 10 band files of a month, and hence the same for all 20 features at a time.
The 480 per-feature counter columns could be reduced by exploiting this.
End of explanation
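The running mean and sample-variance recurrences used in the update above can be verified in isolation against the batch formulas; a minimal sketch in plain Python (no dataframe involved):

```python
def incremental_mean_var(values):
    """Running mean and sample variance, updated one value at a time,
    using the same recurrences as the dataframe update above."""
    mean, var, n = 0.0, 0.0, 0
    for x in values:
        n += 1
        if n == 1:
            mean = float(x)
            continue
        old_mean = mean
        mean = old_mean + (x - old_mean) / n
        # sample-variance recurrence: s2_n = ((n-2)/(n-1)) * s2_{n-1} + (x - m_{n-1})^2 / n
        var = ((n - 2) / (n - 1)) * var + (x - old_mean) ** 2 / n
    return mean, var

print(incremental_mean_var([4.0, 7.0, 13.0, 16.0]))  # → (10.0, 30.0)
```

The result matches `np.mean(data)` and `np.var(data, ddof=1)` on the same list, confirming that the in-loop update computes the sample variance.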
print(hits)
Explanation: Overflow is happening because the pixel value is returned as a small integer type (e.g. uint8/uint16). It needs to be converted to a wider int type before squaring.
End of explanation
print(times)
print(nums)
for i in range(8):
x = times[i]/nums[i]
print (str(i) + ": " + str(x))
Explanation: Calculating the time per iteration of each code block in the huge loop above helped in recognizing the culprit
End of explanation
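The pattern above, accumulating per-step totals and counts in parallel lists, can be packaged into a small helper; a sketch (the class and labels here are illustrative, not part of the original notebook):

```python
import time

class StepTimer:
    """Accumulate total elapsed time and call counts per labelled step."""
    def __init__(self):
        self.times, self.counts = {}, {}

    def record(self, label, start):
        # add the elapsed time since `start` to this label's running total
        self.times[label] = self.times.get(label, 0.0) + (time.perf_counter() - start)
        self.counts[label] = self.counts.get(label, 0) + 1

    def mean(self, label):
        return self.times[label] / self.counts[label]

timer = StepTimer()
for _ in range(3):
    t0 = time.perf_counter()
    sum(range(100000))  # the step being profiled
    timer.record('sum', t0)
print(timer.counts['sum'])  # → 3
```

Named labels replace the opaque indexes 0..7, which makes the per-step report self-describing.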
ricex.describe()
ricex.to_csv("ricex_test1.csv")
fc
fc.schema
fc.crs
len(fc)
Explanation: Observations:
- [1] & [2] are the most time consuming
- [1] is reverse geocoding
- [2] is pixel value extraction
- move [2] inside the if condition that checks whether the row exists
- only then do we need the pixel value
- deal with [1] using a dictionary cache
End of explanation
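The dictionary idea suggested for [1] can be packaged as a small memoising wrapper; a sketch using a stand-in geocoder (`fake_reverse_geocode` is a dummy here, not the notebook's real `reverse_geocode`):

```python
def make_cached(fn):
    """Cache fn's results by (lon, lat) so repeated lookups skip the slow call."""
    cache = {}
    calls = {'n': 0}  # how many times the slow function actually ran
    def wrapper(lon, lat):
        key = (round(lon, 6), round(lat, 6))  # a tuple key avoids string-concatenation collisions
        if key not in cache:
            calls['n'] += 1
            cache[key] = fn(lon, lat)
        return cache[key]
    wrapper.calls = calls
    return wrapper

def fake_reverse_geocode(lon, lat):  # stand-in for the slow shapefile lookup
    return "district_%d_%d" % (int(lon), int(lat))

lookup = make_cached(fake_reverse_geocode)
for _ in range(5):
    lookup(77.2, 28.6)
print(lookup.calls['n'])  # → 1: the slow lookup ran only once
```

Note the tuple key: the notebook's `str(lonx)+str(latx)` key can collide for different coordinate pairs, whereas a `(lon, lat)` tuple cannot.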
import timeit
a = timeit.default_timer()
for i in range(1000000):
j = 1
b = timeit.default_timer() - a
b
Explanation: 641 Districts in India in total
End of explanation
for directory in directories:
# Identifying month, year and spacecraft ID
date = directory.split('\\')[-1].split('_')[3] # Change for Win7
satx = directory.split('\\')[-1][3]
month = date[4:6]
year = date[0:4]
print("LANDSAT {}, MONTH: {}, YEAR: {}".format(satx, month, year))
# Visiting every GeoTIFF file
for _,_,files in os.walk(directory):
for filename in files:
if filename.endswith(".TIF"):
print(filename.split("\\")[-1].split("_")[7:])
ds = gdal.Open(os.path.join(directory,filename))
if ds == None: continue
col, row, _ = ds.RasterXSize, ds.RasterYSize, ds.RasterCount
xoff, a, b, yoff, d, e = ds.GetGeoTransform()
print("Col: {0:6}, Row:{1:6}".format(col, row))
# Now go to each pixel, find its lat,lon and hence its district, and the pixel value.
# Find the row with the same (Year, District) in the crop dataset.
# Find the feature column using Month, Band, SATx.
# For this we have to maintain the mean & variance.
Explanation: DOUBT: Why is the LANDSAT 7 file smaller than the LANDSAT 8 file, despite the fact that the band files contain an equal number of pixels in both cases? Investigate this ...
To calculate the relative dimensions of all 10 band files for a scene.
Whatever the dimensions, they are the same for all 10 files, except Band 8, which has 4 times as many pixels as any other file
End of explanation
8,613 | Given the following text description, write Python code to implement the functionality described.
Description:
Significant Inversions in an Array
Function that sorts the input array and returns the number of inversions in the array ; Recursive function that sorts the input array and returns the number of inversions in the array ; Divide the array into two parts and call _mergeSortAndCountInv ( ) for each of the parts ; Inversion count will be sum of the inversions in the left - part , the right - part and the number of inversions in merging ; Merge the two parts ; Function that merges the two sorted arrays and returns the inversion count in the arrays ; i is the index for the left subarray ; j is the index for the right subarray ; k is the index for the resultant merged subarray ; First pass to count number of significant inversions ; i is the index for the left subarray ; j is the index for the right subarray ; k is the index for the resultant merged subarray ; Second pass to merge the two sorted arrays ; Copy the remaining elements of the left subarray ( if there are any ) to temp ; Copy the remaining elements of the right subarray ( if there are any ) to temp ; Copy back the merged elements to the original array ; Driver code
| Python Code:
def mergeSort(arr , array_size ) :
temp =[0 for i in range(array_size ) ]
return _mergeSort(arr , temp , 0 , array_size - 1 )
def _mergeSort(arr , temp , left , right ) :
mid , inv_count = 0 , 0
if(right > left ) :
mid =(right + left ) // 2
inv_count = _mergeSort(arr , temp , left , mid )
inv_count += _mergeSort(arr , temp , mid + 1 , right )
inv_count += merge(arr , temp , left , mid + 1 , right )
return inv_count
def merge(arr , temp , left , mid , right ) :
inv_count = 0
i = left
j = mid
k = left
while(( i <= mid - 1 ) and(j <= right ) ) :
if(arr[i ] > 2 * arr[j ] ) :
inv_count +=(mid - i )
j += 1
else :
i += 1
i = left
j = mid
k = left
while(( i <= mid - 1 ) and(j <= right ) ) :
if(arr[i ] <= arr[j ] ) :
temp[k ] = arr[i ]
i , k = i + 1 , k + 1
else :
temp[k ] = arr[j ]
k , j = k + 1 , j + 1
while(i <= mid - 1 ) :
temp[k ] = arr[i ]
i , k = i + 1 , k + 1
while(j <= right ) :
temp[k ] = arr[j ]
j , k = j + 1 , k + 1
for i in range(left , right + 1 ) :
arr[i ] = temp[i ]
return inv_count
arr =[1 , 20 , 6 , 4 , 5 ]
n = len(arr )
print(mergeSort(arr , n ) )
|
8,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Exercise 1
Step2: Code up your own SVM solution below
Step3: Code up your own SVM solution below
Step4: Code up your own solution | Python Code:
import numpy as np
np.random.seed(4242)
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Detecting Pedestrians in the Wild | Contents | Implementing a Spam Filter with Bayesian Learning >
Additional SVM Exercises
The book refers to a couple of follow-up exercises, where you should think about which SVM kernel is most likely to perform best on the data.
Think you know the answer? Code it up!
You can use the pre-generated datasets below.
End of explanation
n_samples = 500
n_features = 2
X1 = np.random.rand(n_samples, n_features)
y1 = np.ones((n_samples, 1))
idx_neg = (X1[:, 0] - 0.5) ** 2 + (X1[:, 1] - 0.5) ** 2 < 0.03
y1[idx_neg] = 0
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.scatter(X1[:, 0], X1[:, 1], c=y1, s=100)
Explanation: Exercise 1
End of explanation
X2 = np.random.rand(n_samples, n_features)
y2 = np.ones((n_samples, 1))
idx_neg = (X2[:, 0] < 0.5) * (X2[:, 1] < 0.5) + (X2[:, 0] > 0.5) * (X2[:, 1] > 0.5)
y2[idx_neg] = 0
plt.figure(figsize=(10, 6))
plt.scatter(X2[:, 0], X2[:, 1], c=y2, s=100)
Explanation: Code up your own SVM solution below:
Exercise 2
End of explanation
rho_pos = np.random.rand(n_samples // 2, 1) / 2.0 + 0.5
rho_neg = np.random.rand(n_samples // 2, 1) / 4.0
rho = np.vstack((rho_pos, rho_neg))
phi_pos = np.pi * 0.75 + np.random.rand(n_samples // 2, 1) * np.pi * 0.5
phi_neg = np.random.rand(n_samples // 2, 1) * 2 * np.pi
phi = np.vstack((phi_pos, phi_neg))
X3 = np.array([[r * np.cos(p), r * np.sin(p)] for r, p in zip(rho.ravel(), phi.ravel())])  # ravel so X3 has shape (n_samples, 2)
y3 = np.vstack((np.ones((n_samples // 2, 1)), np.zeros((n_samples // 2, 1))))
plt.figure(figsize=(10, 6))
plt.scatter(X3[:, 0], X3[:, 1], c=y3, s=100)
Explanation: Code up your own SVM solution below:
Exercise 3
End of explanation
rho_pos = np.linspace(0, 2, n_samples // 2)
rho_neg = np.linspace(0, 2, n_samples // 2) + 0.5
rho = np.hstack((rho_pos, rho_neg))  # the inputs are 1-D here, so hstack keeps a flat (n_samples,) array
phi_pos = 2 * np.pi * rho_pos
phi = np.hstack((phi_pos, phi_pos))
X4 = np.array([[r * np.cos(p), r * np.sin(p)] for r, p in zip(rho, phi)])
y4 = np.vstack((np.ones((n_samples // 2, 1)), np.zeros((n_samples // 2, 1))))
plt.figure(figsize=(10, 6))
plt.scatter(X4[:, 0], X4[:, 1], c=y4, s=100)
Explanation: Code up your own solution:
Exercise 4
End of explanation
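As one possible starting point for Exercise 1 (a sketch, not the book's official solution): the circular decision boundary suggests an RBF or polynomial kernel, because an explicit quadratic feature map already makes the classes linearly separable:

```python
import numpy as np

np.random.seed(4242)
X = np.random.rand(500, 2)
y = np.ones(500)
y[(X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.5) ** 2 < 0.03] = 0

# Lift each point to a third coordinate: its squared distance from the centre.
# In this lifted space the class boundary is the plane z = 0.03, so a simple
# linear threshold suffices; this is what an RBF/polynomial kernel does implicitly.
z = (X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.5) ** 2
pred = (z >= 0.03).astype(float)
print('accuracy:', (pred == y).mean())  # → 1.0 by construction
```

In practice one would let a kernel SVM discover such a mapping rather than hand-crafting it; the sketch only illustrates why a non-linear kernel is the right choice here.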
8,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http
Step1: Setup
Step2: Define the network
Step3: Load the model parameters and metadata
Step4: Trying it out
Get some test images
We'll download the ILSVRC2012 validation URLs and pick a few at random
Step5: Helper to fetch and preprocess images
Step6: Process test images and print top 5 predicted labels | Python Code:
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
Explanation: Introduction
This example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in Caffe's Model Zoo.
For details of the conversion process, see the example notebook "Using a Caffe Pretrained Network - CIFAR10".
License
The model is licensed for non-commercial use only
Download the model (393 MB)
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
Explanation: Setup
End of explanation
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
Explanation: Define the network
End of explanation
import pickle
model = pickle.load(open('vgg_cnn_s.pkl', 'rb'))  # binary mode is required for pickled files
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
Explanation: Load the model parameters and metadata
End of explanation
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
Explanation: Trying it out
Get some test images
We'll download the ILSVRC2012 validation URLs and pick a few at random
End of explanation
import io
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
Explanation: Helper to fetch and preprocess images
End of explanation
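The axis shuffle and BGR flip inside prep_image can be verified on a tiny synthetic array, with no downloads or network involved:

```python
import numpy as np

im = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)  # h, w, c (RGB)
c01 = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)           # -> c, h, w ("c01" layout)
assert c01.shape == (3, 2, 2)

bgr = c01[::-1, :, :]                                    # RGB -> BGR channel order
assert np.array_equal(bgr[0], c01[2])                    # blue channel is now first
print('layout checks passed')
```

This mirrors exactly the two lines in prep_image, minus the resizing, cropping and mean subtraction.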
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
Explanation: Process test images and print top 5 predicted labels
End of explanation
8,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Path Loss Models
This notebook illustrates some path loss models.
Initializations
First we set the Python path and import some libraries.
Step1: Now we import some pyphysim stuff
Step2: Path Loss Classes Representation in IPython | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
Explanation: Path Loss Models
This notebook illustrates some path loss models.
Initializations
First we set the Python path and import some libraries.
End of explanation
from pyphysim.channels import pathloss
Explanation: Now we import some pyphysim stuff
End of explanation
pl_general = pathloss.PathLossGeneral(n=3.7, C=120)
pl_general.handle_small_distances_bool = True
pl_general
pl_3gpp = pathloss.PathLoss3GPP1()
pl_3gpp.handle_small_distances_bool = True
pl_3gpp
pl_fs = pathloss.PathLossFreeSpace()
pl_fs.n = 2
pl_fs.fc = 900
pl_fs.handle_small_distances_bool = True
pl_fs
d = np.linspace(0.01, 0.5, 100)
fig, ax = plt.subplots()
pl_general.plot_deterministic_path_loss_in_dB(d,
ax,
extra_args={'label': 'General'})
pl_fs.plot_deterministic_path_loss_in_dB(d,
ax,
extra_args={'label': 'Free Space'})
pl_3gpp.plot_deterministic_path_loss_in_dB(d, ax, extra_args={'label': '3GPP'})
ax.grid()
ax.set_ylabel('Path Loss (in dB)')
ax.set_xlabel('Distance (in Km)')
ax.legend(loc=5)
plt.show()
Explanation: Path Loss Classes Representation in IPython
End of explanation
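As an independent sanity check of the free-space model (the standard textbook formula, not a pyphysim API): with the distance d in km and the carrier frequency fc in MHz, the free-space path loss in dB is 20·log10(d) + 20·log10(fc) + 32.44:

```python
import math

def fspl_db(d_km, fc_mhz):
    """Free-space path loss in dB (d in km, fc in MHz)."""
    return 20 * math.log10(d_km) + 20 * math.log10(fc_mhz) + 32.44

print(round(fspl_db(1.0, 900.0), 1))  # ≈ 91.5 dB at 1 km and 900 MHz
```

A useful property to check: doubling the distance always adds 20·log10(2) ≈ 6.02 dB, consistent with the n = 2 exponent used for pl_fs above.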
8,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Funciones
Una función es una sección de código reutilizable, escrita para realizar una tarea específica en un programa.
¿Por qué son útiles las funciones? Fíjate en los siguientes ejemplos.
Step2: Imagina que por tercera vez, me piden que calcule un cuadrado. ¿Repito el proceso de manera indefinida?
El ejemplo es un poco extremo, pero sirve para ilustrar que, a veces, nuestros programas repiten siempre el mismo proceso (en este caso, un cálculo) variando los datos con los que trabajamos. En este caso, es mucho más eficaz reutilizar el código y adaptarlo ligeramente para generalizar el cálculo de cuadrados a cualquier número.
¿Me sigues? ¿Sí? Pues vamos a ver cómo hacerlo definiendo una función.
La sintaxis de una función forma un bloque de código de la siguiente manera
Step4: Una vez definida una función, podemos llamarla tantas veces sea necesario y ejecutarla en cualquier sitio de distintas maneras. Fíjate en los ejemplos
Step5: Fíjate por qué es importante añadir una cadena de texto entre comillas triples en nuestras funciones
Step7: En realidad, ya conoces unas cuantas funciones que existen de manera predefinida en Python. ¿Recuerdas las funciones len() y type(), por ejemplo, para calcular la longitud y el tipo de las estructuras de datos? Ahora lo único que estamos haciendo es definiendo nuestras propias funciones.
Siguiendo con las operaciones, imagínate ahora que nos piden calcular cubos de distintos números, podríamos definir una función llamada cubo
Step9: Podemos generalizar más y definir una función para calcular potencias, de manera que tomemos como entrada dos números, la base y el exponente.
Step11: Antes hemos mencionado que una función debía de devolver simpre algún valor, y por eso todas las definiciones de funciones terminan con una instrucción return. Bien, esto no era del todo cierto. Fíjate en este ejemplo.
Step12: En esta celda utilizo la función range que no habíamos visto antes para indicar el número exacto de veces que quiero que se ejecute un bucle for.
Fíjate en cómo la función range, cuando se ejecuta con un único argumento N, devuelve una lista con N números sobre la que podemos iterar. Bueno, en realidad, esto no es correcto. La realidad es que devuelve una estructura de datos especial llamada iterador, con los números enteros entre el 0 y N-1). Pero para nuestros intereses, es mejor la primera explicación.
Step13: range se puede ejecutar también con dos y tres argumenos numéricos. Cuando especificamos dos argumentos, podemos definir el primer y el último elemento de la lista resultante. Cuando especificamos tres, el último indica el salto entre cada par de elemenos. Fíjate en estos ejemplos.
Step15: Obviamente, podemos definir funciones que hagan otras cosas diferentes a cálculo de potencias. Pensemos en un ejemplo de tratamiento de texto que nos puede resultar más útil.
Step19: ¿Te atreves a modificar el código para | Python Code:
# I need to compute the square of a number
# define a variable and assign it the value 6
n = 6
# compute the square and print it on screen
cuadrado = n**2
print(cuadrado)
# now I am asked to compute another square, this time of 8
# repeat the process
n = 8
cuadrado = n**2
print(cuadrado)
Explanation: Functions
A function is a reusable section of code, written to perform a specific task in a program.
Why are functions useful? Look at the following examples.
End of explanation
# we create a function called cuadrado that takes any number
# as input and returns the square of that number
def cuadrado(numero):
returns the square of the number given as input
micuadrado = numero**2
return micuadrado
# variables defined inside functions are scoped to the inside of the function
# if you try to use them outside, it is as if they had never been created
# this cell raises an error
print(micuadrado)
Explanation: Imagine that, for the third time, I am asked to compute a square. Do I repeat the process indefinitely?
The example is a bit extreme, but it illustrates that sometimes our programs repeat the same process (in this case, a calculation) while only the data we work with changes. In that case, it is much more effective to reuse the code and adapt it slightly so that the squaring calculation generalises to any number.
Following me? Yes? Then let's see how to do it by defining a function.
The syntax of a function forms a block of code as follows:
def FUNCION(ENTRADA):
comment explaining what this function does
SALIDA = INSTRUCCIONES
return SALIDA
Functions are defined using the def statement and are made up of three parts:
The header, which includes the def statement, the function name, any arguments the function takes between parentheses (), and the colon sign :.
A documentation string in triple quotes, which briefly explains what the function does.
The body, which is the block of code describing the procedures the function carries out. The body is a block of code and therefore appears indented (just like if, elif and else statements).
A function always returns something: a data structure, a message, etc. Therefore, the last line of any function is a return statement.
Think of a function as a kind of artefact or black box that takes an input, manipulates/processes/converts/modifies it, and returns a result.
End of explanation
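A minimal function following this anatomy (header, docstring, body and return), as an illustrative sketch:

```python
def greet(name):
    """Return a greeting for the given name."""  # the docstring
    message = "Hello, " + name + "!"             # the body
    return message                               # the return statement

print(greet("Ana"))  # → Hello, Ana!
```

Each of the three parts described above appears here: the `def greet(name):` header, the triple-quoted docstring, and the indented body ending in `return`.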
print(cuadrado(6))
print(cuadrado(8+1))
print(cuadrado(3) + 1)
otroNumero = cuadrado(235.9)
print(otroNumero)
# in this cell I am calling the function with a string argument
# and it fails because the function performs mathematical operations that only work with numbers
print(cuadrado("a"))
Explanation: Once a function is defined, we can call it as many times as needed and run it anywhere, in different ways. Look at the examples:
End of explanation
cuadrado??
Explanation: Notice why it is important to add a triple-quoted text string to our functions: it serves as documentation (a docstring) shown as help when we use the help() function or access IPython's help.
End of explanation
def cubo(numero):
returns the cube of the number given as input
return numero**3
print(cubo(2))
# one thousand is 10^3
millar = cubo(10)
print(millar)
Explanation: In fact, you already know a few functions that come predefined in Python. Remember the functions len() and type(), for example, for computing the length and the type of data structures? The only thing we are doing now is defining our own functions.
Continuing with the arithmetic, imagine we are now asked to compute cubes of different numbers; we could define a function called cubo:
End of explanation
def potencia(base, exponente):
computes powers
return base**exponente
# compute 10^2
print(potencia(10, 2))
# compute 32^5
print(potencia(32, 5))
# a Spanish 'billón' is a 1 followed by 12 zeros
billon = potencia(10, 12)
print(billon)
Explanation: We can generalise further and define a function to compute powers, taking two numbers as input: the base and the exponent.
End of explanation
def saludo():
Prints a greeting on screen
print("¡Hola, amigo!")
Explanation: Earlier we mentioned that a function should always return some value, which is why all function definitions end with a return statement. Well, that was not entirely true. Look at this example.
End of explanation
for i in range(3):
saludo()
# notice how the value of i is updated on each iteration
for i in range(10):
print("Voy por el número", i)
Explanation: In this cell I use the range function, which we had not seen before, to state the exact number of times I want a for loop to run.
Notice how the range function, when called with a single argument N, returns a list of N numbers that we can iterate over. Well, in reality this is not correct: the truth is that it returns a special data structure called an iterator, holding the integers from 0 to N-1. But for our purposes, the first explanation works better.
End of explanation
# range returns the numbers from 10 to 19
for i in range(10, 20):
print(i)
print("-------------------")
# range returns the numbers from 10 to 19, stepping by 3
for i in range(10, 20, 3):
print(i)
range??
Explanation: range can also be called with two or three numeric arguments. When we give two arguments, we set the first element and the (exclusive) upper bound of the resulting sequence. When we give three, the last one sets the step between consecutive elements. Look at these examples.
End of explanation
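To see the resulting values at a glance, the sequences can be materialised with list() (in Python 3, where range returns a lazy object):

```python
print(list(range(5)))          # → [0, 1, 2, 3, 4]
print(list(range(10, 20)))     # → [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
print(list(range(10, 20, 3)))  # → [10, 13, 16, 19]
```

Note that the upper bound (20) is never included, and that the step of 3 simply skips two values at a time.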
def pluralize(word):
Converts any singular English word to its plural
return word + "s"
Explanation: Obviously, we can define functions that do things other than computing powers. Let's think of a text-processing example that may prove more useful to us.
End of explanation
print(pluralize("cat"))
print(pluralize("question"))
print(pluralize("berry"))
print(pluralize("business"))
print(pluralize("box"))
print(pluralize("flush"))
print(pluralize("coach"))
# irregular plurals
print(pluralize("foot"))
print(pluralize("woman"))
print(pluralize("child"))
def terminaEnVocal(palabra):
checks whether a word ends in a vowel
vocales = "aeiou"
if palabra[-1] in vocales:
return True
else:
return False
print(terminaEnVocal("cava"))
print(terminaEnVocal("jamón"))
def convierteAMayusculaLaUltimaLetra(palabra):
returns the word with its last letter converted to uppercase
return palabra[:-1] + palabra[-1].upper()
print(convierteAMayusculaLaUltimaLetra("cava"))
print(convierteAMayusculaLaUltimaLetra("jamón"))
print("berry"[:-1] + "ies")
def pluralize(word):
Converts any singular English word to its plural
if word.endswith("ss") or word.endswith("x"):
return word + "es"
else:
return word + "s"
Explanation: Do you dare to modify the code so that:
words ending in -s work correctly?
irregular plurals are handled?
End of explanation
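One possible answer to that challenge, as a sketch only (real English pluralisation has many more exceptions than this):

```python
def pluralize(word):
    """Plural of an English noun: a few irregulars, sibilant endings, and -y."""
    irregular = {"foot": "feet", "woman": "women", "child": "children"}
    if word in irregular:
        return irregular[word]
    if word.endswith(("s", "x", "sh", "ch")):   # sibilant endings take -es
        return word + "es"
    if word.endswith("y") and word[-2] not in "aeiou":  # consonant + y -> -ies
        return word[:-1] + "ies"
    return word + "s"

print(pluralize("berry"))  # → berries
print(pluralize("box"))    # → boxes
print(pluralize("foot"))   # → feet
```

The irregular dictionary only covers the three examples from the notebook; any serious implementation would need a much larger exception list.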
8,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 与之前章节一样,本节示例的相关性分析只限制在abupy内置沙盒数据中,和上一节一样首先将内置沙盒中美股,A股,港股, 比特币,莱特币,期货市场中的symbol都列出来,然后组成训练集和测试集,买入卖出因子等相同设置
Step2: 使用load_abu_result_tuple读取第15节中保存在本地的训练集数据:
Step7: 1. 从不同视角训练新的主裁
如下示例编写如何从不同视角组成裁判AbuUmpMainMul,其关心的视角为短线21天趋势角度,长线一年的价格rank,长线波动,以及atr波动如下所示:
Step8: 如上所示,即完成一个全新ump主裁的编写:
ump主裁需要继承AbuUmpMainBase,买入ump需要混入BuyUmpMixin
编写内部类继承自AbuMLPd,实现make_xy,即从训练集数据中筛选自己关心的特征
实现get_predict_col,返回关心的特征字符串名称
实现get_fiter_class,返回继承自AbuMLPd的内部类
实现class_unique_id,为裁判起一个唯一的名称
备注:feature.AbuFeatureDeg()等为abupy中内置的特征类,稍后会讲解自定义特征类
现在有了新的裁判AbuUmpMainMul,接下使用相同的训练集开始训练裁判,和第16节一样使用ump_main_clf_dump:
Step9: 显示的特征为四个特征即为新的主裁AbuUmpMainMul所关心的决策视角和行为。
下面示例如何使用自定义裁判,如下所示:
Step10: 即添加新的裁到系统中流程为:
打开使用用户自定义裁判开关:ump.manager.g_enable_user_ump = True
把新的裁判AbuUmpMainMul类名称使用append_user_ump添加到系统中,或者直接将上面训练好的对象ump_mul添加也可以
备注:使用append_user_ump添加的裁判参数可以是类名称,也可以是类对象,但裁判必须是使用ump_main_clf_dump训练好的。
下面使用新的裁判AbuUmpMainMul对测试集交易进行回测,如下:
Step11: 使用测试集不使用任何主裁拦截情况下进行回测度量对比,如下所示
Step12: 可以看到使用新的裁判胜率和盈亏比都稍微有提高,可以再加上几个内置裁判一起决策,如下示例:
Step17: 2. 从不同视角训练新的边裁
和主裁类似,如下示例编写如何从不同视角组成边裁AbuUmpMainMul,其关心的视角为短线21天趋势角度,长线一年的价格rank,长线波动,以及atr波动如下所示:
Step18: 如上所示,即完成一个全新ump边裁的编写,与主裁的实现非常类似:
ump边裁需要继承AbuUmpEdgeBase,买入ump需要混入BuyUmpMixin
编写内部类继承自AbuMLPd,实现make_xy,即从训练集数据中筛选自己关心的特征
实现get_predict_col,返回关心的特征字符串名称
实现get_fiter_class,返回继承自AbuMLPd的内部类
实现class_unique_id,为裁判起一个唯一的名称
备注:feature.AbuFeatureDeg()等为abupy中内置的特征类,稍后会讲解自定义特征类
现在有了新的边裁AbuUmpEdgeMul,接下使用相同的训练集开始训练裁判,和第17节一样使用ump_edge_clf_dump:
Step19: 显示的特征为新的边裁AbuUmpEdgeMul所关心的决策视角和行为。
下面将新的边裁添加到系统中流程为:
打开使用用户自定义裁判开关:ump.manager.g_enable_user_ump = True
把新的裁判AbuUmpEdgeMul类名称使用append_user_ump添加到系统中,或者直接将上面训练好的对象edge_mul添加也可以
流程和主裁一致,这里使用ump.manager.clear_user_ump()先把自定义裁判清空一下,即将上面添加到系统中自定义主裁清除。
备注:使用append_user_ump添加的裁判参数可以是类名称,也可以是类对象,但边裁必须是使用ump_edge_clf_dump训练好的。
Step20: 可以看到使用新的边裁胜率和盈亏比并不太理想,可以再加上几个内置边裁一起决策,如下示例:
Step25: 上面的完成的裁判都是使用abupy内置的特征类,如果希望能裁判能够从一些新的视角或者行为来进行决策,那么就需要添加新的视角来录制比赛(回测交易),下面的内容将示例讲自定义新的特征类(新的视角),自定义裁判使用这些新的特征类进行训练:
3. 添加新的视角来录制比赛(记录回测特征)
如下示例如何添加新的视角来录制比赛,编写AbuFeatureDegExtend,其与内置的角度主裁使用的特征很像,也是录制买入,卖出时的拟合角度特征 主裁角度记录21,42,60,252日走势拟合角度,本示例记录10,30,50,90,120日走势拟合角度特征,如下所示:
Step26: 如上所示,即添加完成一个新的视角来录制比赛(回测交易),即用户自定义特征类:
特征类需要继承AbuFeatureBase,支持买入特征混入BuyFeatureMixin,支持卖出特征混入SellFeatureMixin,本例都支持,都混入
需要实现get_feature_keys,返回自定义特征列名称
需要实现calc_feature,根据参数中的金融时间数据计算具体的特征值
备注:更多特征类的编写示例请阅读ABuMLFeature中相关源代码
现在有了新的特征类AbuFeatureDegExtend,首先需要使用feature.append_user_feature将新的特征加入到系统中:
Step27: 现在新的特征类已经在系统中了,使用第15节回测的相同设置进行回测,如下所示:
Step28: 可以发现回测的度量结果和之前是一摸一样的,不同点在于orders_pd_train_deg_extend中有了新的特征,如下筛选出所有拟合角特征:
Step29: 可以看到除了之前主裁使用的拟合角度21,42,60,252,外还增加了10,30,50,90,120日拟合角度特征值,下面把AbuFeatureDegExtend的特征都筛选出来看看:
Step31: 可以看到由于AbuFeatureDegExtend混入了BuyFeatureMixin和SellFeatureMixin,所以同时生成了买入趋势角度特征和卖出趋势角度特征,买入特征就是在之前的章节中主裁和边裁使用的特征,之前的章节讲解的都是根据买入特征进行决策拦截,没有涉及过针对卖出的交易进行决策拦截的示例,在之后的章节中会完整示例,请关注公众号的更新提醒。
4. 主裁使用新的视角来决策交易
下面开始编写主裁使用AbuFeatureDegExtend:
Step32: AbuUmpMainDegExtend的编写与之前类似,接下来使用AbuUmpMainDegExtend进行测试集回测,仍然不要忘记要先训练裁判,然后使用append_user_ump添加到系统中:
Step34: 如上所示拦截了15笔交易,11笔正确,拦截正确率比较高,达到73%正确。
5. 边裁使用新的视角来决策交易
下面开始编写边裁使用AbuFeatureDegExtend:
Step35: AbuUmpEegeDegExtend的编写与之前类似,接下来使用AbuUmpEegeDegExtend进行测试集回测,仍然不要忘记要先训练裁判,然后使用append_user_ump添加到系统中:
Step36: 如上所示拦截了12笔交易,9笔正确,拦截正确率比较高,达到75%正确。
最后使用新编写的主裁趋势角度扩展类,边裁趋势角度扩展类,和内置的主裁角度,内置的边裁角度共同决进行回测: | Python Code:
# import basic libraries
from __future__ import print_function
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# insert at position 0 so only the GitHub checkout of abupy is used, avoiding version mismatches with a pip-installed copy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# use sandbox data, so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak, ABuProgress
from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, AbuFuturesCn, ABuSymbolPd, AbuOrderPdProxy
from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, AbuFuturesCn, EStoreAbu, AbuML
from abupy import AbuUmpEdgeDeg, AbuUmpEdgePrice, AbuUmpEdgeWave, AbuUmpEdgeFull
from abupy import AbuMLPd, AbuUmpMainBase, AbuUmpEdgeBase, BuyUmpMixin, ump
from abupy import feature, AbuFeatureBase, BuyFeatureMixin, SellFeatureMixin, ABuRegUtil
Explanation: ABU Quantitative Trading System User Guide
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 18: Custom umpires for trade decisions</b></font>
</center>
Author: Abu
Copyright Abu Quant. Reproduction without permission is prohibited.
abu quant system GitHub repository (stars welcome)
ipython notebook for this section
The previous section demonstrated the use and backtesting of the edge umpires, and ended by noting that what improves decision quality most is: training on more trade data and more strategies to raise the umpires' blocking skill and the range of situations they recognise, showing every umpire more match recordings (backtest data), raising the quality of those recordings, and recording the matches (backtest trades) from several different viewpoints to extend the umpires.
Each umpire class in the ump module has its own trade features of interest and uses those particular features as its decision basis; in other words, each umpire has its own specific viewpoint or behaviour that it watches during the match. The earlier sections covered abupy's built-in umpires; this section demonstrates custom umpires that record the match from different viewpoints.
For training on more trade data and more strategies to improve the umpires' blocking skill and recognition range, please read the full-market backtesting material in the book "The Road of Quantitative Trading".
First, import the abupy modules used in this section:
End of explanation
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS']
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085', '600036', '600809', '000002', '002594']
hk_choice_symbols = ['hk03333', 'hk00700', 'hk02333', 'hk01359', 'hk00656', 'hk03888', 'hk02318']
tc_choice_symbols = ['btc', 'ltc']
# read futures-market symbols directly from AbuFuturesCn().symbol
ft_choice_symbols = AbuFuturesCn().symbol.tolist()
# training set: all sandbox US stocks + all sandbox A-shares + all sandbox HK stocks + Bitcoin
train_choice_symbols = us_choice_symbols + cn_choice_symbols + hk_choice_symbols + tc_choice_symbols[:1]
# test set: all sandbox futures + Litecoin
test_choice_symbols = ft_choice_symbols + tc_choice_symbols[1:]
# set the initial amount of capital
read_cash = 1000000
# the buy factors continue to use the upward-breakout factor
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# the sell factors continue to use those from the previous section
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
# generate buy-time features during the backtest
abupy.env.g_enable_ml_feature = True
Explanation: As in earlier chapters, the correlation analysis demonstrated in this section is restricted to abupy's built-in sandbox data. As in the previous section, first list the sandbox symbols for the US, A-share, Hong Kong, Bitcoin, Litecoin and futures markets, then form the training and test sets, with the same buy/sell factor settings:
End of explanation
abu_result_tuple_train = abu.load_abu_result_tuple(n_folds=2, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='lecture_train')
orders_pd_train = abu_result_tuple_train.orders_pd
AbuMetricsBase.show_general(*abu_result_tuple_train, returns_cmp=True, only_info=True)
Explanation: Use load_abu_result_tuple to read the training-set data saved locally in Section 15:
End of explanation
class AbuUmpMainMul(AbuUmpMainBase, BuyUmpMixin):
    """Example of training a new main umpire from a different perspective: an AbuUmpMainBase subclass mixing in BuyUmpMixin, making it a buy-side ump class"""

    class UmpMulFiter(AbuMLPd):
        @ump.ump_main_make_xy
        def make_xy(self, **kwarg):
            # regex='result|buy_deg_ang21|buy_price_rank252|buy_wave_score3|buy_atr_std'
            regex = 'result|{}|{}|{}|{}'.format(
                feature.AbuFeatureDeg().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1],
                feature.AbuFeaturePrice().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1],
                feature.AbuFeatureWave().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1],
                feature.AbuFeatureAtr().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1])
            # noinspection PyUnresolvedReferences
            mul_df = self.order_has_ret.filter(regex=regex)
            return mul_df

    def get_predict_col(self):
        """
        Mixed feature keys for the main umpire: ['buy_deg_ang21', 'buy_price_rank252', 'buy_wave_score3', 'buy_atr_std']
        :return: ['buy_deg_ang21', 'buy_price_rank252', 'buy_wave_score3', 'buy_atr_std']
        """
        return [feature.AbuFeatureDeg().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1],
                feature.AbuFeaturePrice().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1],
                feature.AbuFeatureWave().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1],
                feature.AbuFeatureAtr().get_feature_ump_keys(ump_cls=AbuUmpMainMul)[-1]]

    def get_fiter_class(self):
        """
        AbuMLPd subclass returned by the mixed-feature main umpire: AbuUmpMainMul.UmpMulFiter
        :return: AbuUmpMainMul.UmpMulFiter
        """
        return AbuUmpMainMul.UmpMulFiter

    @classmethod
    def class_unique_id(cls):
        """
        Unique keyword name for this concrete ump class, a classmethod: return 'mul_main'
        Intended mainly for user-defined umps; the user must guarantee class_unique_id is unique — it is not checked internally
        See the extend_ump_block method in ABuUmpManager for usage
        """
        return 'mul_main'


# Import AbuUmpMainMul via import: on Windows, once parallel execution starts,
# classes defined inside the IPython notebook cannot be found in child processes
from abupy import AbuUmpMainMul
Explanation: 1. Training a new main umpire from a different perspective
The example below composes the umpire AbuUmpMainMul from a different perspective: it watches the short-term 21-day trend angle, the long-term one-year price rank, long-term wave volatility, and ATR volatility, as follows:
End of explanation
ump_mul = AbuUmpMainMul.ump_main_clf_dump(orders_pd_train, p_ncs=slice(20, 40, 1))
ump_mul.fiter.df.head()
Explanation: As shown above, this completes a brand-new main ump:
A main ump must subclass AbuUmpMainBase; a buy-side ump mixes in BuyUmpMixin
Write an inner class subclassing AbuMLPd and implement make_xy, i.e. filter the features you care about out of the training-set data
Implement get_predict_col, returning the names of the features you care about
Implement get_fiter_class, returning the inner AbuMLPd subclass
Implement class_unique_id, giving the umpire a unique name
Note: feature.AbuFeatureDeg() and friends are abupy's built-in feature classes; custom feature classes are covered shortly
With the new umpire AbuUmpMainMul in hand, train it on the same training set using ump_main_clf_dump, as in Section 16:
End of explanation
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# add the new umpire class AbuUmpMainMul to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpMainMul)
Explanation: The four features shown are the decision perspectives and behaviors the new main umpire AbuUmpMainMul cares about.
The example below shows how to use a custom umpire:
End of explanation
abu_result_tuple_test_ump_main_user, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_main_user, returns_cmp=True, only_info=True)
Explanation: The flow for adding the new umpire to the system:
Turn on the user-defined umpire switch: ump.manager.g_enable_user_ump = True
Add the new umpire class AbuUmpMainMul to the system with append_user_ump; the trained object ump_mul above also works
Note: the umpire passed to append_user_ump may be a class name or a class instance, but it must have been trained with ump_main_clf_dump.
Next, backtest the test-set trades with the new umpire AbuUmpMainMul:
End of explanation
# turn off the user-defined umpire switch
ump.manager.g_enable_user_ump = False
abu_result_tuple_test, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)
Explanation: For comparison, run a backtest metric on the test set without any main-umpire interception:
End of explanation
# enable the built-in gap (jump) main umpire
abupy.env.g_enable_ump_main_jump_block = True
# enable the built-in price main umpire
abupy.env.g_enable_ump_main_price_block = True
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# add the new umpire class AbuUmpMainMul to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpMainMul)
abu_result_tuple_test_ump_builtin_and_user, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_builtin_and_user, returns_cmp=True, only_info=True)
Explanation: With the new umpire the win rate and profit/loss ratio both improve slightly; a few built-in umpires can be added to decide together, as in the example below:
End of explanation
class AbuUmpEdgeMul(AbuUmpEdgeBase, BuyUmpMixin):
    """Example of training a new edge umpire from a different perspective: an AbuUmpEdgeBase subclass mixing in BuyUmpMixin, making it a buy-side ump class"""

    class UmpMulFiter(AbuMLPd):
        @ump.ump_edge_make_xy
        def make_xy(self, **kwarg):
            filter_list = ['profit', 'profit_cg']
            # ['profit', 'profit_cg', 'buy_deg_ang21', 'buy_price_rank252', 'buy_wave_score3', 'buy_atr_std']
            filter_list.extend(
                [feature.AbuFeatureDeg().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1],
                 feature.AbuFeaturePrice().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1],
                 feature.AbuFeatureWave().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1],
                 feature.AbuFeatureAtr().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1]])
            mul_df = self.order_has_ret.filter(filter_list)
            return mul_df

    def get_predict_col(self):
        """
        Mixed feature keys for the edge umpire: ['buy_deg_ang21', 'buy_price_rank252', 'buy_wave_score3', 'buy_atr_std']
        :return: ['buy_deg_ang21', 'buy_price_rank252', 'buy_wave_score3', 'buy_atr_std']
        """
        return [feature.AbuFeatureDeg().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1],
                feature.AbuFeaturePrice().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1],
                feature.AbuFeatureWave().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1],
                feature.AbuFeatureAtr().get_feature_ump_keys(ump_cls=AbuUmpEdgeMul)[-1]]

    def get_fiter_class(self):
        """
        AbuMLPd subclass returned by the mixed-feature edge umpire: AbuUmpEdgeMul.UmpMulFiter
        :return: AbuUmpEdgeMul.UmpMulFiter
        """
        return AbuUmpEdgeMul.UmpMulFiter

    @classmethod
    def class_unique_id(cls):
        """
        Unique keyword name for this concrete ump class, a classmethod: return 'mul_edge'
        Intended mainly for user-defined umps; the user must guarantee class_unique_id is unique — it is not checked internally
        See the extend_ump_block method in ABuUmpManager for usage
        """
        return 'mul_edge'


# Import AbuUmpEdgeMul via import: on Windows, once parallel execution starts,
# classes defined inside the IPython notebook cannot be found in child processes
from abupy import AbuUmpEdgeMul
Explanation: 2. Training a new edge umpire from a different perspective
Similar to the main umpire, the example below composes the edge umpire AbuUmpEdgeMul from a different perspective: the short-term 21-day trend angle, the long-term one-year price rank, long-term wave volatility, and ATR volatility, as follows:
End of explanation
edge_mul = AbuUmpEdgeMul.ump_edge_clf_dump(orders_pd_train)
edge_mul.fiter.df.head()
Explanation: As shown above, this completes a brand-new edge ump, very similar to the main-umpire implementation:
An edge ump must subclass AbuUmpEdgeBase; a buy-side ump mixes in BuyUmpMixin
Write an inner class subclassing AbuMLPd and implement make_xy, i.e. filter the features you care about out of the training-set data
Implement get_predict_col, returning the names of the features you care about
Implement get_fiter_class, returning the inner AbuMLPd subclass
Implement class_unique_id, giving the umpire a unique name
Note: feature.AbuFeatureDeg() and friends are abupy's built-in feature classes; custom feature classes are covered shortly
With the new edge umpire AbuUmpEdgeMul in hand, train it on the same training set using ump_edge_clf_dump, as in Section 17:
End of explanation
# clear the user-defined umpires
ump.manager.clear_user_ump()
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# add the new umpire class AbuUmpEdgeMul to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpEdgeMul)
abu_result_tuple_test_ump_user_edge, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_user_edge, returns_cmp=True, only_info=True)
Explanation: The features shown are the decision perspectives and behaviors the new edge umpire AbuUmpEdgeMul cares about.
The flow for adding the new edge umpire to the system:
Turn on the user-defined umpire switch: ump.manager.g_enable_user_ump = True
Add the new umpire class AbuUmpEdgeMul to the system with append_user_ump; the trained object edge_mul above also works
The flow matches the main umpire's; here ump.manager.clear_user_ump() is called first to clear the custom umpires, i.e. to remove the custom main umpire added to the system earlier.
Note: the umpire passed to append_user_ump may be a class name or a class instance, but an edge umpire must have been trained with ump_edge_clf_dump.
End of explanation
# enable the built-in price edge umpire
abupy.env.g_enable_ump_edge_price_block = True
# enable the built-in wave edge umpire
abupy.env.g_enable_ump_edge_wave_block = True
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# add the new umpire class AbuUmpEdgeMul to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpEdgeMul)
abu_result_tuple_test_ump_builtin_and_user_edge, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_builtin_and_user_edge, returns_cmp=True, only_info=True)
Explanation: The win rate and profit/loss ratio with the new edge umpire alone are not ideal; a few built-in edge umpires can be added to decide together, as in the example below:
End of explanation
class AbuFeatureDegExtend(AbuFeatureBase, BuyFeatureMixin, SellFeatureMixin):
    """Example of adding a new perspective for recording the game: angle features, supporting both buys and sells"""

    def __init__(self):
        """10, 30, 50, 90, 120-day trend-angle features"""
        # wrap in a frozenset: once fixed it must not change, or the features will no longer line up
        self.deg_keys = frozenset([10, 30, 50, 90, 120])

    def get_feature_keys(self, buy_feature):
        """
        Iteratively generate the column names of all trend-angle features, using feature_prefix to distinguish the buy/sell key prefix
        :param buy_feature: whether this constructs buy-side features (bool)
        :return: the key sequence of the angle-feature dict
        """
        return ['{}deg_ang{}'.format(self.feature_prefix(buy_feature=buy_feature), dk) for dk in self.deg_keys]

    def calc_feature(self, kl_pd, combine_kl_pd, day_ind, buy_feature):
        """
        Build fitted-angle features from the financial time series at buy or sell time plus the trading-day info
        :param kl_pd: financial time series of the timing phase
        :param combine_kl_pd: financial time series combining the year before the timing phase
        :param day_ind: time index at which the trade occurred, i.e. corresponding to self.kl_pd.key
        :param buy_feature: whether this constructs buy-side features (bool)
        :return: dict of the constructed angle features
        """
        # the angle-feature dict to return
        deg_dict = {}
        for dk in self.deg_keys:
            # iterate the preset angle periods, computing and building the features
            if day_ind - dk >= 0:
                # the timing series is long enough to extract the feature: slice the period's closes from kl_pd
                deg_close = kl_pd[day_ind - dk + 1:day_ind + 1].close
            else:
                # otherwise fall back to combine_kl_pd: first slice the series up to day_ind
                combine_kl_pd = combine_kl_pd.loc[:kl_pd.index[day_ind]]
                # if combine_kl_pd is longer than the feature period -> take combine_kl_pd[-dk:].close, else all of its closes
                deg_close = combine_kl_pd[-dk:].close if combine_kl_pd.shape[0] > dk else combine_kl_pd.close
            # compute the trend-fit angle from the sliced closes deg_close via calc_regress_deg
            ang = ABuRegUtil.calc_regress_deg(deg_close, show=False)
            # normalize the fitted-angle value
            ang = 0 if np.isnan(ang) else round(ang, 3)
            # add the period key and its fitted-angle value to the feature dict
            deg_dict['{}deg_ang{}'.format(self.feature_prefix(buy_feature=buy_feature), dk)] = ang
        return deg_dict


# Import AbuFeatureDegExtend via import: on Windows, once parallel execution starts,
# classes defined inside the IPython notebook cannot be found in child processes
from abupy import AbuFeatureDegExtend
Explanation: All the umpires completed above use abupy's built-in feature classes. If you want umpires that can decide from some new perspective or behavior, you must add a new perspective for recording the game (backtest trades). What follows demonstrates defining a new custom feature class (a new perspective) and training custom umpires on it:
3. Adding a new perspective for recording the game (recording backtest features)
The example below shows how to add a new recording perspective by writing AbuFeatureDegExtend. It closely resembles the features the built-in angle main umpire uses, also recording fitted-angle features at buy and sell time: the main umpire records 21, 42, 60 and 252-day trend-fit angles, while this example records 10, 30, 50, 90 and 120-day trend-fit angle features, as shown below:
End of explanation
feature.append_user_feature(AbuFeatureDegExtend)
Explanation: As shown above, this completes a new perspective for recording the game (backtest trades), i.e. a user-defined feature class:
A feature class must subclass AbuFeatureBase; mix in BuyFeatureMixin to support buy features and SellFeatureMixin to support sell features — this example supports both, so it mixes in both
Implement get_feature_keys, returning the custom feature column names
Implement calc_feature, which computes the concrete feature values from the financial time-series data passed in as parameters
Note: for more examples of writing feature classes, read the relevant source code in ABuMLFeature
Now that the new feature class AbuFeatureDegExtend exists, first add it to the system with feature.append_user_feature:
End of explanation
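The fit-angle feature above leans on ABuRegUtil.calc_regress_deg. As a rough standalone sketch of what such a helper computes — note this is a simplified stand-in under the assumption of a plain linear fit on raw closing prices (the real abupy helper may normalize prices first, so its angle values can differ):

```python
import numpy as np

def calc_regress_deg(close):
    # fit a straight line y = slope * x + b over the closing-price sequence
    # and return the slope expressed as an angle in degrees
    x = np.arange(len(close))
    slope = np.polyfit(x, np.asarray(close, dtype=float), 1)[0]
    return float(np.rad2deg(np.arctan(slope)))

print(calc_regress_deg([0, 1, 2, 3]))   # 45.0: price rises one unit per step
print(calc_regress_deg([3, 2, 1, 0]))   # -45.0: falling trend
```

A positive angle means an upward-sloping fitted trend over the window, a negative angle a downward one.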
# disable the built-in gap (jump) main umpire
abupy.env.g_enable_ump_main_jump_block = False
# disable the built-in price main umpire
abupy.env.g_enable_ump_main_price_block = False
# disable the built-in angle main umpire
abupy.env.g_enable_ump_main_deg_block = False
# disable the built-in wave main umpire
abupy.env.g_enable_ump_main_wave_block = False
# disable the built-in price edge umpire
abupy.env.g_enable_ump_edge_price_block = False
# disable the built-in wave edge umpire
abupy.env.g_enable_ump_edge_wave_block = False
# disable the built-in angle edge umpire
abupy.env.g_enable_ump_edge_deg_block = False
# disable the built-in full (combined) edge umpire
abupy.env.g_enable_ump_edge_full_block = False
# turn off the user-defined umpire switch
ump.manager.g_enable_user_ump = False
abu_result_tuple_train_deg_extend, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=train_choice_symbols)
ABuProgress.clear_output()
orders_pd_train_deg_extend = abu_result_tuple_train_deg_extend.orders_pd
AbuMetricsBase.show_general(*abu_result_tuple_train_deg_extend, returns_cmp=True ,only_info=True)
Explanation: With the new feature class in the system, run a backtest using the same settings as the Section 15 backtest, as shown below:
End of explanation
orders_pd_train_deg_extend.filter(regex='buy_*deg').head()
Explanation: The backtest metrics turn out exactly the same as before; the difference is that orders_pd_train_deg_extend now carries the new features. Filter out all the fitted-angle features as follows:
End of explanation
orders_pd_train_deg_extend.filter(AbuFeatureDegExtend().get_feature_keys(True) +
AbuFeatureDegExtend().get_feature_keys(False)).head()
Explanation: Besides the 21, 42, 60 and 252-day fitted angles the main umpire already used, 10, 30, 50, 90 and 120-day fitted-angle features have been added. Next filter out all of the AbuFeatureDegExtend features and take a look:
End of explanation
class AbuUmpMainDegExtend(AbuUmpMainBase, BuyUmpMixin):
    """Main umpire deciding trades from the new perspective: an AbuUmpMainBase subclass mixing in BuyUmpMixin, making it a buy-side ump class"""

    class UmpExtendFeatureFiter(AbuMLPd):
        @ump.ump_main_make_xy
        def make_xy(self, **kwarg):
            # get_feature_ump_keys only needs the current class name; it returns the buy- or
            # sell-side feature columns depending on whether this is a buy ump or a sell ump
            col = AbuFeatureDegExtend().get_feature_ump_keys(ump_cls=AbuUmpMainDegExtend)
            regex = 'result|{}'.format('|'.join(col))
            extend_deg_df = self.order_has_ret.filter(regex=regex)
            return extend_deg_df

    def get_predict_col(self):
        # get_feature_ump_keys only needs the current class name; it returns the matching feature columns
        col = AbuFeatureDegExtend().get_feature_ump_keys(ump_cls=AbuUmpMainDegExtend)
        return col

    def get_fiter_class(self):
        return AbuUmpMainDegExtend.UmpExtendFeatureFiter

    @classmethod
    def class_unique_id(cls):
        return 'extend_main_deg'


# Import AbuUmpMainDegExtend via import: on Windows, once parallel execution starts,
# classes defined inside the IPython notebook cannot be found in child processes
from abupy import AbuUmpMainDegExtend
Explanation: Because AbuFeatureDegExtend mixes in both BuyFeatureMixin and SellFeatureMixin, it generates trend-angle features for both buys and sells. The buy features are the ones the main and edge umpires used in earlier sections; so far only buy-feature-based decision interception has been covered, and interception of sell-side trades will be demonstrated in full in a later section.
4. The main umpire decides trades from the new perspective
Next, write a main umpire that uses AbuFeatureDegExtend:
End of explanation
# first train the new umpire AbuUmpMainDegExtend; note that training must use orders_pd_train_deg_extend,
# not the earlier orders_pd_train, which lacks the new feature columns
ump_deg_extend = AbuUmpMainDegExtend.ump_main_clf_dump(orders_pd_train_deg_extend, p_ncs=slice(20, 40, 1))
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# clear any previously added user umpires first
ump.manager.clear_user_ump()
# add the new umpire class AbuUmpMainDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpMainDegExtend)
ump_deg_extend.fiter.df.head()
abu_result_tuple_test_ump_extend_deg, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_extend_deg, returns_cmp=True ,only_info=True)
proxy = AbuOrderPdProxy(abu_result_tuple_test.orders_pd)
with proxy.proxy_work(abu_result_tuple_test_ump_extend_deg.orders_pd) as (order1, order2):
    block_order = order1 - order2
    print('losing trades correctly blocked: {}, trades wrongly blocked: {}'.format(
        block_order.result.value_counts()[-1], block_order.result.value_counts()[1]))
Explanation: Writing AbuUmpMainDegExtend is much like before. Next run a test-set backtest with it — do not forget to train the umpire first and then add it to the system with append_user_ump:
End of explanation
class AbuUmpEegeDegExtend(AbuUmpEdgeBase, BuyUmpMixin):
    """Edge umpire deciding trades from the new perspective: an AbuUmpEdgeBase subclass mixing in BuyUmpMixin, making it a buy-side ump class"""

    class UmpExtendEdgeFiter(AbuMLPd):
        @ump.ump_edge_make_xy
        def make_xy(self, **kwarg):
            filter_list = ['profit', 'profit_cg']
            col = AbuFeatureDegExtend().get_feature_ump_keys(ump_cls=AbuUmpEegeDegExtend)
            filter_list.extend(col)
            mul_df = self.order_has_ret.filter(filter_list)
            return mul_df

    def get_predict_col(self):
        # get_feature_ump_keys only needs the current class name; it returns the buy- or
        # sell-side feature columns depending on whether this is a buy ump or a sell ump
        col = AbuFeatureDegExtend().get_feature_ump_keys(ump_cls=AbuUmpEegeDegExtend)
        return col

    def get_fiter_class(self):
        return AbuUmpEegeDegExtend.UmpExtendEdgeFiter

    @classmethod
    def class_unique_id(cls):
        return 'extend_edge_deg'


# Import AbuUmpEegeDegExtend via import: on Windows, once parallel execution starts,
# classes defined inside the IPython notebook cannot be found in child processes
from abupy import AbuUmpEegeDegExtend
Explanation: As shown, 15 trades were intercepted, 11 of them correctly — a fairly high interception accuracy of 73%.
5. The edge umpire decides trades from the new perspective
Next, write an edge umpire that uses AbuFeatureDegExtend:
End of explanation
# first train the new umpire AbuUmpEegeDegExtend; note that training must use orders_pd_train_deg_extend,
# not the earlier orders_pd_train, which lacks the new feature columns
ump_deg_edge_extend = AbuUmpEegeDegExtend.ump_edge_clf_dump(orders_pd_train_deg_extend)
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# clear any previously added user umpires first
ump.manager.clear_user_ump()
# add the new umpire class AbuUmpEegeDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpEegeDegExtend)
ump_deg_edge_extend.fiter.df.head()
abu_result_tuple_test_ump_edge_extend_deg, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_edge_extend_deg, returns_cmp=True ,only_info=True)
proxy = AbuOrderPdProxy(abu_result_tuple_test.orders_pd)
with proxy.proxy_work(abu_result_tuple_test_ump_edge_extend_deg.orders_pd) as (order1, order2):
    block_order = order1 - order2
    print('losing trades correctly blocked: {}, trades wrongly blocked: {}'.format(
        block_order.result.value_counts()[-1], block_order.result.value_counts()[1]))
Explanation: Writing AbuUmpEegeDegExtend is much like before. Next run a test-set backtest with it — again, train the umpire first and then add it to the system with append_user_ump:
End of explanation
# turn on the user-defined umpire switch
ump.manager.g_enable_user_ump = True
# clear any previously added user umpires first
ump.manager.clear_user_ump()
# add the new umpire class AbuUmpEegeDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpEegeDegExtend)
# add the new umpire class AbuUmpMainDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpMainDegExtend)
# enable the built-in angle edge umpire
abupy.env.g_enable_ump_edge_deg_block = True
# enable the built-in angle main umpire
abupy.env.g_enable_ump_main_deg_block = True
abu_result_tuple_test_ump_end, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=test_choice_symbols)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple_test_ump_end, returns_cmp=True ,only_info=True)
Explanation: As shown, 12 trades were intercepted, 9 of them correctly — a fairly high interception accuracy of 75%.
Finally, run a backtest in which the newly written main and edge trend-angle extension umpires decide together with the built-in angle main umpire and the built-in angle edge umpire:
End of explanation |
Description:
Station Plot with Layout
Make a station plot, complete with sky cover and weather symbols, using a
station plot layout built into MetPy.
The station plot itself is straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
The StationPlotLayout class is used to standardize the plotting various parameters
(i.e. temperature), keeping track of the location, formatting, and even the units for use in
the station plot. This makes it easy (if using standardized names) to re-use a given layout
of a station plot.
Step1: The setup
First read in the data. We use pandas.read_csv to read in the data, selecting and naming
the columns of interest and treating -99999 as missing. This lets us handle the columns
with string data cleanly.
Step2: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
Step3: Next grab the simple variables out of the data we have (attaching correct units), and
put them into a dictionary that we will hand the plotting function later
Step4: Notice that the names (the keys) in the dictionary are the same as those that the
layout is expecting.
Now perform a few conversions
Step5: All the data wrangling is finished, just need to set up plotting and go
Step6: The payoff
Step7: or instead, a custom layout can be used | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import pandas as pd
from metpy.calc import get_wind_components
from metpy.cbook import get_test_data
from metpy.plots import (add_metpy_logo, simple_layout, StationPlot,
StationPlotLayout, wx_code_map)
from metpy.units import units
Explanation: Station Plot with Layout
Make a station plot, complete with sky cover and weather symbols, using a
station plot layout built into MetPy.
The station plot itself is straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
The StationPlotLayout class is used to standardize the plotting various parameters
(i.e. temperature), keeping track of the location, formatting, and even the units for use in
the station plot. This makes it easy (if using standardized names) to re-use a given layout
of a station plot.
End of explanation
with get_test_data('station_data.txt') as f:
data_arr = pd.read_csv(f, header=0, usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
names=['stid', 'lat', 'lon', 'slp', 'air_temperature',
'cloud_fraction', 'dew_point_temperature', 'weather',
'wind_dir', 'wind_speed'],
na_values=-99999)
data_arr.set_index('stid', inplace=True)
Explanation: The setup
First read in the data. We use pandas.read_csv to read in the data, selecting and naming
the columns of interest and treating -99999 as missing. This lets us handle the columns
with string data cleanly.
End of explanation
# Pull out these specific stations
selected = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data_arr = data_arr.loc[selected]
# Drop rows with missing winds
data_arr = data_arr.dropna(how='any', subset=['wind_dir', 'wind_speed'])
# First, look at the names of variables that the layout is expecting:
simple_layout.names()
Explanation: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
End of explanation
# This is our container for the data
data = {}
# Copy out to stage everything together. In an ideal world, this would happen on
# the data reading side of things, but we're not there yet.
data['longitude'] = data_arr['lon'].values
data['latitude'] = data_arr['lat'].values
data['air_temperature'] = data_arr['air_temperature'].values * units.degC
data['dew_point_temperature'] = data_arr['dew_point_temperature'].values * units.degC
data['air_pressure_at_sea_level'] = data_arr['slp'].values * units('mbar')
Explanation: Next grab the simple variables out of the data we have (attaching correct units), and
put them into a dictionary that we will hand the plotting function later:
End of explanation
# Get the wind components, converting from m/s to knots as will be appropriate
# for the station plot
u, v = get_wind_components(data_arr['wind_speed'].values * units('m/s'),
data_arr['wind_dir'].values * units.degree)
data['eastward_wind'], data['northward_wind'] = u, v
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).fillna(10).values.astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = data_arr['weather'].fillna('')
data['present_weather'] = [wx_code_map[s.split()[0] if ' ' in s else s] for s in wx_text]
Explanation: Notice that the names (the keys) in the dictionary are the same as those that the
layout is expecting.
Now perform a few conversions:
Get wind components from speed and direction
Convert cloud fraction values to integer codes [0 - 8]
Map METAR weather codes to WMO codes for weather symbols
End of explanation
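The first conversion above can be sketched without MetPy. This mirrors the standard meteorological convention that `get_wind_components` follows (an assumption here, based on the convention that wind direction is the direction the wind blows *from*):

```python
import numpy as np

def wind_components(speed, wind_dir_deg):
    # meteorological convention: wind_dir is the direction the wind blows FROM,
    # so a northerly wind (0 deg) has u = 0 and a negative (southward) v
    rad = np.deg2rad(wind_dir_deg)
    u = -speed * np.sin(rad)   # eastward component
    v = -speed * np.cos(rad)   # northward component
    return u, v

u, v = wind_components(10.0, 270.0)   # wind from the west
# u is about 10 (air pushed eastward), v is about 0
```

MetPy handles the units for you in addition to this arithmetic, which is why the notebook attaches `units('m/s')` and `units.degree` before calling it.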
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
Explanation: All the data wrangling is finished, just need to set up plotting and go:
Set up the map projection and set up a cartopy feature for state borders
End of explanation
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
simple_layout.plot(stationplot, data)
plt.show()
Explanation: The payoff
End of explanation
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted
# out to Fahrenheit tenths. Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',
color='darkgreen')
# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)
plt.show()
Explanation: or instead, a custom layout can be used:
End of explanation |
Description:
1.0 Load data from http
Step1: 1.2 Calculate Actual Profit
Step2: 1.3 Load data from 'Calories' worksheet and plot
Step3: 1.4 add calorie data to sales worksheet
Step4: 1.5 pivot table
Step5: 1.6 pivot table | Python Code:
# code written in python_3. (for py_2.7 users some changes may be required)
import pandas # load pandas dataframe lib
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import numpy as np
# find path to your Concessions.xlsx
# df = short for dataframe == excel worksheet
# zero indexing in python, so first worksheet = 0
df_sales = pandas.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch01_complete/Concessions.xlsx','rb'), sheetname=0)
df_sales = df_sales.iloc[0:, 0:4]
df_sales.head() # use .head() to just show top 4 results
df_sales.dtypes # explore the dataframe
df_sales['Item'].head() # how to select a col
df_sales['Price'].describe() # basic stats
Explanation: 1.0 Load data from http://media.wiley.com/product_ancillary/6X/11186614/DOWNLOAD/ch01.zip, Concessions.xlsx
End of explanation
df_sales = df_sales.assign(Actual_Profit = df_sales['Price']*df_sales['Profit']) # adds new col
df_sales.head()
Explanation: 1.2 Calculate Actual Profit
End of explanation
# find path to your Concessions.xlsx
df_cals = pandas.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch01_complete/Concessions.xlsx','rb'), sheetname=1)
df_cals = df_cals.iloc[0:14, 0:2] # take data from 'Calories' worksheet
df_cals.head()
df_cals = df_cals.set_index('Item') # index df by items
# Items ranked by calories = .sort_values(by='Calories',ascending=True)
# rot = axis rotation
ax = df_cals.sort_values(by='Calories',ascending=True).plot(kind='bar', title ="Calories",figsize=(15,5),legend=False, fontsize=10, alpha=0.75, rot=20,)
plt.xlabel("") # no x-axis lable
plt.show()
Explanation: 1.3 Load data from 'Calories' worksheet and plot
End of explanation
df_sales = df_sales.assign(Calories=df_sales['Item'].map(df_cals['Calories'])) # map num calories from df_cals per item in df_sales (==Vlookup)
df_sales.head()
Explanation: 1.4 add calorie data to sales worksheet
End of explanation
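The `Series.map` call above is the pandas analogue of Excel's VLOOKUP: values in one column are looked up against the index of another Series. A tiny sketch with toy data (not the workbook's figures) shows the mechanism:

```python
import pandas as pd

# a lookup Series indexed by item name plays the role of the VLOOKUP table
sales = pd.DataFrame({'Item': ['Popcorn', 'Soda', 'Popcorn']})
calories = pd.Series({'Popcorn': 400, 'Soda': 150})

# each 'Item' value is matched against the index of `calories`
sales = sales.assign(Calories=sales['Item'].map(calories))
print(sales['Calories'].tolist())   # [400, 150, 400]
```

Items missing from the lookup Series would come back as NaN, just as an unmatched VLOOKUP returns #N/A.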
pivot = pandas.pivot_table(df_sales, index=["Item"], values=["Price"], aggfunc=len) # len == 'count of price'
pivot.columns = ['Count'] # renames col
pivot.index.name = None # removes intex title which is not needed
pivot
Explanation: 1.5 pivot table: number of sales per item
End of explanation
# revenue = price * number of sales
pivot = pandas.pivot_table(df_sales, index=["Item"], values=["Price"], columns=["Category"], aggfunc=np.sum, fill_value='')
pivot.index.name = None
pivot.columns = pivot.columns.get_level_values(1) # sets cols to product categories
pivot
# set up decision variables
items = df_cals.index.tolist()
items
cost = dict(zip(df_cals.index, df_cals.Calories)) # calarific cost of each item
cost
from pulp import *
# create the LinProg object, set up as a minimisation problem
prob = pulp.LpProblem('Diet', pulp.LpMinimize)
vars = LpVariable.dicts("Number of",items, lowBound = 0, cat='Integer')
# objective function: minimise the total number of items
prob += lpSum(vars[c] for c in items)
# add constraint: the chosen items must total exactly 2400 calories
prob += (lpSum([cost[c]*vars[c] for c in items]) == 2400)
print(prob)
prob.solve()
# Is the solution optimal?
print("Status:", LpStatus[prob.status])
# Each of the variables is printed with its value
for v in prob.variables():
    print(v.name, "=", v.varValue)
# The optimised objective function value is printed to the screen
print("Minimum Number of Items = ", value(prob.objective))
Explanation: 1.6 pivot table: revenue per item / category
End of explanation |
Description:
Now that we have created and saved a configuration file, let’s read it back and explore the data it holds.
Step1: Please note that default values have precedence over fallback values. For instance, in our example the 'CompressionLevel' key was specified only in the 'DEFAULT' section. If we try to get it from the section 'topsecret.server.com', we will always get the default, even if we specify a fallback
Step2: The same fallback argument can be used with the getint(), getfloat() and getboolean() methods, for example | Python Code:
config = configparser.ConfigParser()
config.sections()
config.read('example.ini')
config.sections()
'bitbucket.org' in config
'bytebong.com' in config
config['bitbucket.org']['User']
config['DEFAULT']['Compression']
topsecret = config['topsecret.server.com']
topsecret['ForwardX11']
topsecret['Port']
for key in config['bitbucket.org']:
print(key)
config['bitbucket.org']['ForwardX11']
type(topsecret['Port'])
type(int(topsecret['Port']))
type(topsecret.getint('Port'))
type(topsecret.getfloat('Port'))
int(topsecret['Port']) - 22.0
int(topsecret['Port']) - 22
try:
topsecret.getint('ForwardX11')
except ValueError:
print(True)
topsecret.getboolean('ForwardX11')
config['bitbucket.org'].getboolean('ForwardX11')
config.getboolean('bitbucket.org', 'Compression')
topsecret.get('Port')
topsecret.get('CompressionLevel')
topsecret.get('Cipher')
topsecret.get('Cipher', '3des-cbc')
Explanation: Now that we have created and saved a configuration file, let’s read it back and explore the data it holds.
End of explanation
topsecret.get('CompressionLevel', '3')
Explanation: Please note that default values have precedence over fallback values. For instance, in our example the 'CompressionLevel' key was specified only in the 'DEFAULT' section. If we try to get it from the section 'topsecret.server.com', we will always get the default, even if we specify a fallback:
End of explanation
'BatchMode' in topsecret
topsecret.getboolean('BatchMode', fallback=True)
config['DEFAULT']['BatchMode'] = 'no'
topsecret.getboolean('BatchMode', fallback=True)
Explanation: The same fallback argument can be used with the getint(), getfloat() and getboolean() methods, for example:
End of explanation |
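The record above reads example.ini without showing how it was written; that file comes from the earlier step of the configparser quickstart. A sketch consistent with every lookup above (e.g. Port 50022, ForwardX11 = no, CompressionLevel only in DEFAULT):

```python
import configparser

config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
                     'Compression': 'yes',
                     'CompressionLevel': '9',
                     'ForwardX11': 'yes'}
config['bitbucket.org'] = {'User': 'hg'}
config['topsecret.server.com'] = {'Port': '50022',
                                  'ForwardX11': 'no'}
with open('example.ini', 'w') as configfile:
    config.write(configfile)
```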
8,622 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Python PCA - Plotting Explained Variance Ratio with Matplotlib
| Python Code::
import matplotlib.pyplot as plt
plt.figure(figsize=(10,6))
plt.bar(x=range(0,len(X_train.columns)),
height=pca.explained_variance_ratio_,
tick_label=X_train.columns)
plt.title('Explained Variance Ratio')
plt.ylabel('Explained Variance Ratio')
plt.xlabel('Component')
plt.show()
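A common companion to the per-component bar chart above is the cumulative explained-variance curve, used to decide how many components to keep. A hedged sketch (a freshly fitted PCA on the bundled iris data stands in for the `pca` object above):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# fit a PCA on example data (stands in for the already-fitted `pca` above)
pca = PCA().fit(load_iris().data)
cumulative = np.cumsum(pca.explained_variance_ratio_)

plt.plot(range(1, len(cumulative) + 1), cumulative, marker='o')
plt.axhline(0.95, linestyle='--')  # a common "keep 95% of variance" threshold
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.title('Cumulative Explained Variance')
plt.show()
```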
|
8,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
"""Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
return image_data*0.8/255+0.1
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
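The one-liner in the solution above (`image_data*0.8/255+0.1`) is just the general Min-Max formula specialised to a=0.1, b=0.9, X_min=0, X_max=255. A quick numeric check of that equivalence:

```python
import numpy as np

def min_max(X, a=0.1, b=0.9, X_min=0.0, X_max=255.0):
    # general Min-Max scaling of X into [a, b]
    return a + (X - X_min) * (b - a) / (X_max - X_min)

X = np.array([0.0, 1.0, 128.0, 255.0])
assert np.allclose(min_max(X), X * 0.8 / 255 + 0.1)
print(min_max(X))  # endpoints map to 0.1 and 0.9
```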
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count,labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
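One caveat about the hand-rolled loss in the code above (`-tf.reduce_sum(labels * tf.log(prediction))`): when a predicted probability underflows to zero, `tf.log` returns -inf and the loss becomes NaN. Library softmax-cross-entropy ops avoid this with the log-sum-exp trick; a NumPy sketch of the stable form:

```python
import numpy as np

def stable_softmax_xent(logits, labels):
    # subtract the per-row max before exponentiating (log-sum-exp trick)
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_softmax).sum(axis=1).mean()

# extreme logits that would overflow a naive exp()
logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])
print(stable_softmax_xent(logits, labels))  # finite (near 0), not NaN
```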
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 3
learning_rate = 0.05
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
8,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Heatmap
The HeatMap mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance.
HeatMap is very similar to the GridHeatMap, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. GridHeatMap offers more control (interactions, selections), and is better suited for a smaller number of points.
Step1: Data Input
x is a 1d array, corresponding to the abscissas of the points (size N)
y is a 1d array, corresponding to the ordinates of the points (size M)
color is a 2d array, $\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M))
Scales must be defined for each attribute
Step2: Plotting a 2-dimensional function
This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$
Step3: Displaying an image
The HeatMap can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the color attribute | Python Code:
import numpy as np
from bqplot import Figure, LinearScale, ColorScale, Color, Axis, HeatMap, ColorAxis
from ipywidgets import Layout
Explanation: Heatmap
The HeatMap mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance.
HeatMap is very similar to the GridHeatMap, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. GridHeatMap offers more control (interactions, selections), and is better suited for a smaller number of points.
End of explanation
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
color = np.cos(X ** 2 + Y ** 2)
Explanation: Data Input
x is a 1d array, corresponding to the abscissas of the points (size N)
y is a 1d array, corresponding to the ordinates of the points (size M)
color is a 2d array, $\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M))
Scales must be defined for each attribute:
- a LinearScale, LogScale or OrdinalScale for x and y
- a ColorScale for color
End of explanation
x_sc, y_sc, col_sc = LinearScale(), LinearScale(), ColorScale(scheme="RdYlBu")
heat = HeatMap(x=x, y=y, color=color, scales={"x": x_sc, "y": y_sc, "color": col_sc})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation="vertical")
ax_c = ColorAxis(scale=col_sc)
fig = Figure(
marks=[heat],
axes=[ax_x, ax_y, ax_c],
title="Cosine",
layout=Layout(width="650px", height="650px"),
min_aspect_ratio=1,
max_aspect_ratio=1,
padding_y=0,
)
fig
Explanation: Plotting a 2-dimensional function
This is a visualization of the function $f(x, y) = \text{cos}(x^2+y^2)$
End of explanation
from scipy.misc import ascent
Z = ascent()
Z = Z[::-1, :]
aspect_ratio = Z.shape[1] / Z.shape[0]
col_sc = ColorScale(scheme="Greys", reverse=True)
scales = {"color": col_sc}
ascent = HeatMap(color=Z, scales=scales)
img = Figure(
title="Ascent",
marks=[ascent],
layout=Layout(width="650px", height="650px"),
min_aspect_ratio=aspect_ratio,
max_aspect_ratio=aspect_ratio,
padding_y=0,
)
img
Explanation: Displaying an image
The HeatMap can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the color attribute
End of explanation |
8,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Customization basics
Step2: Import TensorFlow
To get started, import the tensorflow module. As of TensorFlow 2.0, eager execution is turned on by default. This enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
Step3: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, tf.Tensor objects have a data type and a shape. Additionally, tf.Tensors can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce tf.Tensors. These operations automatically convert native Python types, for example
Step4: Each tf.Tensor has a shape and a datatype
Step5: The most obvious differences between NumPy arrays and tf.Tensors are
Step6: GPU acceleration
Many TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example
Step7: Device Names
The Tensor.device property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with GPU
Step9: Datasets
This section uses the tf.data.Dataset API to build a pipeline for feeding data to your model. The tf.data.Dataset API is used to build performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
Create a source Dataset
Create a source dataset using one of the factory functions like Dataset.from_tensors, Dataset.from_tensor_slices, or using objects that read from files like TextLineDataset or TFRecordDataset. See the TensorFlow Dataset guide for more information.
Step10: Apply transformations
Use the transformations functions like map, batch, and shuffle to apply transformations to dataset records.
Step11: Iterate
tf.data.Dataset objects support iteration to loop over records | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
from __future__ import absolute_import, division, print_function
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
Explanation: Customization basics: tensors and operations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/basics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This is an introductory TensorFlow tutorial shows how to:
Import the required package
Create and use tensors
Use GPU acceleration
Demonstrate tf.data.Dataset
End of explanation
import tensorflow as tf
Explanation: Import TensorFlow
To get started, import the tensorflow module. As of TensorFlow 2.0, eager execution is turned on by default. This enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
End of explanation
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
Explanation: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, tf.Tensor objects have a data type and a shape. Additionally, tf.Tensors can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce tf.Tensors. These operations automatically convert native Python types, for example:
End of explanation
x = tf.matmul([[1]], [[2, 3]])
print(x)
print(x.shape)
print(x.dtype)
Explanation: Each tf.Tensor has a shape and a datatype:
End of explanation
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
Explanation: The most obvious differences between NumPy arrays and tf.Tensors are:
Tensors can be backed by accelerator memory (like GPU, TPU).
Tensors are immutable.
NumPy Compatibility
Converting between TensorFlow tf.Tensors and NumPy ndarrays is easy:
TensorFlow operations automatically convert NumPy ndarrays to Tensors.
NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors are explicitly converted to NumPy ndarrays using their .numpy() method. These conversions are typically cheap since the array and tf.Tensor share the underlying memory representation, if possible. However, sharing the underlying representation isn't always possible since the tf.Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion involves a copy from GPU to host memory.
End of explanation
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.experimental.list_physical_devices("GPU"))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
Explanation: GPU acceleration
Many TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example:
End of explanation
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.experimental.list_physical_devices("GPU"):
print("On GPU:")
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
Explanation: Device Names
The Tensor.device property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with GPU:<N> if the tensor is placed on the N-th GPU on the host.
Explicit Device Placement
In TensorFlow, placement refers to how individual operations are assigned (placed on) a device for execution. As mentioned, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation and copies tensors to that device, if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager, for example:
End of explanation
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3""")
ds_file = tf.data.TextLineDataset(filename)
Explanation: Datasets
This section uses the tf.data.Dataset API to build a pipeline for feeding data to your model. The tf.data.Dataset API is used to build performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
Create a source Dataset
Create a source dataset using one of the factory functions like Dataset.from_tensors, Dataset.from_tensor_slices, or using objects that read from files like TextLineDataset or TFRecordDataset. See the TensorFlow Dataset guide for more information.
End of explanation
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
Explanation: Apply transformations
Use the transformations functions like map, batch, and shuffle to apply transformations to dataset records.
End of explanation
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
Explanation: Iterate
tf.data.Dataset objects support iteration to loop over records:
End of explanation |
8,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Leverage and Outliers
Leverage
The influence of an individual data sample on the regression result can be assessed through leverage analysis.
Leverage quantifies the influence of a target value $y$ on its own predicted target $\hat{y}$. It is also called self-influence or self-sensitivity.
Leverage can be obtained with the get_influence method of the RegressionResults class.
weight vector
$$ w = (X^TX)^{-1} X^T y $$
$$ \hat{y} = X w = X((X^TX)^{-1} X^T y ) = ( X(X^TX)^{-1} X^T) y = Hy $$
leverage $h_{ii}$
$$ h_{ii}=(H)_{ii} $$
leverage properties
$$ 0 \leq h_{ii} \leq 1 $$
$$ \sum_i^N h_{ii} = 2 $$
Leverage measures how strongly a data point pulls the predicted value toward its own position.
If $h_{ii} \simeq 1$, then
$$ \hat{y} \simeq y $$
Step1: Outlier
Good Leverage Points
Data points with high leverage (influence) but small residuals (errors)
Bad Leverage Points = Outliers
Data points with both high leverage (influence) and large residuals (errors)
Step2: Influence
Cook's Distance
A composite measure combining the (normalized) residual and the leverage
$$ D_i = \frac{e_i^2}{\text{RSS}}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right] $$
Fox' Outlier Recommendation
$$ D_i > \dfrac{4}{N − 2} $$ | Python Code:
from sklearn.datasets import make_regression
X0, y, coef = make_regression(n_samples=100, n_features=1, noise=20, coef=True, random_state=1)
# add high-leverage points
X0 = np.vstack([X0, np.array([[4], [3]])])
X = sm.add_constant(X0)
y = np.hstack([y, [300, 150]])
plt.scatter(X0, y)
plt.show()
model = sm.OLS(pd.DataFrame(y), pd.DataFrame(X))
result = model.fit()
print(result.summary())
influence = result.get_influence()
hat = influence.hat_matrix_diag
plt.stem(hat)
plt.axis([-2, len(y)+2, 0, 0.2])
plt.show()
print("hat.sum() =", hat.sum())
plt.scatter(X0, y)
sm.graphics.abline_plot(model_results=result, ax=plt.gca())
idx = hat > 0.05
plt.scatter(X0[idx], y[idx], s=300, c="r", alpha=0.5)
plt.axis([-3, 5, -300, 400])
plt.show()
model2 = sm.OLS(y[:-1], X[:-1])
result2 = model2.fit()
plt.scatter(X0, y);
sm.graphics.abline_plot(model_results=result, c="r", linestyle="--", ax=plt.gca())
sm.graphics.abline_plot(model_results=result2, c="g", alpha=0.7, ax=plt.gca())
plt.plot(X0[-1], y[-1], marker='x', c="m", ms=20, mew=5)
plt.axis([-3, 5, -300, 400])
plt.legend(["before", "after"], loc="upper left")
plt.show()
model3 = sm.OLS(y[1:], X[1:])
result3 = model3.fit()
plt.scatter(X0, y);
sm.graphics.abline_plot(model_results=result, c="r", linestyle="--", ax=plt.gca())
sm.graphics.abline_plot(model_results=result3, c="g", alpha=0.7, ax=plt.gca())
plt.plot(X0[0], y[0], marker='x', c="m", ms=20, mew=5)
plt.axis([-3, 5, -300, 400])
plt.legend(["before", "after"], loc="upper left")
plt.show()
Explanation: Leverage and Outliers
Leverage
The influence of an individual data sample on the regression result can be assessed through leverage analysis.
Leverage quantifies the influence of a target value $y$ on its own predicted target $\hat{y}$. It is also called self-influence or self-sensitivity.
Leverage can be obtained with the get_influence method of the RegressionResults class.
weight vector
$$ w = (X^TX)^{-1} X^T y $$
$$ \hat{y} = X w = X((X^TX)^{-1} X^T y ) = ( X(X^TX)^{-1} X^T) y = Hy $$
leverage $h_{ii}$
$$ h_{ii}=(H)_{ii} $$
leverage properties
$$ 0 \leq h_{ii} \leq 1 $$
$$ \sum_i^N h_{ii} = 2 $$
Leverage measures how strongly a data point pulls the predicted value toward its own position.
If $h_{ii} \simeq 1$, then
$$ \hat{y} \simeq y $$
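These properties can be checked numerically. Below is a minimal pure-Python sketch with made-up data; for simple regression with an intercept the leverage has the closed form $h_{ii} = 1/n + (x_i-\bar{x})^2/\sum_j (x_j-\bar{x})^2$:

```python
# Closed-form leverages for simple linear regression (hypothetical data;
# the last point, x = 10, lies far from the others and gets high leverage).
x = [1.0, 2.0, 3.0, 4.0, 10.0]
n = len(x)
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
h = [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]

print([round(hi, 2) for hi in h])  # [0.38, 0.28, 0.22, 0.2, 0.92]
print(round(sum(h), 6))            # 2.0 -- the number of model parameters
```

Each leverage stays in $[0, 1]$ and the leverages sum to the number of model parameters, here 2.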
End of explanation
plt.figure(figsize=(10, 2))
plt.stem(result.resid)
plt.xlim([-2, len(y)+2])
plt.show()
sm.graphics.plot_leverage_resid2(result)
plt.show()
Explanation: Outlier
Good Leverage Points
Data points with high leverage (influence) but small residuals (errors)
Bad Leverage Points = Outliers
Data points with both high leverage (influence) and large residuals (errors)
End of explanation
sm.graphics.influence_plot(result, plot_alpha=0.3)
plt.show()
cooks_d2, pvals = influence.cooks_distance
fox_cr = 4 / (len(y) - 2)
idx = np.where(cooks_d2 > fox_cr)[0]
plt.scatter(X0, y)
plt.scatter(X0[idx], y[idx], s=300, c="r", alpha=0.5)
plt.axis([-3, 5, -300, 400])
from statsmodels.graphics import utils
utils.annotate_axes(range(len(idx)), idx, zip(X0[idx], y[idx]), [(-20,15)]*len(idx), size="large", ax=plt.gca())
plt.show()
idx = np.nonzero(result.outlier_test().iloc[:, -1].abs() < 0.9)[0]
plt.scatter(X0, y)
plt.scatter(X0[idx], y[idx], s=300, c="r", alpha=0.5)
plt.axis([-3, 5, -300, 400]);
utils.annotate_axes(range(len(idx)), idx, zip(X0[idx], y[idx]), [(-10,10)]*len(idx), size="large", ax=plt.gca())
plt.show()
plt.figure(figsize=(10, 2))
plt.stem(result.outlier_test().iloc[:, -1])
plt.show()
Explanation: Influence
Cook's Distance
A composite measure combining the (normalized) residual and the leverage
$$ D_i = \frac{e_i^2}{\text{RSS}}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right] $$
Fox' Outlier Recommendation
$$ D_i > \dfrac{4}{N − 2} $$
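A hedged, framework-free sketch of the recipe above (made-up data; the last point is a deliberate bad-leverage point), fitting a simple regression and flagging points whose $D_i$ exceeds the Fox threshold:

```python
# Cook's distance per the formula above, on hypothetical data.
x = [1.0, 2.0, 3.0, 4.0, 10.0]
y = [1.1, 1.9, 3.2, 3.9, 20.0]   # the last point does not follow the trend
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
rss = sum(e ** 2 for e in resid)
h = [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]
D = [(e ** 2 / rss) * (hi / (1.0 - hi) ** 2) for e, hi in zip(resid, h)]

fox = 4.0 / (n - 2)
print([i for i, d in enumerate(D) if d > fox])  # [4]
```

Only the bad-leverage point crosses the threshold, even though its raw residual is small: its leverage term $h_{ii}/(1-h_{ii})^2$ dominates.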
End of explanation |
8,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run the CCF analysis
You should have already cross-correlated all your spectra against a suitable model using Search.py. This notebook goes through the shifting and subtracting of the average CCF, and measuring the companion velocities.
Step1: Get and shift the Cross-correlation functions to the primary star rest frame
Run with T > 6000 first, to measure the primary star radial velocity in my CCFs. Then, run with T = 4000 or so to detect the companion.
Step2: Measure the companion RVs.
The measured CCF velocities are in the primary star rest frame + some $\Delta v$ caused by the mismatch between the modeled index of refraction and the real index of refraction at McDonald observatory. That is a constant though. So, we get
$v_m(t) = v_2(t) - v_1(t) + \Delta v$
and
$v_2(t) = v_m(t) + v_1(t) - \Delta v$
where $v_m(t)$ are the measured radial velocities in the residual CCFs. But all of that gets done in the MCMC fitting code. Let's just measure $v_m$ at as many times as possible.
Step3: Fix the dates to line up with the RV output file | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import CombineCCFs
import numpy as np
from astropy import units as u, constants
from HelperFunctions import Gauss, integral
import os
import lmfit
import emcee
import triangle
from scipy.interpolate import InterpolatedUnivariateSpline as spline
sns.set_context('paper', font_scale=2.0)
home = os.environ['HOME']
Explanation: Run the CCF analysis
You should have already cross-correlated all your spectra against a suitable model using Search.py. This notebook goes through the shifting and subtracting of the average CCF, and measuring the companion velocities.
End of explanation
hdf_file = '{}/School/Research/McDonaldData/PlanetData/PsiDraA/Cross_correlations/CCF.hdf5'.format(home)
output_dir = '{}/School/Research/McDonaldData/PlanetData/Paper/Figures/'.format(home)
T = 4400
vsini = 5
logg = 4.5
metal = 0.0
dV = 0.1
c = constants.c.cgs.to(u.m/u.s).value
xgrid = np.arange(-400, 400+dV/2., dV)
ccfs, original_files = CombineCCFs.get_ccfs(T=T, vsini=vsini, logg=logg, metal=metal,
hdf_file=hdf_file, xgrid=xgrid)
# Plot all the CCFs
cmap = sns.cubehelix_palette(reverse=False, as_cmap=True, gamma=1, rot=0.7, start=2)
fig, ax = plt.subplots(1, 1)
out = ax.imshow(ccfs, cmap=cmap, aspect='auto', origin='lower')#, vmin=vmin, vmax=vmax)
min_v = -75.
max_v = 75.
dv_ticks = 50.0/dV
ax.set_xlim(((min_v+400)/dV, (max_v+400)/dV))
ticks = np.arange((min_v+400)/dV, (max_v+400)/dV+1, dv_ticks)
ax.set_xticks((ticks))
#ax.set_xticklabels((-150, -100, -50, 0, 50, 100, 150))
ax.set_xticklabels((-75, -25, 25, 75))
ax.set_xlabel('Velocity in primary star rest frame (km/s)')
ax.set_yticklabels(())
ax.set_ylabel('Observation Date')
# Colorbar
cb = plt.colorbar(out)
cb.set_label('CCF Power')
# Save
plt.savefig('{}Original_CCFs.pdf'.format(output_dir))
avg_ccf = np.mean(ccfs, axis=0)
normed_ccfs = ccfs - avg_ccf
# Get the time each observation was made (in julian date)
dates = np.array([CombineCCFs.fits.getheader(fname)['HJD'] for fname in original_files])
# Set up the scaling manually
low, high = np.min(normed_ccfs), np.max(normed_ccfs)
rng = max(abs(low), abs(high))
vmin = np.sign(low) * rng
vmax = np.sign(high) * rng
# Make the actual plot
#cmap = sns.cubehelix_palette(reverse=False, as_cmap=True, gamma=1, rot=0.7, start=2) #defined above now...
fig, ax = plt.subplots(1, 1)
out = ax.imshow(normed_ccfs, cmap=cmap, aspect='auto', origin='lower')#, vmin=vmin, vmax=vmax)
ax.set_xlim(((min_v+400)/dV, (max_v+400)/dV))
ticks = np.arange((min_v+400)/dV, (max_v+400)/dV+1, dv_ticks)
ax.set_xticks((ticks))
#ax.set_xticklabels((-150, -100, -50, 0, 50, 100, 150))
ax.set_xticklabels((-75, -25, 25, 75))
ax.set_xlabel('Velocity (km/s)')
ax.set_yticklabels(())
ax.set_ylabel('Observation Date')
# Colorbar
cb = plt.colorbar(out)
cb.set_label('CCF Power')
fig.subplots_adjust(bottom=0.18, left=0.10, top=0.95, right=0.90)
plt.savefig('{}Resid_CCFs.pdf'.format(output_dir))
# Make the same plot, but with the y-axis linearized so that I can give dates.
X,Y = np.meshgrid(xgrid, dates-2450000)
fig, ax = plt.subplots(1, 1, figsize=(6,4))
out = ax.pcolormesh(X,Y,normed_ccfs, cmap=cmap, rasterized=True)
ax.set_xlabel('Velocity (km/s)')
ax.set_ylabel('JD - 2450000')
ax.set_xlim((-75, 75))
ax.set_ylim((Y.min(), Y.max()))
# Colorbar
cb = plt.colorbar(out)
cb.set_label('CCF Power')
fig.subplots_adjust(bottom=0.18, left=0.15, top=0.95, right=0.90)
plt.savefig('{}Resid_CCFs.pdf'.format(output_dir))
print(output_dir)
ax.pcolormesh?
def fwhm(x, y, search_range=(-500, 500)):
good = (x > search_range[0]) & (x < search_range[1])
x = x[good].copy()
y = y[good].copy()
idx = np.argmax(y)
ymax = y[idx]
half_max = ymax / 2.0
# Find the first pixels less than half_max to the left of idx
for di in range(1, idx):
if y[idx-di] < half_max:
break
slope = (y[idx-(di+1)] - y[idx-di])/(x[idx-(di+1)] - x[idx-di])
left = x[idx-di] + (half_max - y[idx-di])/slope
# Find the first pixels less than half_max to the right of idx
for di in range(1, len(x)-idx-1):
if y[idx+di] < half_max:
break
slope = (y[idx+(di+1)] - y[idx+di])/(x[idx+(di+1)] - x[idx+di])
right = x[idx+di] + (half_max - y[idx+di])/slope
return left, x[idx], right
# Make plot of a normal residual CCF
sns.set_style('white')
sns.set_style('ticks')
i = 50
corr = normed_ccfs[i]
fig, ax = plt.subplots()
ax.plot(xgrid, corr, 'k-', lw=3)
l, m, h = fwhm(xgrid, corr, search_range=(-50, 10))
ylim = ax.get_ylim()
ax.plot([m, m], ylim, 'g--', alpha=0.7)
ax.plot([l, l], ylim, 'r--', alpha=0.7)
ax.plot([h, h], ylim, 'r--', alpha=0.7)
ax.set_xlim((-60, 20))
ax.set_ylim(ylim)
ax.set_xlabel('Velocity (km/s)')
ax.set_ylabel('CCF Power')
fig.subplots_adjust(bottom=0.18, left=0.20, top=0.95, right=0.95)
plt.savefig('{}Typical_CCF.pdf'.format(output_dir))
Explanation: Get and shift the Cross-correlation functions to the primary star rest frame
Run with T > 6000 first, to measure the primary star radial velocity in my CCFs. Then, run with T = 4000 or so to detect the companion.
End of explanation
# Measure the radial velocities of the companion as the peak and FWHM of the residual CCF
date = []
rv1 = []
rv1_err = []
rv2 = []
rv2_err = []
import time
for i in range(10, len(original_files)):
#for i in range(20, 21):
header = CombineCCFs.fits.getheader(original_files[i])
jd = header['HJD']
prim_rv = CombineCCFs.get_prim_rv(original_files[i], data_shift=0.0)
measurements = CombineCCFs.get_measured_rv(original_files[i])
rv1.append(measurements[0])
rv1_err.append(measurements[1])
try:
l, m, h = fwhm(xgrid.copy(), normed_ccfs[i], search_range=(-50, 10))
print('i = {}, HJD = {}\n\t{:.1f} +{:.1f}/-{:.1f}\n\t{}'.format(i, jd, m, h-m, m-l, original_files[i]))
date.append(jd)
rv2.append((h+l)/2.)
rv2_err.append((h-l)/2.355)
except:
date.append(jd)
rv2.append(np.nan)
rv2_err.append(np.nan)
continue
plt.plot(xgrid, normed_ccfs[i])
plt.xlim((-100, 50))
ylim = plt.ylim()
plt.plot([(h+l)/2., (h+l)/2.], ylim, 'g--')
plt.savefig('Figures/CCF_{}.pdf'.format(original_files[i][:-5]))
plt.cla()
rv1 = np.array(rv1)
rv1_err = np.array(rv1_err)
rv2 = np.array(rv2)
rv2_err = np.array(rv2_err)
# I don't trust the very early measurements
rv2[:6] = np.nan*np.ones(6)
rv2_err[:6] = np.nan*np.ones(6)
# Save the RVs
np.savetxt('rv_data.txt', (date, rv1, rv1_err, rv2, rv2_err))
Explanation: Measure the companion RVs.
The measured CCF velocities are in the primary star rest frame + some $\Delta v$ caused by the mismatch between the modeled index of refraction and the real index of refraction at McDonald observatory. That is a constant though. So, we get
$v_m(t) = v_2(t) - v_1(t) + \Delta v$
and
$v_2(t) = v_m(t) + v_1(t) - \Delta v$
where $v_m(t)$ are the measured radial velocities in the residual CCFs. But all of that gets done in the MCMC fitting code. Let's just measure $v_m$ at as many times as possible.
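The algebra above is simple but worth sanity-checking — a hedged sketch with made-up numbers showing that the constant offset $\Delta v$ cancels exactly when recovering $v_2$:

```python
# v_m(t) = v_2(t) - v_1(t) + dv  =>  v_2(t) = v_m(t) + v_1(t) - dv
dv = 0.12                         # hypothetical constant offset, km/s
v1 = [0.05, -0.03, 0.02]          # hypothetical primary-star RVs, km/s
v2_true = [-20.1, -18.4, -21.7]   # companion RVs we pretend to know

v_measured = [v2 - a + dv for v2, a in zip(v2_true, v1)]
v2_recovered = [vm + a - dv for vm, a in zip(v_measured, v1)]
print(all(abs(r - t) < 1e-9 for r, t in zip(v2_recovered, v2_true)))  # True
```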
End of explanation
import pandas as pd
rv1_data = pd.read_fwf('../Planet-Finder/data/psi1draa_140p_28_37_ASW.dat', header=None)
t1 = rv1_data[0].values
rv1 = rv1_data[2].values / 1000. # Convert from m/s to km/s
rv1_err = rv1_data[3].values / 1000.
new_rv2 = np.empty_like(t1)
new_rv2_err = np.empty_like(t1)
for i, t1_i in enumerate(t1):
#idx = np.searchsorted(t1, t2)
idx = np.argmin(np.abs(date-t1_i))
if abs(t1_i - date[idx]) < 0.001:
print(i, t1_i, date[idx], t1_i - date[idx])
new_rv2[i] = rv2[idx]
new_rv2_err[i] = rv2_err[idx]
else:
print(i, t1_i, date[idx], np.nan)
new_rv2[i] = np.nan
new_rv2_err[i] = np.nan
np.savetxt('../Planet-Finder/data/rv_data.txt', (t1, rv1, rv1_err, new_rv2, new_rv2_err))
Explanation: Fix the dates to line up with the RV output file
End of explanation |
8,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the SIAF class
The Science Instrument Aperture File, or SIAF, provides approximate conversions of sky positions to detector positions in support of operations. (More sophisticated corrections, e.g. for correcting and analyzing science data, are included in the JWST instrument pipelines.)
The SIAF class in jwxml allows conversions between the coordinate frames defined for science "apertures" (which could map to detectors, detector subarrays, or special modes). The choice of coordinate frames is discussed in STScI technical report JWST-STScI-001256 (Sec. 3) and summarized in technical report JWST-STScI-001550 "Description and Use of the JWST Science Instrument Aperture File", Cox et al. (2008) as follows
Step1: To use the SIAF class, we must import it
Step2: The first argument to the SIAF() initializer is an instrument name. For JWST, this is one of "NIRCam", "NIRSpec", "NIRISS", "MIRI", or "FGS". Let's load the NIRCam aperture file
Step3: This transparently loads the bundled NIRCam SIAF XML file from the PRD_DATA_ROOT (specific to your installation of jwxml)
Step4: If you have an XML file conforming to the same format, you can supply it as an argument
Step5: Woah, that's a bit hard to interpret! How many are we plotting?
Step6: Some useful facts about the naming scheme
Step7: Coordinate conversions
So far we have been plotting in the Tel frame. Let's examine the direction of the V2 and V3 axes as seen in the Det frame to demonstrate the coordinate conversion methods. Detector axes on NIRSpec and MIRI are both rotated relative to (V2, V3) so let's pull out a MIRI aperture to use
Step8: Plotting in the Idl frame, the edges of the MIRI imager full aperture are "straight" relative to the coordinate axes.
Step9: We know the MIRI instrument is rotated relative to the (V2, V3) coordinates to fit it in the area of the focal plane with minimized wavefront error. To see this, we must plot in the Tel frame
Step10: Let's make some "test vectors" to see how the (V2, V3) axes get transformed into the Idl frame. First, plot vectors parallel to V2 and V3 in the Tel frame
Step11: As expected, they are parallel with the plot axes (but not with the edges of the MIRIM_FULL aperture). Now, we convert from Tel to Idl with the aptly named Tel2Idl method of the Aperture instance named mirim_full.
Step12: These converted coordinates are just regular Python floating point numbers, as you may expect. The (V2Ref, V3Ref) point corresponds to (0, 0) in Idl coordinates
Step13: Let's plot the vectors in the Idl frame now and see how they compare
Step14: There's the rotation we expected, as well as a coordinate parity flip. Nice.
jwxml supports converting forward and backward between any pair of coordinate frames specified in the SIAF, through similarly named methods (e.g. Tel2Idl, Idl2Sci, Tel2Det, etc.). Be careful! Coordinates are tricky.
The SIAF provides a single, authoritative reference for coordinate transformations that account for the characteristics of the as-built instruments, tracked in the Project Reference Database. To understand the meaning of the various available Aperture attributes, you should consult JWST-STScI-001550 "Description and Use of the JWST Science Instrument Aperture File", by Cox et al. (2008).
This notebook was run with these software versions and reference file versions from the Project Reference Database (PRD) | Python Code:
%pylab inline --no-import-all
plt.style.use('ggplot')
Explanation: Using the SIAF class
The Science Instrument Aperture File, or SIAF, provides approximate conversions of sky positions to detector positions in support of operations. (More sophisticated corrections, e.g. for correcting and analyzing science data, are included in the JWST instrument pipelines.)
The SIAF class in jwxml allows conversions between the coordinate frames defined for science "apertures" (which could map to detectors, detector subarrays, or special modes). The choice of coordinate frames is discussed in STScI technical report JWST-STScI-001256 (Sec. 3) and summarized in technical report JWST-STScI-001550 "Description and Use of the JWST Science Instrument Aperture File", Cox et al. (2008) as follows:
Detector (Det): The detector frame is defined by hardware considerations and usually represents raw detector units ("pixels"). Each detector or SCA will have such a pixel- based coordinate system. The pixel numbering may be based on details of how the data are read out.
Science (Sci): The science image frame is the representation normally displayed by science analysis software such as IRAF. This frame also has units of pixels but is frequently only a portion of the Det frame. Non-illuminated or reference pixels may be supressed in the science image and this will be described by supplying different image sizes and reference points. There may also be differences in axis orientation (in multiples of 90 degrees) between Det and Sci frames. In this way one can ensure that Sci frames for different SCAs have a common orientation when their projected fields of view on the sky are displayed.
Ideal (Idl): The ideal system is distortion-corrected and is measured in units of arc-seconds. Its coordinates define the position in a tangent plane projection, and are not Euler angles.
Telescope (V2,V3): The V2,V3 coordinates locate points on a spherical coordinate system, also measured in arc-seconds. This frame is tied to JWST and applies to the whole field of view, encompassing all instruments. The coordinates (V2,V3) are Euler angles in a spherical frame rather than cartesian coordinates.
The file itself is a text document in XML format, conforming to a standard defined in the JWST ground system. jwxml includes copies of these files and allows users to both visualize the apertures defined and perform transformations between the various defined coordinate systems using transformation matrices and polynomials in the file.
Let us first set up the notebook for plotting:
End of explanation
import jwxml
from jwxml import SIAF
Explanation: To use the SIAF class, we must import it:
End of explanation
nircam_siaf = SIAF('NIRCam')
Explanation: The first argument to the SIAF() initializer is an instrument name. For JWST, this is one of "NIRCam", "NIRSpec", "NIRISS", "MIRI", or "FGS". Let's load the NIRCam aperture file:
End of explanation
jwxml.PRD_DATA_ROOT
Explanation: This transparently loads the bundled NIRCam SIAF XML file from the PRD_DATA_ROOT (specific to your installation of jwxml):
End of explanation
nircam_siaf.plot()
Explanation: If you have an XML file conforming to the same format, you can supply it as an argument:
nircam_siaf = SIAF('NIRCam', filename='./my_custom_nircam_siaf.xml')
The SIAF instance has a plot method, showing all the defined apertures:
End of explanation
len(nircam_siaf.apernames)
Explanation: Woah, that's a bit hard to interpret! How many are we plotting?
End of explanation
for apername in nircam_siaf.apernames:
if '_FULL' in apername and 'OSS' not in apername and 'MASK' not in apername:
nircam_siaf.apertures[apername].plot(frame='Tel')
Explanation: Some useful facts about the naming scheme:
Apertures with OSS in the name are used by the Onboard Scripting System, and are typically not of interest outside that context.
Apertures with MASK in the name are in the shifted field-of-view for the NIRCam coronagraphs and are only used for coronagraphy.
Omitting these apertures gives a more manageable plot. (When plotting an Aperture object, supply the frame='Tel' argument to plot() if you want to plot in the shared (V2, V3) coordinates.)
End of explanation
miri_siaf = SIAF('MIRI')
mirim_full = miri_siaf.apertures['MIRIM_FULL']
Explanation: Coordinate conversions
So far we have been plotting in the Tel frame. Let's examine the direction of the V2 and V3 axes as seen in the Det frame to demonstrate the coordinate conversion methods. Detector axes on NIRSpec and MIRI are both rotated relative to (V2, V3) so let's pull out a MIRI aperture to use:
End of explanation
mirim_full.plot(frame='Idl')
Explanation: Plotting in the Idl frame, the edges of the MIRI imager full aperture are "straight" relative to the coordinate axes.
End of explanation
mirim_full.plot(frame='Tel')
Explanation: We know the MIRI instrument is rotated relative to the (V2, V3) coordinates to fit it in the area of the focal plane with minimized wavefront error. To see this, we must plot in the Tel frame:
End of explanation
v2_ref, v3_ref = mirim_full.V2Ref, mirim_full.V3Ref
delta = 20 # arcsec
mirim_full.plot(frame='Tel', label=False)
plt.plot([v2_ref, v2_ref + delta], [v3_ref, v3_ref], label='V2', lw=3)
plt.plot([v2_ref, v2_ref], [v3_ref, v3_ref + delta], label='V3', lw=3)
plt.legend()
Explanation: Let's make some "test vectors" to see how the (V2, V3) axes get transformed into the Idl frame. First, plot vectors parallel to V2 and V3 in the Tel frame:
End of explanation
v3_ref_idl_x, v3_ref_idl_y = mirim_full.Tel2Idl(v2_ref, v3_ref)
v2_vec_x, v2_vec_y = mirim_full.Tel2Idl(v2_ref + delta, v3_ref)
v3_vec_x, v3_vec_y = mirim_full.Tel2Idl(v2_ref, v3_ref + delta)
Explanation: As expected, they are parallel with the plot axes (but not with the edges of the MIRIM_FULL aperture). Now, we convert from Tel to Idl with the aptly named Tel2Idl method of the Aperture instance named mirim_full.
End of explanation
v3_ref_idl_x, v3_ref_idl_y
Explanation: These converted coordinates are just regular Python floating point numbers, as you may expect. The (V2Ref, V3Ref) point corresponds to (0, 0) in Idl coordinates:
End of explanation
mirim_full.plot(frame='Idl', label=False)
plt.plot([0, v2_vec_x], [0, v2_vec_y], label='V2', lw=3)
plt.plot([0, v3_vec_x], [0, v3_vec_y], label='V3', lw=3)
plt.legend()
Explanation: Let's plot the vectors in the Idl frame now and see how they compare:
End of explanation
print("jwxml version", jwxml.__version__)
print("PRD version", jwxml.PRD_VERSION)
Explanation: There's the rotation we expected, as well as a coordinate parity flip. Nice.
jwxml supports converting forward and backward between any pair of coordinate frames specified in the SIAF, through similarly named methods (e.g. Tel2Idl, Idl2Sci, Tel2Det, etc.). Be careful! Coordinates are tricky.
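As rough intuition for what such a frame-to-frame method does (this is a toy sketch with made-up parameters, not the real SIAF math, which uses calibrated reference values and distortion polynomials), an Idl-like frame can be related to (V2, V3) by a reference offset, a rotation, and a parity flip:

```python
import math

# Toy Tel -> Idl-style transform with hypothetical parameters.
v2_ref, v3_ref = -414.0, -400.0   # made-up reference point, arcsec
angle = math.radians(4.45)        # made-up rotation angle
parity = -1                       # X-axis parity flip

def tel_to_idl(v2, v3):
    dx, dy = v2 - v2_ref, v3 - v3_ref
    x = parity * (dx * math.cos(angle) - dy * math.sin(angle))
    y = dx * math.sin(angle) + dy * math.cos(angle)
    return x, y

x0, y0 = tel_to_idl(v2_ref, v3_ref)
print(x0 == 0 and y0 == 0)  # True -- the reference point maps to the origin
```

Rotation plus parity preserves distances, which is why the test vectors above keep their length while changing direction.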
The SIAF provides a single, authoritative reference for coordinate transformations that account for the characteristics of the as-built instruments, tracked in the Project Reference Database. To understand the meaning of the various available Aperture attributes, you should consult JWST-STScI-001550 "Description and Use of the JWST Science Instrument Aperture File", by Cox et al. (2008).
This notebook was run with these software versions and reference file versions from the Project Reference Database (PRD):
End of explanation |
8,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenMP example
In this example we illustrate how OpenMP can be used to speed up the calculation of the likelihood.
First we set the number of OpenMP threads. This is done via an environment variable called OMP_NUM_THREADS. In this example we set the value of the variable from Python, but typically this will be done directly in a shell script before running the example, i.e. something like
Step1: Note that only the value of OMP_NUM_THREADS at import time influences the execution. To experiment with OpenMP, restart the notebook kernel, change the value in the cell above, and re-execute. You should see the time of execution change in the last cell.
Some general settings
Step2: Load data
HJCFIT depends on the DCPROGS/DCPYPS module for data input and for setting up the kinetic mechanism
Step3: Initialise Single-Channel Record from dcpyps. Note that SCRecord takes a list of file names; several SCN files from the same patch can be loaded.
Step4: Load demo mechanism (C&H82 numerical example)
Step5: Prepare likelihood function
Step6: Time evaluation of likelihood function | Python Code:
import os
os.environ['OMP_NUM_THREADS'] = '4'
Explanation: OpenMP example
In this example we illustrate how OpenMP can be used to speed up the calculation of the likelihood.
First we set the number of OpenMP threads. This is done via an environment variable called OMP_NUM_THREADS. In this example we set the value of the variable from Python, but typically this will be done directly in a shell script before running the example, i.e. something like:
export OMP_NUM_THREADS=4
python script.py
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sys, time, math
import numpy as np
from numpy import linalg as nplin
Explanation: Note that only the value of OMP_NUM_THREADS at import time influences the execution. To experiment with OpenMP, restart the notebook kernel, change the value in the cell above, and re-execute. You should see the time of execution change in the last cell.
Some general settings:
End of explanation
from dcpyps.samples import samples
from dcpyps import dataset, mechanism, dcplots, dcio
# LOAD DATA: Burzomato 2004 example set.
scnfiles = [["./samples/glydemo/A-10.scn"],
["./samples/glydemo/B-30.scn"],
["./samples/glydemo/C-100.scn"],
["./samples/glydemo/D-1000.scn"]]
tr = [0.000030, 0.000030, 0.000030, 0.000030]
tc = [0.004, -1, -0.06, -0.02]
conc = [10e-6, 30e-6, 100e-6, 1000e-6]
Explanation: Load data
HJCFIT depends on the DCPROGS/DCPYPS module for data input and for setting up the kinetic mechanism:
End of explanation
# Initialise SCRecord instance.
recs = []
bursts = []
for i in range(len(scnfiles)):
rec = dataset.SCRecord(scnfiles[i], conc[i], tr[i], tc[i])
recs.append(rec)
bursts.append(rec.bursts.intervals())
Explanation: Initialise Single-Channel Record from dcpyps. Note that SCRecord takes a list of file names; several SCN files from the same patch can be loaded.
End of explanation
# LOAD FLIP MECHANISM USED in Burzomato et al 2004
mecfn = "./samples/mec/demomec.mec"
version, meclist, max_mecnum = dcio.mec_get_list(mecfn)
mec = dcio.mec_load(mecfn, meclist[2][0])
# PREPARE RATE CONSTANTS.
# Fixed rates.
#fixed = np.array([False, False, False, False, False, False, False, True,
# False, False, False, False, False, False])
for i in range(len(mec.Rates)):
mec.Rates[i].fixed = False
# Constrained rates.
mec.Rates[21].is_constrained = True
mec.Rates[21].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[21].constrain_args = [17, 3]
mec.Rates[19].is_constrained = True
mec.Rates[19].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[19].constrain_args = [17, 2]
mec.Rates[16].is_constrained = True
mec.Rates[16].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[16].constrain_args = [20, 3]
mec.Rates[18].is_constrained = True
mec.Rates[18].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[18].constrain_args = [20, 2]
mec.Rates[8].is_constrained = True
mec.Rates[8].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[8].constrain_args = [12, 1.5]
mec.Rates[13].is_constrained = True
mec.Rates[13].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[13].constrain_args = [9, 2]
mec.update_constrains()
# Rates constrained by microscopic reversibility
mec.set_mr(True, 7, 0)
mec.set_mr(True, 14, 1)
# Update constrains
mec.update_constrains()
#Propose initial guesses different from recorded ones
initial_guesses = [5000.0, 500.0, 2700.0, 2000.0, 800.0, 15000.0, 300.0, 120000, 6000.0,
0.45E+09, 1500.0, 12000.0, 4000.0, 0.9E+09, 7500.0, 1200.0, 3000.0,
0.45E+07, 2000.0, 0.9E+07, 1000, 0.135E+08]
mec.set_rateconstants(initial_guesses)
mec.update_constrains()
Explanation: Load demo mechanism (C&H82 numerical example)
End of explanation
def dcprogslik(x, lik, m, c):
m.theta_unsqueeze(np.exp(x))
l = 0
for i in range(len(c)):
m.set_eff('c', c[i])
l += lik[i](m.Q)
return -l * math.log(10)
# Import HJCFIT likelihood function
from dcprogs.likelihood import Log10Likelihood
kwargs = {'nmax': 2, 'xtol': 1e-12, 'rtol': 1e-12, 'itermax': 100,
'lower_bound': -1e6, 'upper_bound': 0}
likelihood = []
for i in range(len(recs)):
likelihood.append(Log10Likelihood(bursts[i], mec.kA,
recs[i].tres, recs[i].tcrit, **kwargs))
theta = mec.theta()
Explanation: Prepare likelihood function
End of explanation
%timeit dcprogslik(np.log(theta), likelihood, mec, conc)
Explanation: Time evaluation of likelihood function
End of explanation |
8,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Writing your own callbacks
Step2: Overview of Keras callbacks
All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training.
You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:
keras.Model.fit()
keras.Model.evaluate()
keras.Model.predict()
An overview of callback methods
Global methods
on_(train|test|predict)_begin(self, logs=None)
Called at the beginning of fit/evaluate/predict.
on_(train|test|predict)_end(self, logs=None)
Called at the end of fit/evaluate/predict.
Batch-level methods for training/testing/predicting
on_(train|test|predict)_batch_begin(self, batch, logs=None)
Called right before processing a batch during training/testing/predicting.
on_(train|test|predict)_batch_end(self, batch, logs=None)
Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results.
Epoch-level methods (training only)
on_epoch_begin(self, epoch, logs=None)
Called at the beginning of an epoch during training.
on_epoch_end(self, epoch, logs=None)
Called at the end of an epoch during training.
A basic example
Let's take a look at a concrete example. To get started, let's import TensorFlow and define a simple Sequential Keras model:
Step3: Then, load the MNIST data for training and testing from the Keras datasets API:
Step4: Next, define a simple custom callback that logs:
When fit/evaluate/predict starts and ends
When each epoch starts and ends
When each training batch starts and ends
When each evaluation (test) batch starts and ends
When each inference (prediction) batch starts and ends
Step5: Let's try it out:
Step6: Usage of the logs dict
The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. Examples include the loss and mean absolute error.
Step8: Usage of the self.model attribute
In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model.
Here are a few of the things you can do with self.model in a callback:
Set self.model.stop_training = True to immediately interrupt training.
Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
Save the model at regular intervals.
Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training.
Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time.
etc.
Let's see how this works in a couple of examples below.
Examples of Keras callback applications
Early stopping at minimum loss
This first example shows how to create a Callback that stops training when the minimum of loss has been reached, by setting the attribute self.model.stop_training (boolean). Optionally, you can provide an argument patience to specify how many epochs to wait before stopping after a local minimum has been reached.
tf.keras.callbacks.EarlyStopping provides a more complete and general implementation.
Step11: Learning rate scheduling
In this example, we show how a custom callback can be used to dynamically change the learning rate of the optimizer during training.
See callbacks.LearningRateScheduler for a more general implementation.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow import keras
Explanation: Writing your own callbacks
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/guide/keras/custom_callback"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_callback.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_callback.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/custom_callback.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Introduction
Callbacks are a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference. Examples include tf.keras.callbacks.TensorBoard to visualize training progress and results with TensorBoard, or tf.keras.callbacks.ModelCheckpoint to periodically save your model during training.
In this guide, you will learn what a Keras callback is, what it can do, and how you can build your own. We provide a few demos of simple callback applications to get you started.
Setup
End of explanation
# Define the Keras model to add callbacks to
def get_model():
model = keras.Sequential()
model.add(keras.layers.Dense(1, input_dim=784))
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
loss="mean_squared_error",
metrics=["mean_absolute_error"],
)
return model
Explanation: Overview of Keras callbacks
All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training.
You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:
keras.Model.fit()
keras.Model.evaluate()
keras.Model.predict()
An overview of callback methods
Global methods
on_(train|test|predict)_begin(self, logs=None)
Called at the beginning of fit/evaluate/predict.
on_(train|test|predict)_end(self, logs=None)
Called at the end of fit/evaluate/predict.
Batch-level methods for training/testing/predicting
on_(train|test|predict)_batch_begin(self, batch, logs=None)
Called right before processing a batch during training/testing/predicting.
on_(train|test|predict)_batch_end(self, batch, logs=None)
Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results.
Epoch-level methods (training only)
on_epoch_begin(self, epoch, logs=None)
Called at the beginning of an epoch during training.
on_epoch_end(self, epoch, logs=None)
Called at the end of an epoch during training.
A basic example
Let's take a look at a concrete example. To get started, let's import TensorFlow and define a simple Sequential Keras model:
End of explanation
# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000]
Explanation: Then, load the MNIST data for training and testing from the Keras datasets API:
End of explanation
class CustomCallback(keras.callbacks.Callback):
def on_train_begin(self, logs=None):
keys = list(logs.keys())
print("Starting training; got log keys: {}".format(keys))
def on_train_end(self, logs=None):
keys = list(logs.keys())
print("Stop training; got log keys: {}".format(keys))
def on_epoch_begin(self, epoch, logs=None):
keys = list(logs.keys())
print("Start epoch {} of training; got log keys: {}".format(epoch, keys))
def on_epoch_end(self, epoch, logs=None):
keys = list(logs.keys())
print("End epoch {} of training; got log keys: {}".format(epoch, keys))
def on_test_begin(self, logs=None):
keys = list(logs.keys())
print("Start testing; got log keys: {}".format(keys))
def on_test_end(self, logs=None):
keys = list(logs.keys())
print("Stop testing; got log keys: {}".format(keys))
def on_predict_begin(self, logs=None):
keys = list(logs.keys())
print("Start predicting; got log keys: {}".format(keys))
def on_predict_end(self, logs=None):
keys = list(logs.keys())
print("Stop predicting; got log keys: {}".format(keys))
def on_train_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: start of batch {}; got log keys: {}".format(batch, keys))
def on_train_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: end of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys))
Explanation: Next, define a simple custom callback that logs:
When fit/evaluate/predict starts and ends
When each epoch starts and ends
When each training batch starts and ends
When each evaluation (test) batch starts and ends
When each inference (prediction) batch starts and ends
End of explanation
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=1,
verbose=0,
validation_split=0.5,
callbacks=[CustomCallback()],
)
res = model.evaluate(
x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)
res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])
Explanation: Let's try it out:
End of explanation
class LossAndErrorPrintingCallback(keras.callbacks.Callback):
def on_train_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_test_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_epoch_end(self, epoch, logs=None):
print(
"The average loss for epoch {} is {:7.2f} "
"and mean absolute error is {:7.2f}.".format(
epoch, logs["loss"], logs["mean_absolute_error"]
)
)
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=2,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
res = model.evaluate(
x_test,
y_test,
batch_size=128,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
Explanation: Usage of the logs dict
The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. Examples include the loss and mean absolute error.
End of explanation
import numpy as np
class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its min, i.e. the loss stops decreasing.

    Arguments:
        patience: Number of epochs to wait after min has been hit. After this
            number of no improvement, training stops.
    """
def __init__(self, patience=0):
super(EarlyStoppingAtMinLoss, self).__init__()
self.patience = patience
# best_weights to store the weights at which the minimum loss occurs.
self.best_weights = None
def on_train_begin(self, logs=None):
# The number of epoch it has waited when loss is no longer minimum.
self.wait = 0
# The epoch the training stops at.
self.stopped_epoch = 0
# Initialize the best as infinity.
        self.best = np.inf  # np.Inf was removed in NumPy 2.0; use np.inf
def on_epoch_end(self, epoch, logs=None):
current = logs.get("loss")
if np.less(current, self.best):
self.best = current
self.wait = 0
# Record the best weights if current results is better (less).
self.best_weights = self.model.get_weights()
else:
self.wait += 1
if self.wait >= self.patience:
self.stopped_epoch = epoch
self.model.stop_training = True
print("Restoring model weights from the end of the best epoch.")
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
if self.stopped_epoch > 0:
print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=30,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
)
Explanation: Usage of the self.model attribute
In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model.
Here are a few of the things you can do with self.model in a callback:
Set self.model.stop_training = True to immediately interrupt training.
Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
Save the model at regular intervals.
Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training.
Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time.
etc.
Let's see how this works in a couple of examples below.
Examples of Keras callback applications
Early stopping at minimum loss
This first example shows how to create a Callback that stops training when the minimum of loss has been reached, by setting the attribute self.model.stop_training (boolean). Optionally, you can provide an argument patience to specify how many epochs to wait before stopping after a local minimum has been reached.
tf.keras.callbacks.EarlyStopping provides a more complete and general implementation.
End of explanation
class CustomLearningRateScheduler(keras.callbacks.Callback):
    """Learning rate scheduler which sets the learning rate according to schedule.

    Arguments:
        schedule: a function that takes an epoch index
            (integer, indexed from 0) and current learning rate
            as inputs and returns a new learning rate as output (float).
    """
def __init__(self, schedule):
super(CustomLearningRateScheduler, self).__init__()
self.schedule = schedule
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model.optimizer, "lr"):
raise ValueError('Optimizer must have a "lr" attribute.')
# Get the current learning rate from model's optimizer.
lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
# Call schedule function to get the scheduled learning rate.
scheduled_lr = self.schedule(epoch, lr)
# Set the value back to the optimizer before this epoch starts
tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))
LR_SCHEDULE = [
# (epoch to start, learning rate) tuples
(3, 0.05),
(6, 0.01),
(9, 0.005),
(12, 0.001),
]
def lr_schedule(epoch, lr):
    """Helper function to retrieve the scheduled learning rate based on epoch."""
if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
return lr
for i in range(len(LR_SCHEDULE)):
if epoch == LR_SCHEDULE[i][0]:
return LR_SCHEDULE[i][1]
return lr
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=15,
verbose=0,
callbacks=[
LossAndErrorPrintingCallback(),
CustomLearningRateScheduler(lr_schedule),
],
)
Explanation: Learning rate scheduling
In this example, we show how a custom callback can be used to dynamically change the learning rate of the optimizer during training.
See callbacks.LearningRateScheduler for a more general implementation.
End of explanation |
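Independent of Keras, the behaviour of the custom scheduler can be traced epoch by epoch. The sketch below reuses the same schedule table and lookup from the guide to print the learning-rate sequence the callback would set over 15 epochs (no TensorFlow required):

```python
# (epoch to start, learning rate) table, as in the guide above.
LR_SCHEDULE = [(3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001)]

def lr_schedule(epoch, lr):
    """Return the scheduled learning rate for `epoch`, keeping the
    current `lr` between table entries and outside the table's range."""
    if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
        return lr
    for start_epoch, new_lr in LR_SCHEDULE:
        if epoch == start_epoch:
            return new_lr
    return lr

lr = 0.1
trace = []
for epoch in range(15):
    lr = lr_schedule(epoch, lr)  # what on_epoch_begin would apply
    trace.append(lr)
print(trace)
# [0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01,
#  0.005, 0.005, 0.005, 0.001, 0.001, 0.001]
```

This makes the piecewise-constant behaviour explicit: the rate only changes at the epochs listed in the table and is otherwise carried forward.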
8,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this short tutorial, we will compute the static connectivity of the EEG singals.
Load data
Step1: Static connectivity
As a first example, we are going to compute the static connectivity of the EEG signals using the IPLV estimator.
Step2: Define the frequency band we are interested to examine, in Hz
Step3: Define the sampling frequency, in Hz
Step5: We will invoke the estimator using full by-name arguments. The last argument, pairs, is None by default, which means full connectivity (all pairs); otherwise, check the documentation for the structure of the value.
Step6: Make the connectivity matrix symmetric
Step7: Plot
Plot the matrix using the standard Matplotlib functions | Python Code:
import numpy as np
import scipy
from scipy import io
eeg = np.load("data/eeg_eyes_opened.npy")
num_trials, num_channels, num_samples = np.shape(eeg)
eeg_ts = np.squeeze(eeg[0, :, :])
Explanation: In this short tutorial, we will compute the static connectivity of the EEG signals.
Load data
End of explanation
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
from dyconnmap.fc import iplv
Explanation: Static connectivity
As a first example, we are going to compute the static connectivity of the EEG signals using the IPLV estimator.
End of explanation
band = [1.0, 4.0]
Explanation: Define the frequency band we are interested to examine, in Hz
End of explanation
sampling_frequency = 160.0
Explanation: Define the sampling frequency, in Hz
End of explanation
ts, avg = iplv(eeg_ts, fb=band, fs=sampling_frequency, pairs=None)
print(f"""Time series array shape: {np.shape(ts)}
Average time series array shape: {np.shape(avg)}""")
Explanation: We will invoke the estimator using full by-name arguments. The last argument, pairs, is None by default, which means full connectivity (all pairs); otherwise, check the documentation for the structure of the value.
End of explanation
avg_symm = avg + avg.T
np.fill_diagonal(avg_symm, 1.0)
Explanation: Make the connectivity matrix symmetric
End of explanation
import matplotlib.pyplot as plt
mtx_min = 0.0 # we know it's 0.0 because of the estimator's properties
mtx_max = np.max(avg)
plt.figure(figsize=(6, 6))
cax = plt.imshow(avg_symm, vmin=mtx_min, vmax=mtx_max, cmap=plt.cm.Spectral)
cb = plt.colorbar(fraction=0.046, pad=0.04)
cb.ax.set_ylabel('Imaginary PLV', fontdict={'fontsize': 20})
plt.title('Connectivity Matrix', fontdict={'fontsize': 20})
plt.xlabel('ROI', fontdict={'fontsize': 20})
plt.ylabel('ROI', fontdict={'fontsize': 20})
plt.show()
Explanation: Plot
Plot the matrix using the standard Matplotlib functions
End of explanation |
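Only the first trial was used above (eeg[0]). With num_trials recordings available, one would typically estimate a connectivity matrix per trial and average them. A minimal sketch of that averaging step follows, with random matrices standing in for the iplv outputs so it runs without the EEG file or dyconnmap:

```python
import numpy as np

rng = np.random.default_rng(0)
num_trials, num_channels = 5, 8

adjacencies = []
for _ in range(num_trials):
    # Stand-in for the upper-triangular `avg` matrix returned by iplv()
    avg = np.triu(rng.uniform(0.0, 1.0, (num_channels, num_channels)), k=1)
    avg_symm = avg + avg.T              # same symmetrization as above
    np.fill_diagonal(avg_symm, 1.0)
    adjacencies.append(avg_symm)

# Group-level (trial-averaged) connectivity matrix
group_avg = np.mean(adjacencies, axis=0)
print(group_avg.shape)  # (8, 8)
```

Averaging symmetric matrices with unit diagonal preserves both properties, so the trial-averaged matrix can be plotted exactly like the single-trial one.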
8,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Control Structures
Simple for loop
Write a for loop which iterates over the list of breakfast items "sausage", "eggs", "bacon" and "spam" and prints out the name of item
Step1: Write then a for which loop determines the squares of the odd
integers up to 10. Use the range() function.
Step2: Looping through a dictionary
Write a loop that prints out the names of the fruits in the dictionary containing the fruit prices.
Step3: Next, write a loop that sums up the prices.
Step4: While loop
Fibonacci numbers are a sequence of integers defined by the recurrence relation
F[n] = F[n-1] + F[n-2]
with the initial values F[0]=0, F[1]=1.
Create a list of Fibonacci numbers F[n] < 100 using a while loop.
Step5: If - else
Write a control structure which checks whether an integer is
negative, zero, or belongs to the prime numbers 3,5,7,11,17
and perform e.g. corresponding print statement.
Use keyword in when checking for belonging to prime numbers.
Step6: Advanced exercises
Don't worry if you don't have time to finish all of these. They are not essential.
Looping through multidimensional lists
Start from a two dimensional list of (x,y) value pairs, and sort it according to y values. (Hint
Step7: Next, create a new list containing only the sorted y values.
Step8: Finally, create a new list consisting of sums the (x,y) pairs where both x and y are positive.
Step9: List comprehension is often convenient in this kind of situations
Step10: FizzBuzz
This is a classic job interview question. Depending on the interviewer or interviewee it can filter out up to 95% of the interviewees for a position. The task is not difficult but it's easy to make simple mistakes.
If a number is divisible by 3, instead of the number print "Fizz", if a number is divisible by 5, print "Buzz" and if the number is divisible by both 3 and 5, print "FizzBuzz".
Step11: Food for thought
Step12: List comprehension
Using a list comprehension create a new list, temperatures_kelvin from following Celsius temperatures and convert them by adding the value 273.15 to each. | Python Code:
breakfast = ["sausage", "eggs", "bacon", "spam"]
for item in breakfast:
print(item)
Explanation: Control Structures
Simple for loop
Write a for loop which iterates over the list of breakfast items "sausage", "eggs", "bacon" and "spam" and prints out the name of item
End of explanation
squares = []
for i in range(1, 10, 2):
squares.append(i**2)
print(squares)
Explanation: Write then a for which loop determines the squares of the odd
integers up to 10. Use the range() function.
End of explanation
fruits = {'banana' : 5, 'strawberry' : 7, 'pineapple' : 3}
for fruit in fruits:
print(fruit)
Explanation: Looping through a dictionary
Write a loop that prints out the names of the fruits in the dictionary containing the fruit prices.
End of explanation
sum = 0
for price in fruits.values():
sum += price
print(sum)
Explanation: Next, write a loop that sums up the prices.
End of explanation
f = [0, 1]
while True:
new = f[-1] + f[-2]
if new > 100:
break
f.append(new)
print(f)
Explanation: While loop
Fibonacci numbers are a sequence of integers defined by the recurrence relation
F[n] = F[n-1] + F[n-2]
with the initial values F[0]=0, F[1]=1.
Create a list of Fibonacci numbers F[n] < 100 using a while loop.
End of explanation
number = 7
if number < 0:
print("Negative")
elif number == 0:
print("Zero")
elif number in [3, 5, 7, 11, 17]:
print("Prime")
Explanation: If - else
Write a control structure which checks whether an integer is
negative, zero, or belongs to the prime numbers 3,5,7,11,17
and perform e.g. corresponding print statement.
Use keyword in when checking for belonging to prime numbers.
End of explanation
xys = [[2, 3], [0, -1], [4, -2], [1, 6]]
tmp = []
for x, y in xys:
tmp.append([y,x])
tmp.sort()
for i, (y,x) in enumerate(tmp):
xys[i] = [x,y]
print(xys)
Explanation: Advanced exercises
Don't worry if you don't have time to finish all of these. They are not essential.
Looping through multidimensional lists
Start from a two dimensional list of (x,y) value pairs, and sort it according to y values. (Hint: you may need to create a temporary list).
End of explanation
ys = []
for x, y in xys:
ys.append(y)
print(ys)
Explanation: Next, create a new list containing only the sorted y values.
End of explanation
sums = []
for x, y in xys:
if x > 0 and y > 0:
sums.append(x + y)
print(sums)
Explanation: Finally, create a new list consisting of sums the (x,y) pairs where both x and y are positive.
End of explanation
xys = [[2, 3], [0, -1], [4, -2], [1, 6]]
tmp = [[y, x] for x, y in xys]
tmp.sort()
xys = [[x, y] for y, x in tmp]
# One liner is possible but not very readable anymore:
xys = [[x, y] for y, x in sorted([[ytmp, xtmp] for xtmp, ytmp in xys])]
# Summing positives with one liner is ok:
sums = [x+y for x,y in xys if x > 0 and y > 0]
Explanation: List comprehension is often convenient in this kind of situations:
End of explanation
for number in range(1, 101):
    if number % 3 == 0 and number % 5 == 0:
        print("FizzBuzz")
    elif number % 3 == 0:
        print("Fizz")
    elif number % 5 == 0:
        print("Buzz")
    else:
        print(number)
Explanation: FizzBuzz
This is a classic job interview question. Depending on the interviewer or interviewee it can filter out up to 95% of the interviewees for a position. The task is not difficult but it's easy to make simple mistakes.
If a number is divisible by 3, instead of the number print "Fizz", if a number is divisible by 5, print "Buzz" and if the number is divisible by both 3 and 5, print "FizzBuzz".
End of explanation
import random
while True:
value = random.random()
if value < 0.1:
break
print("done")
Explanation: Food for thought: How do people commonly fail this test and why?
Breaking
The python random module generates pseudorandom numbers.
Write a while loop that runs until
the output of random.random() is below 0.1 and break when
the value is below 0.1.
End of explanation
temperatures_celsius = [0, -15, 20.15, 13.3, -5.2]
temperatures_kelvin = [c+273.15 for c in temperatures_celsius]
Explanation: List comprehension
Using a list comprehension create a new list, temperatures_kelvin from following Celsius temperatures and convert them by adding the value 273.15 to each.
End of explanation |
8,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Kernel Learning
By Saurabh Mahindre - <a href="https
Step1: Introduction
<em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well.
Kernel based methods such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$.
Selecting the kernel function $k()$ and its parameters is an important issue in training. Kernels designed by humans usually capture one aspect of data, so choosing one kernel means selecting exactly one such aspect; combining several aspects is therefore often better than selecting a single one.
In shogun, MKL is the base class for multiple kernel learning. It supports binary, one-class, and multiclass classification, as well as regression
Step2: Prediction on toy data
In order to see the prediction capabilities, let us generate some data using the GMM class. The data is sampled by setting means (GMM notebook) such that it sufficiently covers the X-Y grid and is not too easy to classify.
Step3: Generating Kernel weights
Just to help us visualize let's use two gaussian kernels (GaussianKernel) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e $\beta$s in the above equation), training of MKL is required. This generates the weights as seen in this example.
Step4: Binary classification using MKL
Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see that on plotting individual subkernels outputs and outputs of the MKL classification. To apply on test features, we need to reinitialize the kernel with kernel.init and pass the test features. After that it's just a matter of doing mkl.apply to generate outputs.
Step5: To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison.
Step6: As we can see the multiple kernel output seems just about right. Kernel 1 gives a sort of overfitting output while the kernel 2 seems not so accurate. The kernel weights are hence so adjusted to get a refined output. We can have a look at the errors by these subkernels to have more food for thought. Most of the time, the MKL error is lesser as it incorporates aspects of both kernels. One of them is strict while other is lenient, MKL finds a balance between those.
Step7: MKL for knowledge discovery
MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundary of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
Step8: These are the type of circles we want to distinguish between. We can try classification with a constant separation between the circles first.
Step9: As we can see the MKL classifier classifies them as expected. Now let's vary the separation and see how it affects the weights.The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width. This effect should be visible in the results of the MKL, where we used MKL-SVMs with four kernels with different widths (1,5,7,10).
Step10: In the above plot we see the kernel weightings obtained for the four kernels. Every line shows one weighting. The courses of the kernel weightings reflect the development of the learning problem
Step11: Let's plot five of the examples to get a feel of the dataset.
Step12: We combine a Gaussian kernel and a PolyKernel. To test, examples not included in training data are used.
This is just a demonstration but we can see here how MKL is working behind the scene. What we have is two kernels with significantly different properties. The gaussian kernel defines a function space that is a lot larger than that of the linear kernel or the polynomial kernel. The gaussian kernel has a low width, so it will be able to represent more and more complex relationships between the training data. But it requires enough data to train on. The number of training examples here is 1000, which seems a bit less as total examples are 10000. We hope the polynomial kernel can counter this problem, since it will fit the polynomial for you using a lot less data than the squared exponential. The kernel weights are printed below to add some insight.
Step13: The misclassified examples are surely pretty tough to predict. As seen from the accuracy MKL seems to work a shade better in the case. One could try this out with more and different types of kernels too.
One-class classification using MKL
One-class classification can be done using MKL in shogun. This is demonstrated in the following simple example using MKLOneClass. We will see how abnormal data is detected. This is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
Step14: Now that everything is initialized, let's see MKLOneclass in action by applying it on the test data and on the X-Y grid. | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import shogun as sg
%matplotlib inline
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
Explanation: Multiple Kernel Learning
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a>
This notebook is about multiple kernel learning in shogun. We will see how to construct a combined kernel, determine optimal kernel weights using MKL and use it for different types of classification and novelty detection.
Introduction
Mathematical formulation
Using a Combined kernel
Example: Toy Data
Generating Kernel weights
Binary classification using MKL
MKL for knowledge discovery
Multiclass classification using MKL
One-class classification using MKL
End of explanation
kernel = sg.create_kernel("CombinedKernel")
Explanation: Introduction
<em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well.
Kernel based methods such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$.
Selecting the kernel function $k()$ and its parameters is an important issue in training. Kernels designed by humans usually capture one aspect of data, so choosing one kernel means selecting exactly one such aspect; combining several aspects is therefore often better than selecting a single one.
In shogun, MKL is the base class for multiple kernel learning. It supports binary, one-class, and multiclass classification, as well as regression.
Mathematical formulation (skip if you just want code examples)
</br>In a SVM, defined as:
$$f({\bf x})=\text{sign} \left(\sum_{i=0}^{N-1} \alpha_i k({\bf x}, {\bf x_i})+b\right)$$</br>
where ${\bf x_i},{i = 1,...,N}$ are labeled training examples ($y_i \in {±1}$).
One could make a combination of kernels like:
$${\bf k}(x_i,x_j)=\sum_{k=0}^{K} \beta_k {\bf k_k}(x_i, x_j)$$
where $\beta_k > 0$ and $\sum_{k=0}^{K} \beta_k = 1$
In the multiple kernel learning problem for binary classification one is given $N$ data points ($x_i, y_i$ )
($y_i \in {±1}$), where $x_i$ is translated via $K$ mappings $\phi_k(x) \rightarrow R^{D_k} $, $k=1,...,K$ , from the input into $K$ feature spaces $(\phi_1(x_i),...,\phi_K(x_i))$ where $D_k$ denotes dimensionality of the $k$-th feature space.
In MKL $\alpha_i$,$\beta$ and bias are determined by solving the following optimization program. For details see [1].
$$\mbox{min} \hspace{4mm} \gamma-\sum_{i=1}^N\alpha_i$$
$$ \mbox{w.r.t.} \hspace{4mm} \gamma\in R, \alpha\in R^N \nonumber$$
$$\mbox {s.t.} \hspace{4mm} {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N \alpha_i y_i=0 \nonumber$$
$$ \frac{1}{2}\sum_{i,j=1}^N \alpha_i \alpha_j y_i y_j \,{\bf k_k}({\bf x_i},{\bf x_j}) \leq \gamma, \quad \forall k=1,\ldots,K \nonumber$$
Here C is a pre-specified regularization parameter.
Within shogun this optimization problem is solved using semi-infinite programming. For 1-norm MKL one of the two approaches described in [1] is used.
The first approach (also called the wrapper algorithm) wraps around a single kernel SVMs, alternatingly solving for $\alpha$ and $\beta$. It is using a traditional SVM to generate new violated constraints and thus requires a single kernel SVM and any of the SVMs contained in shogun can be used. In the MKL step either a linear program is solved via glpk or cplex or analytically or a newton (for norms>1) step is performed.
The second much faster but also more memory demanding approach performing interleaved optimization, is integrated into the chunking-based SVMlight.
Using a Combined kernel
Shogun provides an easy way to make a combination of kernels using the CombinedKernel class, to which we can append any kernel from the many options shogun provides. It is especially useful to combine kernels working on different domains and to combine kernels looking at independent features and requires CombinedFeatures to be used. Similarly the CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object
End of explanation
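Before handing things over to shogun, the combined-kernel formula above can be illustrated with plain numpy. The betas here are hand-picked stand-ins for the weights MKL would learn; the point is only that a convex combination of kernel matrices is itself a valid symmetric kernel matrix:

```python
import numpy as np

def gaussian_kernel(X, width):
    # k(x_i, x_j) = exp(-||x_i - x_j||^2 / width)
    # (shogun's GaussianKernel uses a similar width-parameterized form)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-d2 / width)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))

k0 = gaussian_kernel(X, width=0.5)   # narrow kernel
k1 = gaussian_kernel(X, width=25.0)  # wide kernel

beta = np.array([0.7, 0.3])          # beta_k > 0 and sum to 1 (illustrative)
combined = beta[0] * k0 + beta[1] * k1
print(combined.shape)                # (6, 6)
```

Because each Gaussian kernel matrix is positive semi-definite, any non-negative weighting of them is too, which is what lets MKL search over the betas freely.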
num=30;
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1,1]
means[1]=[2,-1.5]
means[2]=[-1,-3]
means[3]=[2,1]
covs=np.array([[1.0,0.0],[0.0,1.0]])
# gmm=sg.create_distribution("GMM")
# gmm.set_pseudo_count(num_components)
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.set_coef(np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
xnte=np.array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in range(num)]).T
xnte1=np.array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
xpte=np.array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in range(num)]).T
xpte1=np.array([gmm.sample() for i in range(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
trainlab=np.concatenate((-np.ones(2*num), np.ones(2*num)))
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
testlab=np.concatenate((-np.ones(10000), np.ones(10000)))
#convert to shogun features and generate labels for data
feats_train=sg.create_features(traindata)
labels=sg.BinaryLabels(trainlab)
_=plt.jet()
plt.figure(figsize=(18,5))
plt.subplot(121)
# plot train data
_=plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=100)
plt.title('Toy data for classification')
plt.axis('equal')
colors=["blue","blue","red","red"]
# a tool for visualisation
from matplotlib.patches import Ellipse
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = np.degrees(np.arctan2(*vecs[:, 0][::-1]))
width, height = 2 * nstd * np.sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
for i in range(num_components):
plt.gca().add_artist(get_gaussian_ellipse_artist(means[i], covs, color=colors[i]))
Explanation: Prediction on toy data
In order to see the prediction capabilities, let us generate some data using the GMM class. The data is sampled by setting means (GMM notebook) such that it sufficiently covers the X-Y grid and is not too easy to classify.
End of explanation
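Shogun's GMM handles the sampling below; conceptually, each draw first picks a mixture component according to the coefficients and then samples a Gaussian around that component's mean. A stdlib-only sketch of that idea (the means here are illustrative, and the covariance is the identity, as in the code that follows):

```python
import random

def sample_gmm(means, coef, std=1.0):
    # pick a mixture component according to coef, then draw an isotropic
    # Gaussian sample around its mean (identity covariance)
    r, acc = random.random(), 0.0
    for mean, c in zip(means, coef):
        acc += c
        if r <= acc:
            return [random.gauss(m, std) for m in mean]
    return [random.gauss(m, std) for m in means[-1]]

random.seed(0)
points = [sample_gmm([[-1.0, 1.0], [2.0, -1.5]], [0.5, 0.5]) for _ in range(200)]
```

Setting one coefficient to 1.0 (as done below with `gmm.set_coef`) collapses the mixture to a single component, which is how each cluster of the toy data is generated.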
width0=0.5
kernel0=sg.create_kernel("GaussianKernel", width=width0)
width1=25
kernel1=sg.create_kernel("GaussianKernel", width=width1)
#combine kernels
kernel=sg.create_kernel("CombinedKernel")
kernel.add("kernel_array", kernel0)
kernel.add("kernel_array", kernel1)
kernel.init(feats_train, feats_train)
mkl = sg.create_machine("MKLClassification", mkl_norm=1, C1=1, C2=1, kernel=kernel, labels=labels)
#train to get weights
mkl.train()
w=kernel.get_subkernel_weights()
print(w)
Explanation: Generating Kernel weights
Just to help us visualize, let's use two Gaussian kernels (GaussianKernel) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e. the $\beta$s in the above equation), training of MKL is required. This generates the weights as seen in this example.
End of explanation
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
#Generate X-Y grid test data
grid=sg.create_features(np.array((np.ravel(x), np.ravel(y))))
kernel0t=sg.create_kernel("GaussianKernel", width=width0)
kernel1t=sg.create_kernel("GaussianKernel", width=width1)
kernelt=sg.create_kernel("CombinedKernel")
kernelt.add("kernel_array", kernel0t)
kernelt.add("kernel_array", kernel1t)
#initialize with test grid
kernelt.init(feats_train, grid)
mkl.put("kernel", kernelt)
#prediction
grid_out=mkl.apply()
z=grid_out.get_values().reshape((size, size))
plt.figure(figsize=(10,5))
plt.title("Classification using MKL")
c=plt.pcolor(x, y, z)
_=plt.contour(x, y, z, linewidths=1, colors='black')
_=plt.colorbar(c)
Explanation: Binary classification using MKL
Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see that on plotting individual subkernels outputs and outputs of the MKL classification. To apply on test features, we need to reinitialize the kernel with kernel.init and pass the test features. After that it's just a matter of doing mkl.apply to generate outputs.
End of explanation
z=grid_out.get("labels").reshape((size, size))
# MKL
plt.figure(figsize=(20,5))
plt.subplot(131, title="Multiple Kernels combined")
c=plt.pcolor(x, y, z)
_=plt.contour(x, y, z, linewidths=1, colors='black')
_=plt.colorbar(c)
comb_ker0=sg.create_kernel("CombinedKernel")
comb_ker0.add("kernel_array", kernel0)
comb_ker0.init(feats_train, feats_train)
mkl.put("kernel", comb_ker0)
mkl.train()
comb_ker0t=sg.create_kernel("CombinedKernel")
comb_ker0t.add("kernel_array", kernel0)
comb_ker0t.init(feats_train, grid)
mkl.put("kernel",comb_ker0t)
out0=mkl.apply()
# subkernel 1
z=out0.get("labels").reshape((size, size))
plt.subplot(132, title="Kernel 1")
c=plt.pcolor(x, y, z)
_=plt.contour(x, y, z, linewidths=1, colors='black')
_=plt.colorbar(c)
comb_ker1=sg.create_kernel("CombinedKernel")
comb_ker1.add("kernel_array",kernel1)
comb_ker1.init(feats_train, feats_train)
mkl.put("kernel", comb_ker1)
mkl.train()
comb_ker1t=sg.create_kernel("CombinedKernel")
comb_ker1t.add("kernel_array", kernel1)
comb_ker1t.init(feats_train, grid)
mkl.put("kernel", comb_ker1t)
out1=mkl.apply()
# subkernel 2
z=out1.get("labels").reshape((size, size))
plt.subplot(133, title="Kernel 2")
c=plt.pcolor(x, y, z)
_=plt.contour(x, y, z, linewidths=1, colors='black')
_=plt.colorbar(c)
Explanation: To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison.
End of explanation
kernelt.init(feats_train, sg.create_features(testdata))
mkl.put("kernel", kernelt)
out = mkl.apply()
evaluator = sg.create_evaluation("ErrorRateMeasure")
print("Test error is %2.2f%% :MKL" % (100*evaluator.evaluate(out,sg.BinaryLabels(testlab))))
comb_ker0t.init(feats_train, sg.create_features(testdata))
mkl.put("kernel", comb_ker0t)
out = mkl.apply()
evaluator = sg.create_evaluation("ErrorRateMeasure")
print("Test error is %2.2f%% :Subkernel1"% (100*evaluator.evaluate(out,sg.BinaryLabels(testlab))))
comb_ker1t.init(feats_train, sg.create_features(testdata))
mkl.put("kernel", comb_ker1t)
out = mkl.apply()
evaluator = sg.create_evaluation("ErrorRateMeasure")
print("Test error is %2.2f%% :subkernel2" % (100*evaluator.evaluate(out,sg.BinaryLabels(testlab))))
Explanation: As we can see, the multiple kernel output seems just about right. Kernel 1 gives a somewhat overfitted output, while kernel 2 seems not so accurate. The kernel weights are adjusted to balance the two and produce a refined output. Looking at the errors of these subkernels gives more food for thought. Most of the time the MKL error is lower, as it incorporates aspects of both kernels: one is strict while the other is lenient, and MKL finds a balance between them.
End of explanation
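The ErrorRateMeasure used above just reports the fraction of disagreeing labels. A hand-rolled equivalent (the function name is ours, not Shogun's):

```python
def error_rate_percent(predicted, actual):
    # fraction of label disagreements, reported in percent
    wrong = sum(1 for p, a in zip(predicted, actual) if p != a)
    return 100.0 * wrong / len(actual)

# two of four labels disagree -> 50.0
rate = error_rate_percent([1, -1, 1, 1], [1, 1, 1, -1])
```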
def circle(x, radius, neg):
y=np.sqrt(np.square(radius)-np.square(x))
if neg:
return[x, -y]
else:
return [x,y]
def get_circle(radius):
neg=False
range0=np.linspace(-radius,radius,100)
pos_a=np.array([circle(i, radius, neg) for i in range0]).T
neg=True
neg_a=np.array([circle(i, radius, neg) for i in range0]).T
c=np.concatenate((neg_a,pos_a), axis=1)
return c
def get_data(r1, r2):
c1=get_circle(r1)
c2=get_circle(r2)
c=np.concatenate((c1, c2), axis=1)
feats_tr=sg.create_features(c)
return c, feats_tr
l=np.concatenate((-np.ones(200),np.ones(200)))
lab=sg.BinaryLabels(l)
#get two circles with radius 2 and 4
c, feats_tr=get_data(2,4)
c1, feats_tr1=get_data(2,3)
_=plt.gray()
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.title("Circles with different separation")
p=plt.scatter(c[0,:], c[1,:], c=lab.get_labels())
plt.subplot(122)
q=plt.scatter(c1[0,:], c1[1,:], c=lab.get_labels())
Explanation: MKL for knowledge discovery
MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundaries of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
End of explanation
def train_mkl(circles, feats_tr):
#Four kernels with different widths
kernel0=sg.create_kernel("GaussianKernel", width=1)
kernel1=sg.create_kernel("GaussianKernel", width=5)
kernel2=sg.create_kernel("GaussianKernel", width=7)
kernel3=sg.create_kernel("GaussianKernel", width=10)
kernel = sg.create_kernel("CombinedKernel")
kernel.add("kernel_array", kernel0)
kernel.add("kernel_array", kernel1)
kernel.add("kernel_array", kernel2)
kernel.add("kernel_array", kernel3)
kernel.init(feats_tr, feats_tr)
mkl = sg.create_machine("MKLClassification", mkl_norm=1, C1=1, C2=2, kernel=kernel, labels=lab)
mkl.train()
w=kernel.get_subkernel_weights()
return w, mkl
def test_mkl(mkl, grid):
kernel0t=sg.create_kernel("GaussianKernel", width=1)
kernel1t=sg.create_kernel("GaussianKernel", width=5)
kernel2t=sg.create_kernel("GaussianKernel", width=7)
kernel3t=sg.create_kernel("GaussianKernel", width=10)
kernelt = sg.create_kernel("CombinedKernel")
kernelt.add("kernel_array", kernel0t)
kernelt.add("kernel_array", kernel1t)
kernelt.add("kernel_array", kernel2t)
kernelt.add("kernel_array", kernel3t)
kernelt.init(feats_tr, grid)
mkl.put("kernel", kernelt)
out=mkl.apply()
return out
size=50
x1=np.linspace(-10, 10, size)
x2=np.linspace(-10, 10, size)
x, y=np.meshgrid(x1, x2)
grid=sg.create_features(np.array((np.ravel(x), np.ravel(y))))
w, mkl=train_mkl(c, feats_tr)
print(w)
out=test_mkl(mkl,grid)
z=out.get_values().reshape((size, size))
plt.figure(figsize=(5,5))
c=plt.pcolor(x, y, z)
_=plt.contour(x, y, z, linewidths=1, colors='black')
plt.title('classification with constant separation')
_=plt.colorbar(c)
Explanation: These are the types of circles we want to distinguish between. We can first try classification with a constant separation between the circles.
End of explanation
range1=np.linspace(5.5,7.5,50)
x=np.linspace(1.5,3.5,50)
temp=[]
for i in range1:
#vary separation between circles
c, feats=get_data(4,i)
w, mkl=train_mkl(c, feats)
temp.append(w)
y=np.array([temp[i] for i in range(0,50)]).T
plt.figure(figsize=(20,5))
_=plt.plot(x, y[0,:], color='k', linewidth=2)
_=plt.plot(x, y[1,:], color='r', linewidth=2)
_=plt.plot(x, y[2,:], color='g', linewidth=2)
_=plt.plot(x, y[3,:], color='y', linewidth=2)
plt.title("Comparison between kernel widths and weights")
plt.ylabel("Weight")
plt.xlabel("Distance between circles")
_=plt.legend(["1","5","7","10"])
Explanation: As we can see, the MKL classifier classifies them as expected. Now let's vary the separation and see how it affects the weights. The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width. This effect should be visible in the results of the MKL, where we use MKL-SVMs with four kernels with different widths (1,5,7,10).
End of explanation
from scipy.io import loadmat, savemat
from os import path, sep
mat = loadmat(sep.join(['..','..','..','data','multiclass', 'usps.mat']))
Xall = mat['data']
Yall = np.array(mat['label'].squeeze(), dtype=np.double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
np.random.seed(0)
subset = np.random.permutation(len(Yall))
#get first 1000 examples
Xtrain = Xall[:, subset[:1000]]
Ytrain = Yall[subset[:1000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
Explanation: In the above plot we see the kernel weightings obtained for the four kernels. Every line shows one weighting. The trajectories of the kernel weightings reflect the development of the learning problem: as long as the problem is difficult, the best separation is obtained with the smallest-width kernel. The low-width kernel loses importance as the distance between the circles increases, and larger kernel widths obtain larger weights in MKL. That is, as the distance between the circles grows, kernels with greater widths are used.
Multiclass classification using MKL
MKL can be used for multiclass classification using the MKLMulticlass class. It is based on the GMNPSVM Multiclass SVM. Its termination criterion is set by set_mkl_epsilon(float64_t eps) and the maximal number of MKL iterations is set by set_max_num_mkliters(int32_t maxnum). The epsilon termination criterion is the L2 norm between the current MKL weights and their counterpart from the previous iteration. We set it to 0.001 as we want pretty accurate weights.
To see this in action let us compare it to the normal GMNPSVM example as in the KNN notebook, just to see how MKL fares in object recognition. We use the USPS digit recognition dataset.
End of explanation
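The epsilon termination criterion described above is easy to state in code: stop once the L2 norm between consecutive weight vectors drops below the tolerance. A sketch of the check Shogun performs internally:

```python
import math

def mkl_converged(w_old, w_new, eps=0.001):
    # L2 norm between the current MKL weights and the previous iterate
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_old, w_new)))
    return diff < eps

# a tiny step between iterates counts as converged
done = mkl_converged([0.70, 0.30], [0.7002, 0.2998])
```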
def plot_example(dat, lab):
for i in range(5):
ax=plt.subplot(1,5,i+1)
plt.title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xtrain, Ytrain)
Explanation: Let's plot five of the examples to get a feel for the dataset.
End of explanation
# MKL training and output
labels = sg.MulticlassLabels(Ytrain)
feats = sg.create_features(Xtrain)
#get test data from 5500 onwards
Xrem=Xall[:,subset[5500:]]
Yrem=Yall[subset[5500:]]
#test features not used in training
feats_rem = sg.create_features(Xrem)
labels_rem = sg.MulticlassLabels(Yrem)
kernel = sg.create_kernel("CombinedKernel")
feats_train = sg.create_features("CombinedFeatures")
feats_test = sg.create_features("CombinedFeatures")
#append gaussian kernel
subkernel = sg.create_kernel("GaussianKernel", width=15)
feats_train.add("feature_array", feats)
feats_test.add("feature_array", feats_rem)
kernel.add("kernel_array", subkernel)
#append PolyKernel
feats = sg.create_features(Xtrain)
subkernel = sg.create_kernel('PolyKernel', degree=10, c=2)
feats_train.add("feature_array", feats)
feats_test.add("feature_array", feats_rem)
kernel.add("kernel_array", subkernel)
kernel.init(feats_train, feats_train)
mkl = sg.create_machine("MKLMulticlass", C=1.2, kernel=kernel,
labels=labels, mkl_eps=0.001, mkl_norm=1)
# set epsilon of SVM
mkl.get("machine").put("epsilon", 1e-2)
mkl.train()
#initialize with test features
kernel.init(feats_train, feats_test)
out = mkl.apply()
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get("labels") != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xbad, Ybad)
w=kernel.get_subkernel_weights()
print(w)
# Single kernel:PolyKernel
C=1
pk = sg.create_kernel('PolyKernel', degree=10, c=2)
svm = sg.create_machine("GMNPSVM", C=C, kernel=pk, labels=labels)
_=svm.train(feats)
out=svm.apply(feats_rem)
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get("labels") != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xbad, Ybad)
#Single Kernel:Gaussian kernel
width=15
C=1
gk=sg.create_kernel("GaussianKernel", width=width)
svm=sg.create_machine("GMNPSVM", C=C, kernel=gk, labels=labels)
_=svm.train(feats)
out=svm.apply(feats_rem)
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get("labels") != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xbad, Ybad)
Explanation: We combine a Gaussian kernel and a PolyKernel. To test, examples not included in training data are used.
This is just a demonstration, but we can see here how MKL is working behind the scenes. What we have is two kernels with significantly different properties. The Gaussian kernel defines a function space that is a lot larger than that of the linear kernel or the polynomial kernel. The Gaussian kernel has a low width, so it will be able to represent more and more complex relationships between the training data. But it requires enough data to train on. The number of training examples here is 1000, which seems a bit low given that there are 10000 examples in total. We hope the polynomial kernel can counter this problem, since it can fit a polynomial using a lot less data than the squared exponential. The kernel weights are printed below to add some insight.
End of explanation
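The contrast between the two kernels can also be seen numerically: a narrow Gaussian kernel all but vanishes for distant points, while a polynomial kernel produces a global response driven by the inner product. A stdlib sketch with illustrative parameter values (kernel conventions simplified):

```python
import math

def gaussian_k(x, y, width):
    # sharply local: decays with squared distance
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / width)

def poly_k(x, y, degree, c):
    # global: grows with the inner product
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

x = [0.0, 0.0]
near, far = [0.1, 0.1], [3.0, 3.0]
g_near, g_far = gaussian_k(x, near, 2.0), gaussian_k(x, far, 2.0)
p = poly_k([1.0, 2.0], [2.0, 1.0], degree=3, c=1.0)
```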
X = -0.3 * np.random.randn(100,2)
traindata = np.r_[X + 2, X - 2].T
X = -0.3 * np.random.randn(20, 2)
testdata = np.r_[X + 2, X - 2].T
trainlab=np.concatenate((np.ones(99),-np.ones(1)))
#convert to shogun features and generate labels for data
feats=sg.create_features(traindata)
labels=sg.BinaryLabels(trainlab)
xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500))
grid=sg.create_features(np.array((np.ravel(xx), np.ravel(yy))))
#test features
feats_t=sg.create_features(testdata)
x_out=(np.random.uniform(low=-4, high=4, size=(20, 2))).T
feats_out=sg.create_features(x_out)
kernel=sg.create_kernel("CombinedKernel")
feats_train=sg.create_features("CombinedFeatures")
feats_test=sg.create_features("CombinedFeatures")
feats_test_out=sg.create_features("CombinedFeatures")
feats_grid=sg.create_features("CombinedFeatures")
#append gaussian kernel
subkernel=sg.create_kernel("GaussianKernel", width=8)
feats_train.add("feature_array", feats)
feats_test.add("feature_array", feats_t)
feats_test_out.add("feature_array", feats_out)
feats_grid.add("feature_array", grid)
kernel.add("kernel_array", subkernel)
#append PolyKernel
feats = sg.create_features(traindata)
subkernel = sg.create_kernel('PolyKernel', degree=10, c=3)
feats_train.add("feature_array", feats)
feats_test.add("feature_array", feats_t)
feats_test_out.add("feature_array", feats_out)
feats_grid.add("feature_array", grid)
kernel.add("kernel_array", subkernel)
kernel.init(feats_train, feats_train)
mkl = sg.create_machine("MKLOneClass", kernel=kernel, labels=labels, interleaved_optimization=False,
mkl_norm=1)
mkl.put("epsilon", 1e-2)
mkl.put('mkl_epsilon', 0.1)
Explanation: The misclassified examples are surely pretty tough to predict. As seen from the accuracies, MKL seems to work a shade better in this case. One could try this out with more and different types of kernels too.
One-class classification using MKL
One-class classification can be done using MKL in shogun. This is demonstrated in the following simple example using MKLOneClass. We will see how abnormal data is detected. This is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
End of explanation
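To make the idea of novelty detection concrete before running the kernelized version, here is a deliberately crude centroid-plus-radius detector. This is not what MKLOneClass does internally (it learns a kernel-based boundary); it only captures the underlying intuition of flagging points that fall outside the region of the training data.

```python
import math

def fit_center(points):
    # centroid of the training samples
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def is_abnormal(x, center, radius):
    # a point farther than `radius` from the centroid is flagged as novel
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, center)))
    return dist > radius

train = [[2.0, 2.1], [1.9, 2.0], [2.1, 1.9], [2.0, 2.0]]
center = fit_center(train)
```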
mkl.train()
print("Weights:")
w=kernel.get_subkernel_weights()
print(w)
#initialize with test features
kernel.init(feats_train, feats_test)
normal_out = mkl.apply()
#test on abnormally generated data
kernel.init(feats_train, feats_test_out)
abnormal_out = mkl.apply()
#test on X-Y grid
kernel.init(feats_train, feats_grid)
grid_out=mkl.apply()
z=grid_out.get_values().reshape((500,500))
z_lab=grid_out.get("labels").reshape((500,500))
a=abnormal_out.get("labels")
n=normal_out.get("labels")
#check for normal and abnormal classified data
idx=np.where(normal_out.get("labels") != 1)[0]
abnormal=testdata[:,idx]
idx=np.where(normal_out.get("labels") == 1)[0]
normal=testdata[:,idx]
plt.figure(figsize=(15,6))
pl =plt.subplot(121)
plt.title("One-class classification using MKL")
_=plt.pink()
c=plt.pcolor(xx, yy, z)
_=plt.contour(xx, yy, z_lab, linewidths=1, colors='black')
_=plt.colorbar(c)
p1=pl.scatter(traindata[0, :], traindata[1,:], cmap=plt.gray(), s=100)
p2=pl.scatter(normal[0,:], normal[1,:], c="red", s=100)
p3=pl.scatter(abnormal[0,:], abnormal[1,:], c="blue", s=100)
p4=pl.scatter(x_out[0,:], x_out[1,:], c=a, cmap=plt.jet(), s=100)
_=pl.legend((p1, p2, p3), ["Training samples", "normal samples", "abnormal samples"], loc=2)
plt.subplot(122)
c=plt.pcolor(xx, yy, z)
plt.title("One-class classification output")
_=plt.gray()
_=plt.contour(xx, yy, z, linewidths=1, colors='black')
_=plt.colorbar(c)
Explanation: Now that everything is initialized, let's see MKLOneClass in action by applying it to the test data and to the X-Y grid.
End of explanation |
8,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/>
<img src="static/cacheme_logo.png" alt="CAChemE" style="width: 300px;"/>
Step1: Note: Adapted from https://github.com/Pyomo/PyomoGettingStarted, by William and Becca Hart
Step2: The declaration of a model is also required. The use of the name <span style="color:darkblue; font-family:Courier">model</span> is not required.
Step3: We declare the parameters $m$ and $n$ using the Pyomo <span style="color:darkblue; font-family:Courier">Param</span> function.
Step4: Although not required, it is convenient to define index sets. In this example we use the <span style="color:darkblue; font-family:Courier">RangeSet</span> function.
Step5: The coefficient and right-hand-side data are defined as indexed parameters. When sets are given as arguments to the <span style="color:darkblue; font-family:Courier">Param</span> function, they indicate that the set will index the parameter.
Step6: Note: In Python, and therefore in Pyomo, any text after a pound sign is considered to be a comment.
Step7: In abstract models, Pyomo expressions are usually provided to objective function and constraint declarations via a function defined with a Python <span style="color:darkblue; font-family:Courier">def</span> statement.
Step8: To declare an objective function, the Pyomo function called <span style="color:darkblue; font-family:Courier">Objective</span> is used.
Step10: Declaration of constraints is similar. A function is declared to deliver the constraint expression. In this case, there can be multiple constraints of the same form because we index the constraints by $i$ in the expression $\sum_{j=1}^n a_{ij} x_j \geq b_i \;\forall i = 1 \ldots m$, which states that we need a constraint for each value of $i$ from one to $m$. In order to parametrize the expression by $i$ we include it as a formal parameter to the function that declares the constraint expression. Technically, we could have used anything for this argument, but that might be confusing.
Step11: Note
Step12: In the object oriented view of all of this, we would say that the model object is a class instance of the <span style="color:darkblue; font-family:Courier">AbstractModel</span> class.
Step13: There are multiple formats that can be used to provide data to a Pyomo model, but the AMPL format works well for our purposes because it contains the names of the data elements together with the data. In AMPL data files, text after a pound sign is treated as a comment. Lines generally do not matter, but statements must be terminated with a semi-colon.
For this particular data file, there is one constraint, so the value of <span style="color:darkblue; font-family:Courier">model.m</span> will be one.
Step14: There is only one constraint, so only two values are needed for <span style="color:darkblue; font-family:Courier">model.a</span>.
Step15: When working with Pyomo (or any other AML), it is convenient to write abstract models in a somewhat more abstract way by using index sets that contain strings rather than index sets that are implied by $1,...,m$ or the summation from 1 to $n$. When this is done, the size of the set is implied by the input, rather than specified directly. Furthermore, the index entries may have no real order. Often, a mixture of integers and indexes and strings as indexes is needed in the same model. To start with an illustration of general indexes, consider a slightly different Pyomo implementation of the model we just presented.
Step16: However, this model can also be fed different data for problems of the same general form using meaningful indexes.
Step17: 1.5 A Simple Concrete Pyomo Model
It is possible to get nearly the same flexible behavior from models declared to be abstract and models declared to be concrete in Pyomo; however, we will focus on a straightforward concrete example here where the data is hard-wired into the model file. Python programmers will quickly realize that the data could have come from other sources.
We repeat the concrete model already given
Step18: Although rule functions can also be used to specify constraints and objectives, in this example we use the <span style="color:darkblue; font-family:Courier">expr</span> option, which is available only in concrete models.
Step19: Since glpk is the default solver, there really is no need to specify it, so the <span style="color:darkblue; font-family:Courier">--solver</span> option can be dropped.
Step20: This yields the following output on the screen
Step21: To see a list of Pyomo command line options, use <span style="color:darkblue; font-family:Courier">pyomo solve --help</span>. | Python Code:
!cat abstract1.py
Explanation: <img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/>
<img src="static/cacheme_logo.png" alt="CAChemE" style="width: 300px;"/>
1. Pyomo Overview
Note: Adapted from https://github.com/Pyomo/PyomoGettingStarted, by William and Becca Hart
1.1 Mathematical Modeling
This chapter provides an introduction to Pyomo: Python Optimization Modeling Objects. A more complete description is contained in Pyomo - Optimization Modeling in Python. Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This capability is commonly associated with algebraic modeling languages (AMLs) such as AMPL, AIMMS, and GAMS. Pyomo’s modeling objects are embedded within Python, a full-featured high-level programming language that contains a rich set of supporting libraries.
Modeling is a fundamental process in many aspects of scientific research, engineering and business. Modeling involves the formulation of a simplified representation of a system or real-world object. Thus, modeling tools like Pyomo can be used in a variety of ways:
Explain phenomena that arise in a system,
Make predictions about future states of a system,
Assess key factors that influence phenomena in a system,
Identify extreme states in a system, that might represent worst-case scenarios or minimal cost plans, and
Analyze trade-offs to support human decision makers.
Mathematical models represent system knowledge with a formalized mathematical language. The following mathematical concepts are central to modern modeling activities:
Variables
Variables represent unknown or changing parts of a model
(e.g. whether or not to make a decision, or the characteristic of
a system outcome). The values taken by the variables are often
referred to as a <span style="color:darkblue">solution</span> and are usually an output of the
optimization process.
Parameters
Parameters represent the data that must be supplied to perform
the optimization. In fact, in some settings the word <span style="color:darkblue">data</span> is used in
place of the word <span style="color:darkblue">parameters</span>.
Relations
These are equations, inequalities or other mathematical relationships
that define how different parts of a model are connected to each
other.
Goals
These are functions that reflect goals and objectives for the system
being modeled.
The widespread availability of computing resources has made the numerical analysis of mathematical models a commonplace activity. Without a modeling language, the process of setting up input files, executing a solver and extracting the final results from the solver output is tedious and error prone. This difficulty is compounded in complex, large-scale real-world applications which are difficult to debug when errors occur. Additionally, there are many different formats used by optimization software packages, and few formats are recognized by many optimizers. Thus the application of multiple optimization solvers to analyze a model introduces additional complexities.
Pyomo is an AML that extends Python to include objects for mathematical modeling. Hart et al. PyomoBook, PyomoJournal compare Pyomo with other AMLs. Although many good AMLs have been developed for optimization models, the following are motivating factors for the development of Pyomo:
Open Source
Pyomo is developed within Pyomo’s open source project to promote
transparency of the modeling framework and encourage community
development of Pyomo capabilities.
Customizable Capability
Pyomo supports a customizable capability through the extensive use
of plug-ins to modularize software components.
Solver Integration
Pyomo models can be optimized with solvers that are written either in
Python or in compiled, low-level languages.
Programming Language
Pyomo leverages a high-level programming language, which has several
advantages over custom AMLs: a very robust language, extensive
documentation, a rich set of standard libraries, support for modern
programming features like classes and functions, and portability to
many platforms.
1.2 Overview of Modeling Components and Processes
Pyomo supports an object-oriented design for the definition of optimization models. The basic steps of a simple modeling process are:
Create model and declare components
Instantiate the model
Apply solver
Interrogate solver results
In practice, these steps may be applied repeatedly with different data or with different constraints applied to the model. However, we focus on this simple modeling process to illustrate different strategies for modeling with Pyomo.
A Pyomo <span style="color:darkblue">model</span> consists of a collection of modeling <span style="color:darkblue">components</span> that define different aspects of the model. Pyomo includes the modeling components that are commonly supported by modern AMLs: index sets, symbolic parameters, decision variables, objectives, and constraints. These modeling components are defined in Pyomo through the following Python classes:
Set
set data that is used to define a model instance
Param
parameter data that is used to define a model instance
Var
decision variables in a model
Objective
expressions that are minimized or maximized in a model
Constraint
constraint expressions that impose restrictions on variable
values in a model
1.3 Abstract Versus Concrete Models
A mathematical model can be defined using symbols that represent data values. For example, the following equations represent a linear program (LP) to find optimal values for the vector $x$ with parameters $n$ and $b$, and parameter vectors $a$ and $c$:
$$
\begin{array}{lll}
\min & \sum_{j=1}^n c_j x_j & \\
s.t. & \sum_{j=1}^n a_{ij} x_j \geq b_i & \forall i = 1 \ldots m\\
& x_j \geq 0 & \forall j = 1 \ldots n
\end{array}
$$
Note:
As a convenience, we use the symbol $\forall$
to mean “for all” or “for each.”
We call this an <span style="color:darkblue">abstract</span> or <span style="color:darkblue">symbolic</span> mathematical model since it relies on unspecified parameter values. Data values can be used to specify a <span style="color:darkblue">model instance</span>. The <span style="color:darkblue; font-family:Courier">AbstractModel</span> class provides a context for defining and initializing abstract optimization models in Pyomo when the data values will be supplied at the time a solution is to be obtained.
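The abstract/concrete distinction is easy to mirror in plain Python: the abstract objective and constraint are functions of data that is supplied later. A sketch (the helper names are ours, not Pyomo's):

```python
def objective(c, x):
    # sum_j c_j x_j, evaluated only once the data c is supplied
    return sum(cj * xj for cj, xj in zip(c, x))

def constraint_holds(a_row, x, b_i):
    # sum_j a_ij x_j >= b_i for one constraint row i
    return sum(aj * xj for aj, xj in zip(a_row, x)) >= b_i

# data of the concrete instance below: c=(2,3), a=((3,4),), b=(1,)
x = (1.0, 0.0)
obj = objective((2.0, 3.0), x)
feasible = constraint_holds((3.0, 4.0), x, 1.0)
```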
In some contexts a mathematical model can be directly defined with the data values supplied at the time of the model definition and built into the model. We call these <span style="color:darkblue">concrete</span> mathematical models. For example, the following LP model is a concrete instance of the previous abstract model:
$$
\begin{array}{ll}
\min & 2x_1 + 3x_2\\
s.t. & 3x_1 + 4x_2 \geq 1\\
& x_1,x_2 \geq 0
\end{array}
$$
The <span style="color:darkblue; font-family:Courier">ConcreteModel</span> class is used to define concrete optimization models in Pyomo.
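For this tiny LP the optimum can be verified by hand: with positive costs the constraint $3x_1 + 4x_2 \geq 1$ binds, and an optimal solution sits at one of the two axis intercepts of that line. A quick pure-Python check, independent of Pyomo:

```python
# axis intercepts of the binding constraint 3*x1 + 4*x2 = 1 with x >= 0
vertices = [(1.0 / 3.0, 0.0), (0.0, 1.0 / 4.0)]

def cost(x):
    return 2.0 * x[0] + 3.0 * x[1]

best = min(vertices, key=cost)  # an LP optimum lies at a vertex
```

Solving the model with Pyomo should recover the same point, $x = (1/3, 0)$ with objective value $2/3$.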
1.4 A Simple Abstract Pyomo Model
We repeat the abstract model already given:
$$
\begin{array}{lll}
\min & \sum_{j=1}^n c_j x_j & \\
s.t. & \sum_{j=1}^n a_{ij} x_j \geq b_i & \forall i = 1 \ldots m\\
& x_j \geq 0 & \forall j = 1 \ldots n
\end{array}
$$
One way to implement this in Pyomo is as follows:
End of explanation
from pyomo.environ import *
Explanation: Note:
Python is interpreted one line at a time. A line continuation character, backslash, is used for Python statements that need to span multiple lines. In Python, indentation has meaning and must be consistent. For example, lines inside a function definition must be indented and the end of the indentation is used by Python to signal the end of the definition.
This first import line is required in every Pyomo model; its purpose is to make the symbols used by Pyomo known to Python.
End of explanation
model = AbstractModel()
Explanation: The declaration of a model is also required. The use of the name <span style="color:darkblue; font-family:Courier">model</span> is not required. Almost any name could be used, but we will use the name <span style="color:darkblue; font-family:Courier">model</span> most of the time in this book. In this example, we are declaring that it will be an abstract model.
End of explanation
model.m = Param(within=NonNegativeIntegers)
model.n = Param(within=NonNegativeIntegers)
Explanation: We declare the parameters $m$ and $n$ using the Pyomo <span style="color:darkblue; font-family:Courier">Param</span> function. This function can take a variety of arguments; this example illustrates use of the <span style="color:darkblue; font-family:Courier">within</span> option that is used by Pyomo to validate the data value that is assigned to the parameter. If this option were not given, then Pyomo would not object to any type of data being assigned to these parameters. As it is, assignment of a value that is not a non-negative integer will result in an error.
End of explanation
model.I = RangeSet(1, model.m)
model.J = RangeSet(1, model.n)
Explanation: Although not required, it is convenient to define index sets. In this example we use the <span style="color:darkblue; font-family:Courier">RangeSet</span> function to declare that the sets will be a sequence of integers starting at 1 and ending at a value specified by the parameters <span style="color:darkblue; font-family:Courier">model.m</span> and <span style="color:darkblue; font-family:Courier">model.n</span>.
End of explanation
model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)
Explanation: The coefficient and right-hand-side data are defined as indexed parameters. When sets are given as arguments to the <span style="color:darkblue; font-family:Courier">Param</span> function, they indicate that the set will index the parameter.
End of explanation
# the next line declares a variable indexed by the set J
model.x = Var(model.J, domain=NonNegativeReals)
Explanation: Note:
In Python, and therefore in Pyomo, any text after a pound sign is considered to be a comment.
The next line, which is interpreted by Python as part of the model, declares the variable $x$. The first argument to the <span style="color:darkblue; font-family:Courier">Var</span> function is a set, so it is defined as an index set for the variable. In this case the variable has only one index set, but multiple sets could be used as was the case for the declaration of the parameter <span style="color:darkblue; font-family:Courier">model.a</span>. The second argument specifies a domain for the variable. This information is part of the model and will be passed to the solver when data is provided and the model is solved. Specification of the <span style="color:darkblue; font-family:Courier">NonNegativeReals</span> domain implements the requirement that the variables be greater than or equal to zero.
End of explanation
def obj_expression(model):
return summation(model.c, model.x)
Explanation: In abstract models, Pyomo expressions are usually provided to objective function and constraint declarations via a function defined with a Python <span style="color:darkblue; font-family:Courier">def</span> statement. The <span style="color:darkblue; font-family:Courier">def</span> statement establishes a name for a function along with its arguments. When Pyomo uses a function to get objective function or constraint expressions, it always passes in the model (i.e., itself) as the first argument, so the model is always the first formal argument when declaring such functions in Pyomo. Additional arguments, if needed, follow. Since summation is an extremely common part of optimization models, Pyomo provides a flexible function to accommodate it. When given two arguments, the <span style="color:darkblue; font-family:Courier">summation</span> function returns an expression for the sum of the product of the two arguments over their indexes. This only works, of course, if the two arguments have the same indexes. If it is given only one argument, it returns an expression for the sum over all indexes of that argument. So in this example, when <span style="color:darkblue; font-family:Courier">summation</span> is passed the arguments <span style="color:darkblue; font-family:Courier">model.c</span>, <span style="color:darkblue; font-family:Courier">model.x</span> it returns an internal representation of the expression $\sum_{j=1}^n c_j x_j$.
End of explanation
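To make what <span style="color:darkblue; font-family:Courier">summation</span> represents concrete, here is a plain-Python sketch (it does not use Pyomo; the index set and data values below are invented purely for illustration) of the sum-of-products expression it stands for:

```python
# Sum of the product of two indexed quantities over a shared index set J,
# i.e. sum_j c_j * x_j -- the expression summation(model.c, model.x) denotes.
J = [1, 2]
c = {1: 2.0, 2: 3.0}   # objective coefficients c_j (made up)
x = {1: 0.5, 2: 1.0}   # a candidate value for each variable x_j (made up)

objective_value = sum(c[j] * x[j] for j in J)
print(objective_value)  # 2.0*0.5 + 3.0*1.0 = 4.0
```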
model.OBJ = Objective(rule=obj_expression)
Explanation: To declare an objective function, the Pyomo function called <span style="color:darkblue; font-family:Courier">Objective</span> is used. The <span style="color:darkblue; font-family:Courier">rule</span> argument gives the name of a function that returns the expression to be used. The default <span style="color:darkblue">sense</span> is minimization. For maximization, the <span style="color:darkblue; font-family:Courier">sense=maximize</span> argument must be used. The name that is declared, which is <span style="color:darkblue; font-family:Courier">OBJ</span> in this case, appears in some reports and can be almost any name.
End of explanation
def ax_constraint_rule(model, i):
    # return the expression for the constraint for i
    return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]
Explanation: Declaration of constraints is similar. A function is declared to deliver the constraint expression. In this case, there can be multiple constraints of the same form because we index the constraints by $i$ in the expression $\sum_{j=1}^n a_{ij} x_j \geq b_i \; \forall i = 1 \ldots m$, which states that we need a constraint for each value of $i$ from one to $m$. In order to parametrize the expression by $i$ we include it as a formal parameter to the function that declares the constraint expression. Technically, we could have used anything for this argument, but that might be confusing. Using an <span style="color:blue; font-family:Courier">i</span> for an $i$ seems sensible in this situation.
End of explanation
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
Explanation: Note:
In Python, indexes are in square brackets and function arguments are in parentheses.
In order to declare constraints that use this expression, we use the Pyomo <span style="color:darkblue; font-family:Courier">Constraint</span> function that takes a variety of arguments. In this case, our model specifies that we can have more than one constraint of the same form and we have created a set, <span style="color:darkblue; font-family:Courier">model.I</span>, over which these constraints can be indexed so that is the first argument to the constraint declaration function. The next argument gives the rule that will be used to generate expressions for the constraints. Taken as a whole, this constraint declaration says that a list of constraints indexed by the set <span style="color:darkblue; font-family:Courier">model.I</span> will be created and for each member of model.I, the function <span style="color:darkblue; font-family:Courier">ax_constraint_rule</span> will be called and it will be passed the model object as well as the member of <span style="color:darkblue; font-family:Courier">model.I</span>.
End of explanation
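The calling pattern just described can be mimicked in plain Python (Pyomo itself is not used here, and the strings below are only placeholders for what would really be Pyomo constraint expressions): the rule function is invoked once per member of the index set.

```python
# Mimic Constraint(model.I, rule=ax_constraint_rule): the rule is called once
# for each member of the index set, yielding one constraint per index value.
I = [1, 2, 3]  # a made-up index set

def fake_constraint_rule(model, i):
    # A real rule would return a Pyomo relational expression for index i.
    return f"constraint {i}"

constraints = {i: fake_constraint_rule(None, i) for i in I}
print(constraints)  # {1: 'constraint 1', 2: 'constraint 2', 3: 'constraint 3'}
```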
!cat abstract1.dat
Explanation: In the object oriented view of all of this, we would say that model object is a class instance of the <span style="color:darkblue; font-family:Courier">AbstractModel</span> class, and <span style="color:darkblue; font-family:Courier">model.J</span> is a <span style="color:darkblue; font-family:Courier">Set</span> object that is contained by this model. Many modeling components in Pyomo can be optionally specified as <span style="color:darkblue">indexed components</span>: collections of components that are referenced using one or more values. In this example, the parameter <span style="color:darkblue; font-family:Courier">model.c</span> is indexed with set <span style="color:darkblue; font-family:Courier">model.J</span>.
In order to use this model, data must be given for the values of the parameters. Here is one file that provides data.
End of explanation
!sed -n '4,6p' abstract1.dat
Explanation: There are multiple formats that can be used to provide data to a Pyomo model, but the AMPL format works well for our purposes because it contains the names of the data elements together with the data. In AMPL data files, text after a pound sign is treated as a comment. Line breaks generally do not matter, but statements must be terminated with a semi-colon.
For this particular data file, there is one constraint, so the value of <span style="color:darkblue; font-family:Courier">model.m</span> will be one and there are two variables (i.e., the vector <span style="color:darkblue; font-family:Courier">model.x</span> is two elements long) so the value of <span style="color:darkblue; font-family:Courier">model.n</span> will be two. These two assignments are accomplished with standard assignments. Notice that in AMPL format input, the name of the model is omitted.
End of explanation
!sed -n '7,18p' abstract1.dat
Explanation: There is only one constraint, so only two values are needed for <span style="color:darkblue; font-family:Courier">model.a</span>. When assigning values to arrays and vectors in AMPL format, one way to do it is to give the index(es) and then the value. The line 1 2 4 causes <span style="color:darkblue; font-family:Courier">model.a[1,2]</span> to get the value 4. Since <span style="color:darkblue; font-family:Courier">model.c</span> has only one index, only one index value is needed so, for example, the line 1 2 causes <span style="color:darkblue; font-family:Courier">model.c[1]</span> to get the value 2. Line breaks generally do not matter in AMPL format data files, so the assignment of the value for the single index of <span style="color:darkblue; font-family:Courier">model.b</span> is given on one line since that is easy to read.
End of explanation
!cat abstract2.py
Explanation: When working with Pyomo (or any other AML), it is convenient to write abstract models in a somewhat more abstract way by using index sets that contain strings rather than index sets that are implied by $1,...,m$ or the summation from 1 to $n$. When this is done, the size of the set is implied by the input, rather than specified directly. Furthermore, the index entries may have no real order. Often, a mixture of integers and indexes and strings as indexes is needed in the same model. To start with an illustration of general indexes, consider a slightly different Pyomo implementation of the model we just presented.
End of explanation
! cat abstract2.dat
Explanation: However, this model can also be fed different data for problems of the same general form using meaningful indexes.
End of explanation
from pyomo.environ import *
model = ConcreteModel()
model.x = Var([1,2], domain=NonNegativeReals)
model.OBJ = Objective(expr = 2*model.x[1] + 3*model.x[2])
model.Constraint1 = Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
Explanation: 1.5 A Simple Concrete Pyomo Model
It is possible to get nearly the same flexible behavior from models declared to be abstract and models declared to be concrete in Pyomo; however, we will focus on a straightforward concrete example here where the data is hard-wired into the model file. Python programmers will quickly realize that the data could have come from other sources.
We repeat the concrete model already given:
$$\min \quad 2x_1 + 3x_2$$
$$s.t. \quad 3x_1 + 4x_2 \geq 1$$
$$x_1,x_2 \geq 0$$
This is implemented as a concrete model as follows:
End of explanation
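As a quick sanity check on this particular instance (worked out by hand here, not produced by Pyomo): at an optimum of this LP the single constraint must be tight, so the candidate vertices are $(1/3, 0)$ and $(0, 1/4)$, and the smaller objective value, $2/3$, is attained at the first one.

```python
# Vertices of {3*x1 + 4*x2 >= 1, x1 >= 0, x2 >= 0} where the inequality is
# tight and one variable is zero; pick the one with the smaller objective.
candidates = [(1/3, 0.0), (0.0, 1/4)]

def objective(x1, x2):
    return 2*x1 + 3*x2

values = {pt: objective(*pt) for pt in candidates}
best = min(values, key=values.get)
print(best, values[best])  # (0.333..., 0.0) with objective value 0.666...
```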
!pyomo solve abstract1.py abstract1.dat --solver=glpk
Explanation: Although rule functions can also be used to specify constraints and objectives, in this example we use the <span style="color:darkblue; font-family:Courier">expr</span> option that is available only in concrete models. This option gives a direct specification of the expression.
1.6 Solving the Simple Examples
Pyomo supports modeling and scripting but does not install a solver automatically. In order to solve a model, there must be a solver installed on the computer to be used. If there is a solver, then the <span style="color:darkblue; font-family:Courier">pyomo</span> command can be used to solve a problem instance.
Suppose that the solver named glpk (also known as glpsol) is installed on the computer. Suppose further that an abstract model is in the file named <span style="color:darkblue; font-family:Courier">abstract1.py</span> and a data file for it is in the file named <span style="color:darkblue; font-family:Courier">abstract1.dat</span>. From the command prompt, with both files in the current directory, a solution can be obtained with the command:
End of explanation
!pyomo solve abstract1.py abstract1.dat --solver=cplex
Explanation: Since glpk is the default solver, there really is no need to specify it, so the <span style="color:darkblue; font-family:Courier">--solver</span> option can be dropped.
Note:
There are two dashes before the command line option names such as solver.
To continue the example, if CPLEX is installed then it can be listed as the solver. The command to solve with CPLEX is
End of explanation
!pyomo solve abstract1.py abstract1.dat --solver=cplex --summary
Explanation: This yields the following output on the screen:
The numbers in square brackets indicate how much time was required for each step. Results are written to the file named <span style="color:darkblue; font-family:Courier">results.json</span>, which has a special structure that makes it useful for post-processing. To see a summary of results written to the screen, use the <span style="color:darkblue; font-family:Courier">--summary</span> option:
End of explanation
!pyomo solve --help
Explanation: To see a list of Pyomo command line options, use:
End of explanation
8,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
Step1: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the
Step2: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best
Step3: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to
Step4: Notice that the output is the same as when we applied this filter to the data
using
Step5: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
Step6: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG that is recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the
Step7:
Step8: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(0, 60).load_data() # use just 60 seconds of data, to save memory
Explanation: Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
:depth: 2
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>. We'll also crop the data to 60 seconds
(to save memory on the documentation server):
End of explanation
mag_channels = mne.pick_types(raw.info, meg='mag')
raw.plot(duration=60, order=mag_channels, proj=False,
n_channels=len(mag_channels), remove_dc=False)
Explanation: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the :meth:~mne.io.Raw.plot method, though it is
helpful to disable channel-wise DC shift correction to make slow drifts
more readily visible. Here we plot 60 seconds, showing all the magnetometer
channels:
End of explanation
for cutoff in (0.1, 0.2):
raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None)
fig = raw_highpass.plot(duration=60, order=mag_channels, proj=False,
n_channels=len(mag_channels), remove_dc=False)
fig.subplots_adjust(top=0.9)
fig.suptitle('High-pass filtered at {} Hz'.format(cutoff), size='xx-large',
weight='bold')
Explanation: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best:
End of explanation
filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'],
l_freq=0.2, h_freq=None)
Explanation: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to :meth:raw.filter()
<mne.io.Raw.filter> above to the function :func:mne.filter.create_filter
to get the filter parameters, and then pass the filter parameters to
:func:mne.viz.plot_filter. :func:~mne.filter.create_filter also requires
parameters data (a :class:NumPy array <numpy.ndarray>) and sfreq
(the sampling frequency of the data), so we'll extract those from our
:class:~mne.io.Raw object:
End of explanation
mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5))
Explanation: Notice that the output is the same as when we applied this filter to the data
using :meth:raw.filter() <mne.io.Raw.filter>. You can now pass the filter
parameters (and the sampling frequency) to :func:~mne.viz.plot_filter to
plot the filter:
End of explanation
def add_arrows(axes):
# add some arrows at 60 Hz and its harmonics
for ax in axes:
freqs = ax.lines[-1].get_xdata()
psds = ax.lines[-1].get_ydata()
for freq in (60, 120, 180, 240):
idx = np.searchsorted(freqs, freq)
# get ymax of a small region around the freq. of interest
y = psds[(idx - 4):(idx + 5)].max()
ax.arrow(x=freqs[idx], y=y + 18, dx=0, dy=-12, color='red',
width=0.1, head_width=3, length_includes_head=True)
fig = raw.plot_psd(fmax=250, average=True)
add_arrows(fig.axes[:2])
Explanation: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
:meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little
function that adds arrows to the spectrum plot to highlight the artifacts:
End of explanation
meg_picks = mne.pick_types(raw.info) # meg=True, eeg=False are the defaults
freqs = (60, 120, 180, 240)
raw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks)
for title, data in zip(['Un', 'Notch '], [raw, raw_notch]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
Explanation: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG that is recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the :class:~mne.io.Raw object, specifying an array of
frequencies to be attenuated. Since the EEG channels are relatively
unaffected by the power line noise, we'll also specify a picks argument
so that only the magnetometers and gradiometers get filtered:
End of explanation
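The frequencies passed to the notch filter are simply the power-line fundamental and its harmonics; as a small aside (assuming 60 Hz mains, as in this North American recording — European data would use 50 Hz), they can be generated rather than typed out:

```python
import numpy as np

line_freq = 60  # AC power line frequency; an assumption, not read from the data
freqs = tuple(line_freq * np.arange(1, 5))  # fundamental plus three harmonics
print(freqs)  # (60, 120, 180, 240)
```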
raw_downsampled = raw.copy().resample(sfreq=200)
for data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']):
fig = data.plot_psd(average=True)
fig.subplots_adjust(top=0.9)
fig.suptitle(title)
plt.setp(fig.axes, xlim=(0, 300))
Explanation: :meth:~mne.io.Raw.notch_filter also has parameters to control the notch
width, transition bandwidth and other aspects of the filter. See the
docstring for details.
Resampling
EEG and MEG recordings are notable for their high temporal precision, and are
often recorded with sampling rates around 1000 Hz or higher. This is good
when precise timing of events is important to the experimental design or
analysis plan, but also consumes more memory and computational resources when
processing the data. In cases where high-frequency components of the signal
are not of interest and precise timing is not needed (e.g., computing EOG or
ECG projectors on a long recording), downsampling the signal can be a useful
time-saver.
In MNE-Python, the resampling methods (:meth:raw.resample()
<mne.io.Raw.resample>, :meth:epochs.resample() <mne.Epochs.resample> and
:meth:evoked.resample() <mne.Evoked.resample>) apply a low-pass filter to
the signal to avoid aliasing, so you don't need to explicitly filter it
yourself first. This built-in filtering that happens when using
:meth:raw.resample() <mne.io.Raw.resample>, :meth:epochs.resample()
<mne.Epochs.resample>, or :meth:evoked.resample() <mne.Evoked.resample> is
a brick-wall filter applied in the frequency domain at the Nyquist
frequency of the desired new sampling rate. This can be clearly seen in the
PSD plot, where a dashed vertical line indicates the filter cutoff; the
original data had an existing lowpass at around 172 Hz (see
raw.info['lowpass']), and the data resampled from 600 Hz to 200 Hz gets
automatically lowpass filtered at 100 Hz (the Nyquist frequency_ for a
target rate of 200 Hz):
End of explanation
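To see why the built-in anti-alias filtering matters, here is a NumPy-only sketch (not part of the tutorial; the tone frequency is chosen for illustration): a 150 Hz tone sampled at 600 Hz and naively decimated to 200 Hz — with no low-pass filtering — reappears at the aliased frequency $|150 - 200| = 50$ Hz.

```python
import numpy as np

fs = 600                            # original sampling rate in Hz
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * 150 * t)     # 150 Hz tone, above the new Nyquist of 100 Hz

x_dec = x[::3]                      # naive decimation to 200 Hz, no filtering
spectrum = np.abs(np.fft.rfft(x_dec))
peak_hz = int(np.argmax(spectrum))  # bins are 1 Hz wide (200 samples over 1 s)
print(peak_hz)  # the 150 Hz tone shows up aliased down to 50 Hz
```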
current_sfreq = raw.info['sfreq']
desired_sfreq = 90 # Hz
decim = np.round(current_sfreq / desired_sfreq).astype(int)
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.
raw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)
events = mne.find_events(raw_filtered)
epochs = mne.Epochs(raw_filtered, events, decim=decim)
print('desired sampling frequency was {} Hz; decim factor of {} yielded an '
'actual sampling frequency of {} Hz.'
.format(desired_sfreq, decim, epochs.info['sfreq']))
Explanation: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream:
Performing resampling on :class:~mne.io.Raw data (before epoching) will
negatively affect the temporal precision of Event arrays, by causing
jitter_ in the event timing. This reduced temporal precision will
propagate to subsequent epoching operations.
Performing resampling after epoching can introduce edge artifacts on
every epoch, whereas filtering the :class:~mne.io.Raw object will only
introduce artifacts at the start and end of the recording (which is often
far enough from the first and last epochs to have no effect on the
analysis).
The following section suggests best practices to mitigate both of these
issues.
Best practices
To avoid the reduction in temporal precision of events that comes with
resampling a :class:~mne.io.Raw object, and also avoid the edge artifacts
that come with filtering an :class:~mne.Epochs or :class:~mne.Evoked
object, the best practice is to:
low-pass filter the :class:~mne.io.Raw data at or below
$\frac{1}{3}$ of the desired sample rate, then
decimate the data after epoching, by either passing the decim
parameter to the :class:~mne.Epochs constructor, or using the
:meth:~mne.Epochs.decimate method after the :class:~mne.Epochs have
been created.
<div class="alert alert-danger"><h4>Warning</h4><p>The recommendation for setting the low-pass corner frequency at
$\frac{1}{3}$ of the desired sample rate is a fairly safe rule of
thumb based on the default settings in :meth:`raw.filter()
<mne.io.Raw.filter>` (which are different from the filter settings used
inside the :meth:`raw.resample() <mne.io.Raw.resample>` method). If you
use a customized lowpass filter (specifically, if your transition
bandwidth is wider than 0.5× the lowpass cutoff), downsampling to 3× the
lowpass cutoff may still not be enough to avoid `aliasing`_, and
MNE-Python will not warn you about it (because the :class:`raw.info
<mne.Info>` object only keeps track of the lowpass cutoff, not the
transition bandwidth). Conversely, if you use a steeper filter, the
warning may be too sensitive. If you are unsure, plot the PSD of your
filtered data *before decimating* and ensure that there is no content in
the frequencies above the `Nyquist frequency`_ of the sample rate you'll
end up with *after* decimation.</p></div>
Note that this method of manually filtering and decimating is exact only when
the original sampling frequency is an integer multiple of the desired new
sampling frequency. Since the sampling frequency of our example data is
600.614990234375 Hz, ending up with a specific sampling frequency like (say)
90 Hz will not be possible:
End of explanation
8,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 2
Work on this before the next lecture on 24 April. We will talk about questions, comments, and solutions during the exercise after the third lecture.
Please do form study groups! When you do, make sure you can explain everything in your own words, do not simply copy&paste from others.
The solutions to a lot of these problems can probably be found with Google. Please don't. You will not learn a lot by copy&pasting from the internet.
If you want to get credit/examination on this course please upload your work to your GitHub repository for this course before the next lecture starts and post a link to your repository in this thread. If you worked on things together with others please add their names to the notebook so we can see who formed groups.
These are some useful default imports for plotting and numpy
Step1: Question 1
Correlation between trees. This question is about investigating the correlation between decision trees and how this effects an ensemble constructed from them. There are three methods
for adding randomisation to the tree growing process
Step2: RandomForestClassifier
In RandomForestClassifier each of the "n_estimators" decision trees is constructed using a subsample of the training set taken with replacement (1.). Moreover, only a subset of features (selected at random) is used to construct each tree (2.).
RandomForestClassifier then provides the predicted class based on averaging the predictions of the trees.
For this classifier, we initialize two instances, one with bootstrap = True and the other with bootstrap = False. Then, for each of the two classifiers, we vary the "max_features" attribute.
Step3: As we see, bootstrapping samples tends to keep the average correlation between the predictions of different trees in the random forest roughly constant. On the other hand, not bootstrapping tends to increase the correlation as "max_features" increases.
BaggingClassifier
Step4: Question 2
Compare the feature importances calculated by a RandomForestClassifier, ExtraTreesClassifier and GradientBoostedTreesClassifier on the digits dataset. You might have to tune n_estimators to get good performance. Which parts of the images is the most important and do you agree with the interpretation of the classifiers? (Bonus) Do the importances change if you change to problem to be a classification problem of odd vs even digit?
You can load the data set with
Step5: Question 3
This is a regression problem. Use a gradient boosted tree regressor (tune the max_depth, learning_rate and n_estimators parameters) to study the importance of the different features as well as the partial dependence of the output on individual features as well as pairs of features.
can you identify uninformative features?
how do the interactions between the features show up in the partial dependence plots?
(Help
Step6: (Bonus) Question 4
House prices in California. Use a gradient boosted regression tree model to build a model that can predict house prices in California (GradientBoostingRegressor is your friend).
Plot each of the features as a scatter plot with the target to learn about each variable. You can also make a plot of two features and use the target as colour.
Fit a model and tune the model complexity using a training and test data set.
Explore the feature importances and partial dependences that are important to the house price. | Python Code:
%config InlineBackend.figure_format='retina'
%matplotlib inline
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["font.size"] = 14
from sklearn.utils import check_random_state
Explanation: Exercise 2
Work on this before the next lecture on 24 April. We will talk about questions, comments, and solutions during the exercise after the third lecture.
Please do form study groups! When you do, make sure you can explain everything in your own words, do not simply copy&paste from others.
The solutions to a lot of these problems can probably be found with Google. Please don't. You will not learn a lot by copy&pasting from the internet.
If you want to get credit/examination on this course please upload your work to your GitHub repository for this course before the next lecture starts and post a link to your repository in this thread. If you worked on things together with others please add their names to the notebook so we can see who formed groups.
These are some useful default imports for plotting and numpy
End of explanation
from scipy.stats import pearsonr
def corr_from_trees(classifier, X_train, y_train, X_test):
clf_fit = classifier.fit(X_train, y_train)
num_est = len(clf_fit.estimators_)
siz = int(num_est*(num_est-1)/2)
corr_array = np.zeros(siz)
for i in range(num_est):
# Calculate the prediction from the i-th tree of the forest
pr_i = clf_fit.estimators_[i].predict(X_test)
for j in range(i+1,num_est):
pr_j = clf_fit.estimators_[j].predict(X_test)
# Calculate the Pearson correlation coefficient between the predictions of trees i and j
indx_corr_array_i_j = int((num_est*(num_est-1)/2) - (num_est-i)*((num_est-i)-1)/2 + j - i - 1)
corr_array[indx_corr_array_i_j] = pearsonr(pr_i, pr_j)[0]
return corr_array
# Load the dataset and split it into training and test sets.
# load_diabetes has a continuous target, so binarize it at the median
# to obtain a valid two-class problem for the classifiers below.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
diabetes = load_diabetes()
X = diabetes.data
y = (diabetes.target > np.median(diabetes.target)).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=66)
Explanation: Question 1
Correlation between trees. This question is about investigating the correlation between decision trees and how it affects an ensemble constructed from them. There are three methods
for adding randomisation to the tree growing process:
grow each tree on a bootstrap sample
for each tree select a subset of features at random
pick the best random split point
You can use RandomForestClassifier, BaggingClassifier, and ExtraTreesClassifier to achieve various different sets of the above three strategies.
Show how the average amount of correlation between the trees in the ensemble varies as a function of bootstrap yes/no, number of max_features, and picking the best split point at random or not.
Pick one of the classification datasets from http://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets.
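As a compact warm-up for what "correlation between trees" means, here is a self-contained sketch that fits one forest and averages the pairwise Pearson correlations of its members' predictions (the dataset and settings are illustrative only, not part of the exercise):

```python
# Illustrative sketch: average pairwise correlation between ensemble members.
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_demo, y_demo = make_classification(n_samples=300, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X_demo, y_demo)

# each tree in a random forest sees all columns at predict time
preds = np.array([tree.predict(X_demo) for tree in forest.estimators_])
corrs = [np.corrcoef(preds[i], preds[j])[0, 1]
         for i, j in combinations(range(len(preds)), 2)]
mean_corr = float(np.mean(corrs))
print("mean pairwise correlation:", mean_corr)
```

The full solution below repeats this measurement while sweeping bootstrap and max_features.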
End of explanation
# RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier
param_range = range(1,X.shape[1])
list_mean_corr_rfc = []
list_std_corr_rfc = []
list_mean_corr_rfc_no_bootstrap = []
list_std_corr_rfc_no_bootstrap = []
for mf in param_range:
rfc = RandomForestClassifier(n_estimators = 100, n_jobs=-1, max_features=mf)
rfc_no_bootstrap = RandomForestClassifier(n_estimators = 100, n_jobs=-1, bootstrap=False, max_features=mf)
corr_rfc = corr_from_trees(rfc, X_train, y_train, X_test)
corr_rfc_no_bootstrap = corr_from_trees(rfc_no_bootstrap, X_train, y_train, X_test)
mean_corr_rfc = np.mean(corr_rfc, axis=0)
std_corr_rfc = np.std(corr_rfc, axis=0)
mean_corr_rfc_no_bootstrap = np.mean(corr_rfc_no_bootstrap, axis=0)
std_corr_rfc_no_bootstrap = np.std(corr_rfc_no_bootstrap, axis=0)
list_mean_corr_rfc.append(mean_corr_rfc)
list_std_corr_rfc.append(std_corr_rfc)
list_mean_corr_rfc_no_bootstrap.append(mean_corr_rfc_no_bootstrap)
list_std_corr_rfc_no_bootstrap.append(std_corr_rfc_no_bootstrap)
list_mean_corr_rfc = np.array(list_mean_corr_rfc)
list_std_corr_rfc = np.array(list_std_corr_rfc)
list_mean_corr_rfc_no_bootstrap = np.array(list_mean_corr_rfc_no_bootstrap)
list_std_corr_rfc_no_bootstrap = np.array(list_std_corr_rfc_no_bootstrap)
plt.plot(param_range, list_mean_corr_rfc, label='Bootstrap True', lw=4, color='forestgreen')
plt.plot(param_range, list_mean_corr_rfc + list_std_corr_rfc, param_range, list_mean_corr_rfc - list_std_corr_rfc, alpha=0.5, ls="--", color='forestgreen')
plt.plot(param_range, list_mean_corr_rfc_no_bootstrap, label='Bootstrap False', lw=4, color='darkviolet')
plt.plot(param_range, list_mean_corr_rfc_no_bootstrap + list_std_corr_rfc_no_bootstrap, param_range, list_mean_corr_rfc_no_bootstrap - list_std_corr_rfc_no_bootstrap, alpha=0.5, ls="--", color='darkviolet')
plt.xlabel("Max features")
plt.ylabel("Correlation between trees")
plt.legend(loc='best')
Explanation: RandomForestClassifier
In RandomForestClassifier each of the "n_estimators" decision trees is constructed using a subsample of the training set taken with replacement (1.). Moreover, only a subset of features (selected at random) is used to construct each tree (2.).
RandomForestClassifier then provide the predicted class based on the averaging of the prediction of the trees.
For such classifier, we initialize two classifier instances, on with bootstrap = True and the other with False. Then, for each of the two classifiers, we vary the "max_features" attribute.
End of explanation
# BaggingClassifier
# Note: unlike RandomForestClassifier, each bagged estimator is trained on its
# own random feature subset, so X_test must be sliced with estimators_features_
# before querying the individual trees.
from itertools import combinations
from sklearn.ensemble import BaggingClassifier

def corr_from_bagging(clf, X_train, y_train, X_test):
    clf_fit = clf.fit(X_train, y_train)
    preds = [est.predict(X_test[:, feats])
             for est, feats in zip(clf_fit.estimators_, clf_fit.estimators_features_)]
    return np.array([pearsonr(p_i, p_j)[0]
                     for p_i, p_j in combinations(preds, 2)])

param_range = range(1, X.shape[1])
mean_corr_bag, mean_corr_bag_no_bootstrap = [], []
for mf in param_range:
    bag = BaggingClassifier(n_estimators=100, n_jobs=-1, max_features=mf)
    bag_no_bootstrap = BaggingClassifier(n_estimators=100, n_jobs=-1,
                                         bootstrap=False, max_features=mf)
    mean_corr_bag.append(np.mean(corr_from_bagging(bag, X_train, y_train, X_test)))
    mean_corr_bag_no_bootstrap.append(
        np.mean(corr_from_bagging(bag_no_bootstrap, X_train, y_train, X_test)))

plt.plot(param_range, mean_corr_bag, label='Bootstrap True', lw=4, color='forestgreen')
plt.plot(param_range, mean_corr_bag_no_bootstrap, label='Bootstrap False', lw=4, color='darkviolet')
plt.xlabel("Max features")
plt.ylabel("Correlation between trees")
plt.legend(loc='best')
Explanation: As we see, bootstrapping the samples keeps the average correlation between the predictions of different trees in the random forest roughly constant. Without bootstrapping, the correlation tends to increase as "max_features" increases.
BaggingClassifier
End of explanation
# your answer
Explanation: Question 2
Compare the feature importances calculated by a RandomForestClassifier, ExtraTreesClassifier and GradientBoostingClassifier on the digits dataset. You might have to tune n_estimators to get good performance. Which parts of the images are the most important, and do you agree with the interpretation of the classifiers? (Bonus) Do the importances change if you turn the problem into classifying odd vs even digits?
You can load the data set with: http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits
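A hedged starting point for this question (the estimator settings below are my own choices, not prescribed by the exercise):

```python
# Sketch: compare feature importances of three tree ensembles on digits.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split

digits = load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(digits.data, digits.target,
                                          random_state=0)
models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100, random_state=0),
    "GradientBoosting": GradientBoostingClassifier(n_estimators=50, random_state=0),
}
importances = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # 64 importances, one per pixel; reshape to the 8x8 image grid
    importances[name] = model.feature_importances_.reshape(8, 8)
    print(name, "test accuracy:", model.score(X_te, y_te))
```

Visualizing each importances[name] (e.g. with plt.matshow) shows which pixels the models rely on; the border pixels, which are nearly constant across digits, should get importance close to zero.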
End of explanation
from sklearn.ensemble import GradientBoostingRegressor
def make_data(n_samples=800, n_features=8, noise=0.2, random_state=2):
generator = check_random_state(random_state)
X = generator.rand(n_samples, n_features)
y = 10 * (X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 \
+ 10 * X[:, 3] + 10 * X[:, 4] + noise * generator.randn(n_samples)
return X, y
X,y = make_data()
# your solution
Explanation: Question 3
This is a regression problem. Use a gradient boosted tree regressor (tune the max_depth, learning_rate and n_estimators parameters) to study the importance of the different features as well as the partial dependence of the output on individual features as well as pairs of features.
can you identify uninformative features?
how do the interactions between the features show up in the partial dependence plots?
(Help: rgr = GradientBoostingRegressor(n_estimators=200, max_depth=2, learning_rate=0.1) seems to work quite well)
(Help: to produce 1D and 2D partial dependence plots pass [0,1, (0,1)] as the features argument of plot_partial_dependence. More details in the function's documentation.)
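A sketch using the hinted settings. The data generation is inlined (same formula and seed as make_data above) so the snippet stands alone; the partial dependence plots mentioned in the hint would follow from the fitted model in the same way:

```python
# Sketch: fit the hinted model and inspect feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(2)
X_q3 = rng.rand(800, 8)
y_q3 = (10 * X_q3[:, 0] * X_q3[:, 1] + 20 * (X_q3[:, 2] - 0.5) ** 2
        + 10 * X_q3[:, 3] + 10 * X_q3[:, 4] + 0.2 * rng.randn(800))

rgr = GradientBoostingRegressor(n_estimators=200, max_depth=2,
                                learning_rate=0.1, random_state=0)
rgr.fit(X_q3, y_q3)
# features 5-7 never enter the target, so their importances should be near 0
print(np.round(rgr.feature_importances_, 3))
```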
End of explanation
from sklearn.datasets import fetch_california_housing
cal_housing = fetch_california_housing()
# if the above doesn't work, download `cal_housing_py3.pkl` from the GitHub repository
# and adjust the path to the downloaded file which is passed to `load()`
# uncomment the following lines
#from sklearn.externals.joblib import load
#d = load('/home/username/Downloads/cal_housing_py3.pkz')
#X, y = d[:,1:], d[:,0]/100000
#X[:, 2] /= X[:, 5]
#X[:, 3] /= X[:, 5]
#X[:, 5] = X[:, 4] / X[:, 5]
# your solution
Explanation: (Bonus) Question 4
House prices in California. Use gradient boosted regression trees to build a model that can predict house prices in California (GradientBoostingRegressor is your friend).
Plot each of the features as a scatter plot with the target to learn about each variable. You can also make a plot of two features and use the target as colour.
Fit a model and tune the model complexity using a training and test data set.
Explore the feature importances and the partial dependence of the house price on the most important features.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook Tour
Step1: The Jupyter Notebook is a web-based application that enables users to create documents that combine live code wth narrative next, equations, images, visualizations and HTML/JavaScript widgets.
This notebook gives an overview of the Jupyter Notebook and the standard IPython Kernel for running Python code.
Interactive exploration
First and foremost, Jupyter is an interactive environment for writing and running code. We provide features to make this as pleasant as possible.
Step2: Inline plotting
Step3: Seamless access to the system shell
Step4: Narrative text and equations
In addition to code cells, the Notebook offers Markdown cells, which enable the user to create narrative text with embedded LaTeX equations. Here is a cell that includes Maxwell's equations
Step5: Images
The Image object has a JPEG/PNG representation that is rendered by the Notebook
Step6: This representation is displayed if the object is returned from an expression
Step7: Or you can manually display the object using display
Step9: HTML
The HTML object has an HTML representation
Step10: JavaScript
The Javascript object has a "representation" that runs JavaScript code in the context of the Notebook.
Step11: LaTeX
This display architecture also understands objects that have a LaTeX representation. This is best illustrated by SymPy, which is a symbolic mathematics package for Python.
Step12: When a symbolic expression is passed to display or returned from an expression, the LaTeX representation is computed and displayed in the Notebook
Step13: nbviewer
nbviewer is a website for sharing Jupyter Notebooks on the web. It uses nbconvert to create a static HTML rendering of any notebook on the internet. This makes it easy to share notebooks with anyone in the world, without their having to install, or even know anything about, Jupyter or IPython. | Python Code:
from IPython.display import display, Image, HTML
from talktools import website, nbviewer
Explanation: Notebook Tour
End of explanation
2+2
import math
math.atan?
Explanation: The Jupyter Notebook is a web-based application that enables users to create documents that combine live code with narrative text, equations, images, visualizations and HTML/JavaScript widgets.
This notebook gives an overview of the Jupyter Notebook and the standard IPython Kernel for running Python code.
Interactive exploration
First and foremost, Jupyter is an interactive environment for writing and running code. We provide features to make this as pleasant as possible.
End of explanation
%pylab inline
plot(rand(50))
Explanation: Inline plotting:
End of explanation
!ls -al
Explanation: Seamless access to the system shell:
End of explanation
from IPython.display import display
Explanation: Narrative text and equations
In addition to code cells, the Notebook offers Markdown cells, which enable the user to create narrative text with embedded LaTeX equations. Here is a cell that includes Maxwell's equations:
\begin{aligned}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{aligned}
Markdown cells enable users to create complex narratives that tell stories using code and data.
We are calling this literate computing as it is similar to Knuth's literate programming, but involves live code and data.
Rich output
Programming languages, including Python, allow the writing of textual output to stdout and stderr. Jupyter and IPython extend this idea and allow objects to declare rich output representations:
JavaScript
HTML
LaTeX
PDF
PNG/JPEG
SVG
In IPython, the display function is like print for these rich representations:
End of explanation
from IPython.display import Image
i = Image("images/jupyter_logo.png")
Explanation: Images
The Image object has a JPEG/PNG representation that is rendered by the Notebook:
End of explanation
print(i)
i
Explanation: This representation is displayed if the object is returned from an expression:
End of explanation
display(i)
Explanation: Or you can manually display the object using display:
End of explanation
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
Explanation: HTML
The HTML object has an HTML representation:
End of explanation
from IPython.display import Javascript
display(Javascript("alert('hi');"))
Explanation: JavaScript
The Javascript object has a "representation" that runs JavaScript code in the context of the Notebook.
End of explanation
from __future__ import division
from sympy import *
x, y, z = symbols("x y z")
init_printing(use_latex='mathjax')
Explanation: LaTeX
This display architecture also understands objects that have a LaTeX representation. This is best illustrated by SymPy, which is a symbolic mathematics package for Python.
End of explanation
Rational(3,2)*pi + exp(I*x) / (x**2 + y)
(1/cos(x)).series(x, 0, 12)
Explanation: When a symbolic expression is passed to display or returned from an expression, the LaTeX representation is computed and displayed in the Notebook:
End of explanation
website('https://nbviewer.jupyter.org')
Explanation: nbviewer
nbviewer is a website for sharing Jupyter Notebooks on the web. It uses nbconvert to create a static HTML rendering of any notebook on the internet. This makes it easy to share notebooks with anyone in the world, without their having to install, or even know anything about, Jupyter or IPython.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Netscan
Here is a sample of some of the capabilities of the netscan library.
Step1: Get host info
On macOS or Linux, GetHostName should be able to resolve a computer's IP address to a hostname.
Step2: WhoIs uses a REST API to recover a current record of an external IP address.
Step3: MacLookup uses a REST API to turn a MAC address into a vendor name.
Step4: Commands
Unfortunately it is difficult in python to execute simple commands and get the returned output. This is a simple wrapper around the obnoxiously complex subprocess command in Python.
Step5: Passive Scanning
Unfortunately I currently don't know how to run python code in a jupyter notebook using sudo. Therefore I can't do live captures. Instead I will use a pcap.
Step6: Active Scanning
Don't know how to do this using sudo. | Python Code:
from __future__ import print_function
from netscan.lib import WhoIs, GetHostName, MacLookup, Commands
import pprint as pp
Explanation: Netscan
Here is a sample of some the capabilities of the netscan library.
End of explanation
print(GetHostName('192.168.1.13').name)
print(GetHostName('127.0.0.1').name)
Explanation: Get host info
On macOS or Linux, GetHostName should be able to resolve a computer's IP address to a hostname.
End of explanation
pp.pprint(WhoIs('216.58.217.4').record)
info = WhoIs('216.58.217.4')
print('CIDR:', info.CIDR)
print('Organization:', info.Organization)
Explanation: WhoIs uses a REST API to recover a current record of an external IP address.
End of explanation
print(MacLookup('58:b0:35:f2:55:88').vendor)
Explanation: MacLookup uses a REST API to turn a MAC address into a vendor name.
End of explanation
cmd = Commands()
ret = cmd.getoutput('ls -alh')
print(ret)
print(Commands().getoutput('echo hi'))
Explanation: Commands
Unfortunately it is difficult in python to execute simple commands and get the returned output. This is a simple wrapper around the obnoxiously complex subprocess command in Python.
End of explanation
from netscan.PassiveScan import PassiveMapper
nmap = []
pm = PassiveMapper()
nmap = pm.pcap('../tests/test.pcap')
nmap = pm.filter(nmap)
nmap = pm.combine(nmap)
nmap = pm.combine(nmap)
pp.pprint(nmap)
Explanation: Passive Scanning
Unfortunately I currently don't know how to run python code in a jupyter notebook using sudo. Therefore I can't do live captures. Instead I will use a pcap.
End of explanation
from netscan.ActiveScan import ActiveMapper
am = ActiveMapper(range(1, 1024))
hosts = am.scan('en1')
pp.pprint(hosts)
print(GetHostName('192.168.1.13').name)
Explanation: Active Scanning
Don't know how to do this using sudo.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning word embeddings - word2vec
- Saurabh Mathur
The aim of this experiment is to use the algorithm developed by Tomas Mikolov et al. to learn high quality vector representations of text.
The skip-gram model
Given,
a sequence of words $ w_1, w_2, \dots, w_T $, predict the words in the surrounding context.
The objective is to maximize average log probability.
$$ AverageLogProbability = \frac{1}{T} \sum_{t=1}^{T} \sum_{-c \leqslant j\leqslant c, j \neq 0} log\ p (w_{t+j} | w_t) $$
where $ c $ is the length of context.
Basic skip-gram model
The basic skip-gram formulation defines $ p (w_{t+j} | w_t) $ in terms of softmax as -
$$ p(w_O \mid w_I) = \frac{ \exp\left( {v'_{w_O}}^{\top} v_{w_I} \right) }{ \sum_{w=1}^{W} \exp\left( {v'_{w}}^{\top} v_{w_I} \right) } $$
where $v_w$ and $v'_w$ are the "input" and "output" vector representations of word $w$.
This is extremely costly and thus impractical, as $W$ is huge (typically $10^5$ to $10^7$ terms).
There are three proposed methods to get around this limitation.
- Hierarchical softmax
- Negative sampling
- Subsample frequent words
I'm using Google's Tensorflow library for the implementation
Step1: For the data, I'm using the text8 dataset which is a 100MB sample of cleaned English Wikipedia dump on Mar. 3, 2006
Step2: Take only the top $c$ words, mark rest as UNK (unknown). | Python Code:
import tensorflow as tf
Explanation: Learning word embeddings - word2vec
- Saurabh Mathur
The aim of this experiment is to use the algorithm developed by Tomas Mikolov et al. to learn high quality vector representations of text.
The skip-gram model
Given,
a sequence of words $ w_1, w_2, \dots, w_T $, predict the words in the surrounding context.
The objective is to maximize average log probability.
$$ AverageLogProbability = \frac{1}{T} \sum_{t=1}^{T} \sum_{-c \leqslant j\leqslant c, j \neq 0} log\ p (w_{t+j} | w_t) $$
where $ c $ is the length of context.
Basic skip-gram model
The basic skip-gram formulation defines $ p (w_{t+j} | w_t) $ in terms of softmax as -
$$ p(w_O \mid w_I) = \frac{ \exp\left( {v'_{w_O}}^{\top} v_{w_I} \right) }{ \sum_{w=1}^{W} \exp\left( {v'_{w}}^{\top} v_{w_I} \right) } $$
where $v_w$ and $v'_w$ are the "input" and "output" vector representations of word $w$.
This is extremely costly and thus impractical, as $W$ is huge (typically $10^5$ to $10^7$ terms).
There are three proposed methods to get around this limitation.
- Hierarchical softmax
- Negative sampling
- Subsample frequent words
I'm using Google's Tensorflow library for the implementation
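To illustrate the third idea in isolation, here is a plain-Python sketch of subsampling frequent words. It assumes the discard rule from Mikolov et al., P(discard w) = 1 - sqrt(t / f(w)), with f(w) the word's relative frequency; the threshold t is deliberately exaggerated for the toy corpus.

```python
import collections
import math
import random

def subsample(words, t=0.05, seed=0):
    # drop each occurrence of w with probability 1 - sqrt(t / f(w));
    # words with relative frequency <= t are never dropped
    rng = random.Random(seed)
    counts = collections.Counter(words)
    total = len(words)
    kept = []
    for w in words:
        p_discard = max(0.0, 1.0 - math.sqrt(t / (counts[w] / total)))
        if rng.random() >= p_discard:
            kept.append(w)
    return kept

corpus = ["the"] * 90 + ["cat"] * 5 + ["sat"] * 5
kept = subsample(corpus)
print(collections.Counter(kept))  # the frequent "the" is heavily thinned
```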
End of explanation
import os, urllib.request
def fetch_data(url):
filename = url.split("/")[-1]
datadir = os.path.join(os.getcwd(), "data")
filepath = os.path.join(datadir, filename)
if not os.path.exists(datadir):
os.makedirs(datadir)
if not os.path.exists(filepath):
urllib.request.urlretrieve(url, filepath)
return filepath
url = "http://mattmahoney.net/dc/text8.zip"
filepath = fetch_data(url)
print ("Data at {0}.".format(filepath))
import os, zipfile
def read_data(filename):
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filepath)
Explanation: For the data, I'm using the text8 dataset which is a 100MB sample of cleaned English Wikipedia dump on Mar. 3, 2006
End of explanation
import collections

def build_dataset(words, vocabulary_size):
    # note: list.extend returns None, so build the count list in two steps
    count = [["UNK", -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    word_to_index = {word: i for i, (word, _) in enumerate(count)}
    data = [word_to_index.get(word, 0) for word in words]  # map unknown words to 0
    count[0][1] = data.count(0)  # number of unknown-word occurrences
    index_to_word = dict(zip(word_to_index.values(), word_to_index.keys()))
    return data, count, word_to_index, index_to_word
vocabulary_size = 50000
data, count, word_to_index, index_to_word = build_dataset(words, vocabulary_size)
print ("data: {0}".format(data[:5]))
print ("count: {0}".format(count[:5]))
print ("word_to_index: {0}".format(list(word_to_index.items())[:5]))
print ("index_to_word: {0}".format(list(index_to_word.items())[:5]))
Explanation: Take only the top $c$ words, mark rest as UNK (unknown).
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sliding Windows
Many applications require computations on sliding windows of streams. A sliding window is specified by a window size and a step size, both of which are positive integers. Sliding windows generate a sequence of lists from a stream. For a stream x, the n-th window is the list x[n*step_size : n*step_size + window_size].
Step1: Sliding Windows with State and Keyword Arguments
You can specify a state using the keyword state and giving the state an initial value. The function can also use keyword arguments as illustrated in the following example in which the keyword is threshold.
This example computes mean_of_this_window which is the mean of the current window, and it computes max_of_window_mean which is the maximum of mean_of_this_window over all windows seen so far. It computes deviation which is the deviation of the mean of the current window from the max seen so far, and it sets the deviation to 0.0 if it is below a threshold.
Step2: map_window_list
Same as map_window except that map_window_list returns a list rather than a single element.
(You can also use map_window and output _multivalue(output_list) instead of calling map_window.)
The next example subtracts the mean of each window from the elements of the window.
Python Code:
import os
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, run
from IoTPy.agent_types.op import map_window
from IoTPy.helper_functions.recent_values import recent_values
def example():
x, y = Stream(), Stream()
map_window(func=sum, in_stream=x, out_stream=y, window_size=2, step_size=3)
x.extend([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Sliding Windows
Many applications require computations on sliding windows of streams. A sliding window is specified by a window size and a step size, both of which are positive integers. Sliding windows generate a sequence of lists from a stream. For a stream x, the n-th window is the list x[n*step_size : n*step_size + window_size]. For example, if the step_size is 4 and the window size is 2, the sequence of windows is x[0:2], x[4:6], x[8:10], ...
map_window applies a specified function to each window of a stream, and the returned value is an element of the output stream:
map_window(func, in_stream, out_stream, window_size, step_size, state=None, name=None, **kwargs)
In the next example, y[n] = sum(x[3*n : 3*n + 2])
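The windowing rule can be mimicked in plain Python, independent of IoTPy (windows are emitted only once they are complete):

```python
def sliding_windows(xs, size, step):
    # window n is xs[n*step : n*step + size], emitted only when complete
    out, n = [], 0
    while n * step + size <= len(xs):
        out.append(xs[n * step : n * step + size])
        n += 1
    return out

xs = list(range(11))
print([sum(w) for w in sliding_windows(xs, size=2, step=3)])  # [1, 7, 13, 19]
```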
End of explanation
import numpy as np
def example():
def deviation_from_max(window, max_of_window_mean, threshold):
# state is max_of_window_mean
mean_of_this_window = np.mean(window)
max_of_window_mean = max(max_of_window_mean, mean_of_this_window)
deviation = max_of_window_mean - mean_of_this_window
if deviation < threshold: deviation = 0.0
return deviation, max_of_window_mean
x, y = Stream(), Stream()
map_window(func=deviation_from_max, in_stream=x, out_stream=y,
window_size=2, step_size=1, state=0, threshold=4)
x.extend([0, 10, 2, 4, 0, 40, 20, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
from IoTPy.agent_types.op import timed_window
def example():
def f(window):
return sum([v[1] for v in window])
x, y = Stream(), Stream()
timed_window(func=f, in_stream=x, out_stream=y, window_duration=4, step_time=4)
x.extend([[1, 100], [3, 250], [5, 400], [5.5, 100], [7, 300],
[11.0, 250], [12.0, 150], [13.0, 200], [17.0, 100]])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Sliding Windows with State and Keyword Arguments
You can specify a state using the keyword state and giving the state an initial value. The function can also use keyword arguments as illustrated in the following example in which the keyword is threshold.
This example computes mean_of_this_window which is the mean of the current window, and it computes max_of_window_mean which is the maximum of mean_of_this_window over all windows seen so far. It computes deviation which is the deviation of the mean of the current window from the max seen so far, and it sets the deviation to 0.0 if it is below a threshold.
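The same stateful computation can be modeled in plain Python (the state is the running maximum of window means; the threshold zeroes out small deviations):

```python
def deviations(xs, size, step, threshold):
    out, max_mean, n = [], 0, 0
    while n * step + size <= len(xs):
        window = xs[n * step : n * step + size]
        m = sum(window) / len(window)
        max_mean = max(max_mean, m)         # update the state
        d = max_mean - m
        out.append(0.0 if d < threshold else d)
        n += 1
    return out

print(deviations([0, 10, 2, 4, 0, 40, 20, 4], size=2, step=1, threshold=4))
```

For these inputs the window means are 5, 6, 3, 2, 20, 30, 12; the deviation 3 falls below the threshold and is zeroed, leaving only the 4 and the 18.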
End of explanation
from IoTPy.agent_types.op import map_window_list
def example():
def f(window):
window_mean = np.mean(window)
return [v-window_mean for v in window]
x, y = Stream(), Stream()
map_window_list(func=f, in_stream=x, out_stream=y,
window_size=3, step_size=3)
x.extend([0, 10, 2, 4, 3, 5, 2, 3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: map_window_list
Same as map_window except that map_window_list returns a list rather than a single element.
(You can also use map_window and output _multivalue(output_list) instead of calling map_window.)
The next example subtracts the mean of each window from the elements of the window.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: A function in PyRTL is nothing special -- it just so happens that the statements
it encapsulate tell PyRTL to build some hardware.
Step2: If we call one_bit_add
above with the arguments x, y, and z it will make a one-bit adder to add
those values together and return the wires for sum and carry_out as applied to x,
y, and z. If I call it again on i, j, and k it will build a new one-bit
adder for those inputs and return the resulting sum and carry_out for that adder.
While PyRTL actually provides an "+" operator for wirevectors which generates
adders, a ripple carry adder is something people can understand easily but has
enough structure to be mildly interesting. Let's define an adder of arbitrary
length recursively and (hopefully) pythonically. More comments after the code.
Step3: The above code breaks down into two cases
Step4: A couple new things in the above code | Python Code:
import pyrtl
pyrtl.reset_working_block()
Explanation: Example 2: A Counter with Ripple Carry Adder.
This next example shows how you make stateful things with registers
and more complex hardware structures with functions. We generate
a 3-bit ripple carry adder building off of the 1-bit adder from
the prior example, and then hook it to a register to count up modulo 8.
End of explanation
def one_bit_add(a, b, carry_in):
assert len(a) == len(b) == 1 # len returns the bitwidth
sum = a ^ b ^ carry_in
carry_out = a & b | a & carry_in | b & carry_in
return sum, carry_out
Explanation: A function in PyRTL is nothing special -- it just so happens that the statements
it encapsulate tell PyRTL to build some hardware.
End of explanation
def ripple_add(a, b, carry_in=0):
a, b = pyrtl.match_bitwidth(a, b)
# this function is a function that allows us to match the bitwidth of multiple
# different wires. By default, it zero extends the shorter bits
if len(a) == 1:
sumbits, carry_out = one_bit_add(a, b, carry_in)
else:
lsbit, ripplecarry = one_bit_add(a[0], b[0], carry_in)
msbits, carry_out = ripple_add(a[1:], b[1:], ripplecarry)
sumbits = pyrtl.concat(msbits, lsbit)
return sumbits, carry_out
Explanation: If we call one_bit_add
above with the arguments x, y, and z it will make a one-bit adder to add
those values together and return the wires for sum and carry_out as applied to x,
y, and z. If I call it again on i, j, and k it will build a new one-bit
adder for those inputs and return the resulting sum and carry_out for that adder.
While PyRTL actually provides an "+" operator for wirevectors which generates
adders, a ripple carry adder is something people can understand easily but has
enough structure to be mildly interesting. Let's define an adder of arbitrary
length recursively and (hopefully) pythonically. More comments after the code.
End of explanation
counter = pyrtl.Register(bitwidth=3, name='counter')
sum, carry_out = ripple_add(counter, pyrtl.Const("1'b1"))
counter.next <<= sum
Explanation: The above code breaks down into two cases:
If the size of the inputs is one-bit just do one_bit_add.
if they are more than one bit, do a one-bit add on the least significant bits, a ripple carry on the rest, and then stick the results back together into one WireVector.
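The recursive decomposition can be checked with a plain-Python software model on bit lists (least significant bit first). This is ordinary arithmetic, not PyRTL hardware, but it exercises exactly the same sum/carry logic:

```python
def ripple_add_bits(a_bits, b_bits, carry_in=0):
    # one-bit add on the least significant bits ...
    a0, b0 = a_bits[0], b_bits[0]
    s = a0 ^ b0 ^ carry_in
    c = (a0 & b0) | (a0 & carry_in) | (b0 & carry_in)
    if len(a_bits) == 1:
        return [s], c
    # ... then ripple the carry through the remaining bits
    msbits, carry_out = ripple_add_bits(a_bits[1:], b_bits[1:], c)
    return [s] + msbits, carry_out

print(ripple_add_bits([1, 1, 0], [1, 0, 1]))  # 3 + 5 -> ([0, 0, 0], 1), i.e. 8
```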
A couple interesting features of PyRTL can be seen here:
WireVectors can be indexed like lists, with [0] accessing the least significant bit and [1:] being an example of the use of Python slicing syntax.
While you can add two lists together in python a WireVector + Wirevector means "make an adder" so to concatenate the bits of two vectors one need to use "concat".
If we look at "carry_in" it seems to have a default value of the integer "0" but is a WireVector at other times. Python supports polymorphism throughout, and PyRTL will cast integers and some other types to WireVectors when it can.
Now let's build a 3-bit counter from our N-bit ripple carry adder.
End of explanation
sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
for cycle in range(15):
sim.step({})
assert sim.value[counter] == cycle % 8
sim_trace.render_trace()
Explanation: A couple new things in the above code:
The two remaining types of basic WireVectors, Const and Register, both appear. Const, unsurprisingly, is just for holding constants (such as the 0 in ripple_add), but here we create one directly from a Verilog-like string which includes both the value and the bitwidth.
Registers are just like wires, except their updates are delayed to the next clock cycle. This is made explicit in the syntax through the property '.next' which should always be set for registers.
In this simple example, we take counter next cycle equal to counter this cycle plus one.
Now let's run the bugger. No need for inputs, it doesn't have any, but let's
throw in an assert to check that it really counts up modulo 8. Finally we'll
print the trace to the screen.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Export epochs to Pandas DataFrame
In this example the pandas exporter will be used to produce a DataFrame
object. After exploring some basic features a split-apply-combine
work flow will be conducted to examine the latencies of the response
maxima across epochs and conditions.
<div class="alert alert-info"><h4>Note</h4><p>Equivalent methods are available for raw and evoked data objects.</p></div>
More information and additional introductory materials can be found at the
pandas doc sites
Step1: Export DataFrame
Step2: Explore Pandas MultiIndex
Step3: Long-format dataframes | Python Code:
# Author: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# These data already have an average EEG ref applied
raw = mne.io.read_raw_fif(raw_fname)
# For simplicity we will only consider the first 10 epochs
events = mne.read_events(event_fname)[:10]
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg='grad', eeg=True, eog=True,
stim=False, exclude='bads')
tmin, tmax = -0.2, 0.5
baseline = (None, 0)
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=True, reject=reject)
Explanation: Export epochs to Pandas DataFrame
In this example the pandas exporter will be used to produce a DataFrame
object. After exploring some basic features a split-apply-combine
work flow will be conducted to examine the latencies of the response
maxima across epochs and conditions.
<div class="alert alert-info"><h4>Note</h4><p>Equivalent methods are available for raw and evoked data objects.</p></div>
More information and additional introductory materials can be found at the
pandas doc sites: http://pandas.pydata.org/pandas-docs/stable/
Short Pandas Primer
Pandas Data Frames
~~~~~~~~~~~~~~~~~~
A data frame can be thought of as a combination of matrix, list and dict:
It knows about linear algebra and element-wise operations but is size mutable
and allows for labeled access to its data. In addition, the pandas data frame
class provides many useful methods for restructuring, reshaping and visualizing
data. As most methods return data frame instances, operations can be chained
with ease; this allows to write efficient one-liners. Technically a DataFrame
can be seen as a high-level container for numpy arrays and hence switching
back and forth between numpy arrays and DataFrames is very easy.
Taken together, these features qualify data frames for inter operation with
databases and for interactive data exploration / analysis.
Additionally, pandas interfaces with the R statistical computing language that
covers a huge amount of statistical functionality.
Export Options
~~~~~~~~~~~~~~
The pandas exporter comes with a few options worth being commented.
Pandas DataFrame objects use a so called hierarchical index. This can be
thought of as an array of unique tuples, in our case, representing the higher
dimensional MEG data in a 2D data table. The column names are the channel names
from the epoch object. The channels can be accessed like entries of a
dictionary::
>>> df['MEG 2333']
Epochs and time slices can be accessed with the .loc method::
>>> epochs_df.loc[(1, 2), 'MEG 2333']
However, it is also possible to include this index as regular categorial data
columns which yields a long table format typically used for repeated measure
designs. To take control of this feature, on export, you can specify which
of the three dimensions 'condition', 'epoch' and 'time' is passed to the Pandas
index using the index parameter. Note that this decision is revertible any
time, as demonstrated below.
Similarly, for convenience, it is possible to scale the times, e.g. from
seconds to milliseconds.
Some Instance Methods
~~~~~~~~~~~~~~~~~~~~~
Most numpy methods and many ufuncs can be found as instance methods, e.g.
mean, median, var, std, mul, max, argmax, etc.
Below an incomplete listing of additional useful data frame instance methods:
apply : apply function to data.
Any kind of custom function can be applied to the data. In combination with
lambda this can be very useful.
describe : quickly generate summary stats
Very useful for exploring data.
groupby : generate subgroups and initialize a 'split-apply-combine' operation.
Creates a group object. Subsequently, methods like apply, agg, or transform
can be used to manipulate the underlying data separately but
simultaneously. Finally, reset_index can be used to combine the results
back into a data frame.
plot : wrapper around plt.plot
However it comes with some special options. For examples see below.
shape : shape attribute
gets the dimensions of the data frame.
values :
return underlying numpy array.
to_records :
export data as numpy record array.
to_dict :
export data as dict of arrays.
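To make the hierarchical index and the split-apply-combine workflow concrete, here is a small self-contained sketch (toy data standing in for the MEG table; the channel name is illustrative):

```python
import numpy as np
import pandas as pd

# Toy table with an (epoch, time) MultiIndex, like the to_data_frame output.
idx = pd.MultiIndex.from_product([[0, 1], [0, 100, 200]],
                                 names=['epoch', 'time'])
df = pd.DataFrame({'MEG 0001': np.arange(6.0)}, index=idx)

# Label-based slicing across both index levels (inclusive on both ends):
print(df.loc[(0, 100):(1, 100), 'MEG 0001'].tolist())  # [1.0, 2.0, 3.0, 4.0]

# Split-apply-combine, then reset_index to get a flat table back:
means = df.groupby('epoch').mean().reset_index()
print(means)
```

Note that this kind of tuple-based `.loc` slicing requires the MultiIndex to be sorted, which `from_product` (and the exporter) guarantees.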
End of explanation
# The following parameters will scale the channels and times plotting
# friendly. The info columns 'epoch' and 'time' will be used as hierarchical
# index whereas the condition is treated as categorial data. Note that
# this is optional. By passing None you could also print out all nesting
# factors in a long table style commonly used for analyzing repeated measure
# designs.
index, scaling_time, scalings = ['epoch', 'time'], 1e3, dict(grad=1e13)
df = epochs.to_data_frame(picks=None, scalings=scalings,
scaling_time=scaling_time, index=index)
# Create MEG channel selector and drop EOG channel.
meg_chs = [c for c in df.columns if 'MEG' in c]
df.pop('EOG 061') # this works just like with a list.
Explanation: Export DataFrame
End of explanation
# Pandas is using a MultiIndex or hierarchical index to handle higher
# dimensionality while at the same time representing data in a flat 2d manner.
print(df.index.names, df.index.levels)
# Inspecting the index object unveils that 'epoch', 'time' are used
# for subsetting data. We can take advantage of that by using the
# .loc attribute, where in this case the first position indexes the MultiIndex
# and the second the columns, that is, channels.
# Plot some channels across the first three epochs
xticks, sel = np.arange(3, 600, 120), meg_chs[:15]
df.loc[:3, sel].plot(xticks=xticks)
mne.viz.tight_layout()
# slice the time starting at t0 in epoch 2 and ending 500ms after
# the base line in epoch 3. Note that the second part of the tuple
# represents time in milliseconds from stimulus onset.
df.loc[(1, 0):(3, 500), sel].plot(xticks=xticks)
mne.viz.tight_layout()
# Note: For convenience the index was converted from floating point values
# to integer values. To restore the original values you can e.g. say
# df['times'] = np.tile(epoch.times, len(epochs_times))
# We now reset the index of the DataFrame to expose some Pandas
# pivoting functionality. To simplify the groupby operation we
# drop the indices to treat epoch and time as categorical factors.
df = df.reset_index()
# The ensuing DataFrame then is split into subsets reflecting a crossing
# between condition and trial number. The idea is that we can broadcast
# operations into each cell simultaneously.
factors = ['condition', 'epoch']
sel = factors + ['MEG 1332', 'MEG 1342']
grouped = df[sel].groupby(factors)
# To make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
# Now we compare the mean of two channels response across conditions.
grouped.mean().plot(kind='bar', stacked=True, title='Mean MEG Response',
color=['steelblue', 'orange'])
mne.viz.tight_layout()
# We can even accomplish more complicated tasks in a few lines calling
# apply method and passing a function. Assume we wanted to know the time
# slice of the maximum response for each condition.
max_latency = grouped[sel[2]].apply(lambda x: df.time[x.idxmax()])
print(max_latency)
plt.figure()
max_latency.plot(kind='barh', title='Latency of Maximum Response',
color=['steelblue'])
mne.viz.tight_layout()
# Finally, we will again remove the index to create a proper data table that
# can be used with statistical packages like statsmodels or R.
final_df = max_latency.reset_index()
final_df.rename(columns={0: sel[2]}) # as the index is oblivious of names.
# The index is now written into regular columns so it can be used as factor.
print(final_df)
plt.show()
# To save as csv file, uncomment the next line.
# final_df.to_csv('my_epochs.csv')
# Note. Data Frames can be easily concatenated, e.g., across subjects.
# E.g. say:
#
# import pandas as pd
# group = pd.concat([df_1, df_2])
# group['subject'] = np.r_[np.ones(len(df_1)), np.ones(len(df_2)) + 1]
Explanation: Explore Pandas MultiIndex
End of explanation
# Many statistical modelling functions expect data in a long format
# where each row is one observation at a unique coordinate of factors
# such as sensors, conditions, subjects etc.
df_long = epochs.to_data_frame(long_format=True)
print(df_long.head())
# Here the MEG or EEG signal appears in the column "observation".
# The total length is therefore the number of channels times the time points.
print(len(df_long), "=", epochs.get_data().size)
# To simplify subsetting and filtering, a channel type column is added.
print(df_long.query("ch_type == 'eeg'").head())
# Note that some of the columns are transformed to "category" data types.
print(df_long.dtypes)
Explanation: Long-format dataframes
End of explanation |
8,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Numpy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of Numpy
Step2: Built-in Methods
There are lots of built-in ways to generate Arrays
arange
Return evenly spaced values within a given interval.
Step3: zeros and ones
Generate arrays of zeros or ones
Step4: linspace
Return evenly spaced numbers over a specified interval.
Step5: eye
Creates an identity matrix
Step6: Random
Numpy also has lots of ways to create random number arrays
Step7: randn
Return a sample (or samples) from the "standard normal" distribution. Unlike rand which is uniform
Step8: randint
Return random integers from low (inclusive) to high (exclusive).
Step9: Array Attributes and Methods
Let's discuss some useful attributes and methods of an array
Step10: Reshape
Returns an array containing the same data with a new shape.
Step11: max,min,argmax,argmin
These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
Step12: Shape
Shape is an attribute that arrays have (not a method)
Step13: dtype
You can also grab the data type of the object in the array | Python Code:
import numpy as np
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
NumPy
NumPy (or Numpy) is a Linear Algebra Library for Python, the reason it is so important for Finance with Python is that almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks. Plus we will use it to generate data for our analysis examples later on!
Numpy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use Arrays instead of lists, check out this great StackOverflow post.
We will only learn the basics of NumPy, to get started we need to install it!
Installation Instructions
NumPy is already included in your environment! You are good to go if you are using pyfinance env!
For those not using the provided environment:
It is highly recommended you install Python using the Anaconda distribution to make sure all underlying dependencies (such as Linear Algebra libraries) all sync up with the use of a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:
conda install numpy
If you do not have Anaconda and can not install it, please refer to Numpy's official documentation on various installation instructions.
Using NumPy
Once you've installed NumPy you can import it as a library:
End of explanation
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
Explanation: Numpy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of Numpy: vectors,arrays,matrices, and number generation. Let's start by discussing arrays.
Numpy Arrays
NumPy arrays are the main way we will use Numpy throughout the course. Numpy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).
Let's begin our introduction by exploring how to create NumPy arrays.
Creating NumPy Arrays
From a Python List
We can create an array by directly converting a list or list of lists:
End of explanation
np.arange(0,10)
np.arange(0,11,2)
Explanation: Built-in Methods
There are lots of built-in ways to generate Arrays
arange
Return evenly spaced values within a given interval.
End of explanation
np.zeros(3)
np.zeros((5,5,5))
np.ones(3)
np.ones((3,3))
Explanation: zeros and ones
Generate arrays of zeros or ones
End of explanation
np.linspace(0,10,10)
np.linspace(0,10,50)
Explanation: linspace
Return evenly spaced numbers over a specified interval.
End of explanation
np.eye(8)
Explanation: eye
Creates an identity matrix
End of explanation
np.random.rand(2)
np.random.rand(5,5)
Explanation: Random
Numpy also has lots of ways to create random number arrays:
rand
Create an array of the given shape and populate it with
random samples from a uniform distribution
over [0, 1).
End of explanation
np.random.randn(2)
np.random.randn(5,5)
Explanation: randn
Return a sample (or samples) from the "standard normal" distribution. Unlike rand which is uniform:
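A quick empirical check of the difference (a sketch; the rounded values below depend on the fixed seed): uniform samples have mean ≈ 0.5 and standard deviation ≈ 1/√12 ≈ 0.29, while standard-normal samples have mean ≈ 0 and standard deviation ≈ 1.

```python
import numpy as np

np.random.seed(42)
u = np.random.rand(100_000)    # uniform on [0, 1)
g = np.random.randn(100_000)   # standard normal

print(round(u.mean(), 2), round(u.std(), 2))  # ~0.5, ~0.29
print(round(g.mean(), 2), round(g.std(), 2))  # ~0.0, ~1.0
```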
End of explanation
np.random.randint(1,100)
np.random.randint(1,100,3)
Explanation: randint
Return random integers from low (inclusive) to high (exclusive).
End of explanation
arr = np.arange(30)
ranarr = np.random.randint(0,50,10)
arr
ranarr
Explanation: Array Attributes and Methods
Let's discuss some useful attributes and methods of an array:
End of explanation
arr.reshape(3,10)
Explanation: Reshape
Returns an array containing the same data with a new shape.
End of explanation
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
Explanation: max,min,argmax,argmin
These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
End of explanation
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,30)
arr.reshape(1,30).shape
arr.reshape(30,1)
arr.reshape(30,1).shape
Explanation: Shape
Shape is an attribute that arrays have (not a method):
End of explanation
arr.dtype
Explanation: dtype
You can also grab the data type of the object in the array:
End of explanation |
8,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.1 - Computing a chi-squared statistic on a contingency table
$\chi_2$ and contingency tables, with numpy, with scipy, or without.
Step1: Formula
The $\chi_2$ test (wikipedia) is used to compare two distributions. It can be applied to a contingency table to compare the observed distribution with the distribution we would observe if the two factors of the table were independent. We write $M=(m_{ij})$ for a matrix of dimension $I \times J$. The $\chi_2$ statistic is computed as follows
Step2: Computation with scipy
Obviously, there is a Python function that computes the statistic $T$
Step3: Computation with numpy
Step4: And since this is a common operation, numpy provides a way to do it without writing a loop, using the sum function
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.1 - Computing a chi-squared statistic on a contingency table
$\chi_2$ and contingency tables, with numpy, with scipy, or without.
End of explanation
import numpy
M = numpy.array([[4, 5, 2, 1],
[6, 3, 1, 7],
[10, 14, 6, 9]])
M
Explanation: Formula
The $\chi_2$ test (wikipedia) is used to compare two distributions. It can be applied to a contingency table to compare the observed distribution with the distribution we would observe if the two factors of the table were independent. We write $M=(m_{ij})$ for a matrix of dimension $I \times J$. The $\chi_2$ statistic is computed as follows:
$N = \sum_{ij} m_{ij}$
$\forall i, \; m_{i \bullet} = \sum_j m_{ij}$
$\forall j, \; m_{\bullet j} = \sum_i m_{ij}$
$\forall i,j \; n_{ij} = \frac{m_{i \bullet} m_{\bullet j}}{N}$
With these notations:
$$T = \sum_{ij} \frac{ (m_{ij} - n_{ij})^2}{n_{ij}}$$
The random variable $T$ asymptotically follows a $\chi_2$ distribution with $(I-1)(J-1)$ degrees of freedom (table). How can it be computed with numpy?
A random table
We take a small table chosen at random, preferably non-square so that computation errors are easier to detect.
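Before reaching for numpy or scipy, the statistic can be computed with nothing but Python loops, following the formulas above (a sketch; `chi2_statistic` is an illustrative name):

```python
def chi2_statistic(table):
    # table: list of rows m_ij; returns (T, degrees of freedom).
    I, J = len(table), len(table[0])
    N = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(I)) for j in range(J)]
    T = 0.0
    for i in range(I):
        for j in range(J):
            n_ij = row_sums[i] * col_sums[j] / N  # expected count under independence
            T += (table[i][j] - n_ij) ** 2 / n_ij
    return T, (I - 1) * (J - 1)

print(chi2_statistic([[4, 5, 2, 1], [6, 3, 1, 7], [10, 14, 6, 9]]))
```

The value should match what scipy's chi2_contingency reports below for the same table.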
End of explanation
from scipy.stats import chi2_contingency
chi2, pvalue, degrees, expected = chi2_contingency(M)
chi2, degrees, pvalue
Explanation: Computation with scipy
Obviously, there is a Python function that computes the statistic $T$: chi2_contingency.
End of explanation
N = M.sum()
ni = numpy.array( [M[i,:].sum() for i in range(M.shape[0])] )
nj = numpy.array( [M[:,j].sum() for j in range(M.shape[1])] )
ni, nj, N
Explanation: Computation with numpy
End of explanation
ni = M.sum(axis=1)
nj = M.sum(axis=0)
ni, nj, N
nij = ni.reshape(M.shape[0], 1) * nj / N
nij
d = (M - nij) ** 2 / nij
d.sum()
Explanation: And since this is a common operation, numpy provides a way to do it without writing a loop, using the sum function:
End of explanation |
8,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This document demonstrates how to use the library to define a "density dependent population process" and to compute its mean-field approximation and refined mean-field approximation
Step1: Example
Step2: Simulation and comparison with ODE
We can easily simulate one sample trajectory
Simulation for various values of $N$
Step3: Comparison with the ODE approximation
We can easily compare simulations with the ODE approximation
Step4: Refined mean-field approximation
(reference to be added)
This class also contains some functions to compute the fixed point of the mean-field approximation, to compute the "refined mean-field approximation" and to compare it with simulations.
If $\pi$ is the fixed point of the ODE, and $V$ the constant calculated by the function "meanFieldExpansionSteadyState(order=1)", then we have
$$E[X^N] = \pi + \frac1N V + o(\frac1N) $$
To compute these constants
Step5: Comparison of theoretical V and simulation
We observe that, for this model, the mean-field approximation is already very close to the simulation.
Step6: The function compare_refinedMF can be used to compare the refined mean-field "x+C/N" to the expectation of $X^N$. Note that the expectation is computed by using forward simulation up to time "time"; the value $E[X^N]$ is then the temporal average of $X^N(t)$ from t="time/2" to "time". (Hence, "time" should be manualy chosen so as to minimize the variance).
Step7: Transient analysis
Here we do not compare with simulation but just show how the vectors V(t) and A(t) can be computed.
We observe that for this model the $O(1/N)$ and $O(1/N^2)$ expansions are very close. | Python Code:
# To load the library
import rmftool as rmf
import importlib
importlib.reload(rmf)
# To plot the results
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
%matplotlib notebook
Explanation: This document demonstrates how to use the library to define a "density dependent population process" and to compute its mean-field approximation and refined mean-field approximation
End of explanation
# This code creates an object that represents a "density dependent population process"
ddpp = rmf.DDPP()
# We then add the three transitions :
ddpp.add_transition([-1,1,0],lambda x:x[0]+2*x[0]*x[1])
ddpp.add_transition([0,-1,+1],lambda x:x[1])
ddpp.add_transition([1,0,-1],lambda x:3*x[2])
Explanation: Example : basic SIR model
Suppose that we want to simulate a model composed of $N$ agents where each agent has 3 possible states: $S$ (susceptible), $I$ (infected) and $R$ (recovered). If we denote by $x_S$, $x_I$ and $x_R$ the proportions of agents in each state and assume that:
* A susceptible agent becomes infected at rate $x_S + 2x_Sx_I$
* An infected agent becomes recovered at rate $x_I$
* A recovered agent becomes susceptible at rate $3*x_R$
In terms of population process, we get the following transitions:
* $x \mapsto x+\frac1N(-1,1,0)$ at rate $x_0+2x_0x_1$
* $x \mapsto x+\frac1N(0,-1,1)$ at rate $x_1$
* $x \mapsto x+\frac1N(1,0,-1)$ at rate $3x_2$
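Such a process can also be simulated without the library using Gillespie's algorithm: each transition $x \mapsto x + d/N$ fires at rate $N\beta(x)$, waiting times are exponential, and the next jump is drawn proportionally to the rates. A minimal, self-contained sketch (illustrative names, fixed seed; not necessarily the library's internal scheme):

```python
import random

def simulate_sir(N, t_end, x0=(0.3, 0.2, 0.5), seed=0):
    # Gillespie simulation of the density-dependent SIR process above.
    rng = random.Random(seed)
    transitions = [
        ((-1, 1, 0), lambda x: x[0] + 2 * x[0] * x[1]),
        ((0, -1, 1), lambda x: x[1]),
        ((1, 0, -1), lambda x: 3 * x[2]),
    ]
    x, t = list(x0), 0.0
    while t < t_end:
        rates = [N * beta(x) for _, beta in transitions]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)      # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for (d, _), r in zip(transitions, rates):
            acc += r
            if u <= acc:                 # pick a transition proportionally to its rate
                x = [xi + di / N for xi, di in zip(x, d)]
                break
    return x

print(simulate_sir(N=1000, t_end=10.0))
```

Since every transition vector sums to zero, the total mass $x_S + x_I + x_R$ is conserved along the trajectory.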
End of explanation
ddpp.set_initial_state([.3,.2,.5]) # We first need to define an initial stater
T,X = ddpp.simulate(100,time=10) # We first plot a trajectory for $N=100$
plt.plot(T,X)
T,X = ddpp.simulate(1000,time=10) # Then for $N=1000$
plt.plot(T,X,'--')
Explanation: Simulation and comparison with ODE
We can easily simulate one sample trajectory
Simulation for various values of $N$
End of explanation
plt.figure()
ddpp.plot_ODE_vs_simulation(N=100)
plt.figure()
ddpp.plot_ODE_vs_simulation(N=1000)
Explanation: Comparison with the ODE approximation
We can easily compare simulations with the ODE approximation
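For reference, the mean-field ODE being compared against is $\dot x = \sum_\ell d_\ell\, \beta_\ell(x)$, i.e. the drift obtained by summing each transition vector weighted by its rate. A minimal Euler integration of that ODE for this model (a sketch, not the library's solver):

```python
def mean_field_ode(x0=(0.3, 0.2, 0.5), t_end=10.0, dt=1e-3):
    # Euler integration of dx/dt = sum_l d_l * beta_l(x) for the model above.
    transitions = [
        ((-1, 1, 0), lambda x: x[0] + 2 * x[0] * x[1]),
        ((0, -1, 1), lambda x: x[1]),
        ((1, 0, -1), lambda x: 3 * x[2]),
    ]
    x = list(x0)
    for _ in range(int(t_end / dt)):
        drift = [sum(d[i] * beta(x) for d, beta in transitions)
                 for i in range(len(x))]
        x = [xi + dt * di for xi, di in zip(x, drift)]
    return x

print(mean_field_ode())
```

As in the stochastic model, the drift components sum to zero, so the total mass stays equal to 1.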
End of explanation
%time pi,V,W = ddpp.meanFieldExpansionSteadyState(order=1)
print(pi,V)
Explanation: Refined mean-field approximation
(reference to be added)
This class also contains some functions to compute the fixed point of the mean-field approximation, to compute the "refined mean-field approximation" and to compare it with simulations.
If $\pi$ is the fixed point of the ODE, and $V$ the constant calculated by the function "meanFieldExpansionSteadyState(order=1)", then we have
$$E[X^N] = \pi + \frac1N V + o(\frac1N) $$
To compute these constants :
End of explanation
print(pi,'(mean-field)')
for N in [10,50,100]:
Xs,Vs = ddpp.steady_state_simulation(N=N,time=100000/N)
print(Xs,'(Simulation, N={})'.format(N))
print('+/-',Vs)
print(pi+V/N,'(refined mean-field, N={})'.format(N))
Explanation: Comparison of theoretical V and simulation
We observe that, for this model, the mean-field approximation is already very close to the simulation.
End of explanation
Xm,Xrmf,Xs,Vs = ddpp.compare_refinedMF(N=10,time=10000)
print(Xm, 'mean-field')
print(Xrmf,'Refined mean-field')
print(Xs, 'Simulation')
print(Vs,'Confidence inverval of simulation (rough estimation)')
Explanation: The function compare_refinedMF can be used to compare the refined mean-field "x+C/N" to the expectation of $X^N$. Note that the expectation is computed by using forward simulation up to time "time"; the value $E[X^N]$ is then the temporal average of $X^N(t)$ from t="time/2" to "time". (Hence, "time" should be manualy chosen so as to minimize the variance).
End of explanation
n=3
T,X,V,A,XVWABCD=ddpp.meanFieldExpansionTransient(order=2,time=10)
N=10
plt.figure()
plt.plot(T,X,'-')
plt.plot(T,X+V/N,'--')
plt.plot(T,X+V/N+A/N**2,':')
plt.legend(['Mean field approx.','','','$O(1/N)$-expansion','','','$O(1/N^2)$-expansion'])
Explanation: Transient analysis
Here we do not compare with simulation but just show how the vectors V(t) and A(t) can be computed.
We observe that for this model the $O(1/N)$ and $O(1/N^2)$ expansions are very close.
End of explanation |
8,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trajectory equations
Step1: The equation of motion
Step2: For the case of a uniform magnetic field
along the $z$-axis
Step3: Assuming $E_z = 0$ and $E_y = 0$
Step4: Motion is uniform along the $z$-axis
Step5: The constants of integration can be found from the initial conditions $z(0) = 0$ and $v_z(0) = v$
Step6: So that
Step7: Now, the equation for $y$ can be integrated
Step8: For initial conditions $x(0) = x_0, y'(0) = 0$
Step9: This equation can be substituted into the equation for $x$-coorditante
Step10: An expression for $E_x$ can be taken from the example on ribbon beam in free space $E_x = \dfrac{ 2 \pi I_0 }{v}$
Step11: This is an oscillator-type equation
$$
x'' + a x + b = 0
$$
with $a$ and $b$ given by
Step12: It's solution is given by
Step13: From initial conditions $x(0) = x_0, v_0 = 0$
Step14: So that
Step15: Taking into account that
$$ \sqrt{|a|} = \omega_g = \frac{ q B }{mc } $$
where $\omega_g$ is the gyrofrequency, and since
Step16: It is possible to rewrite the solution as
Step17: From the laws of motion for $x(t)$ and $z(t)$
Step18: it is possible to obtain a trajectory equation | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sympy import *
init_printing()
Ex, Ey, Ez = symbols("E_x, E_y, E_z")
Bx, By, Bz, B = symbols("B_x, B_y, B_z, B")
x, y, z = symbols("x, y, z")
vx, vy, vz, v = symbols("v_x, v_y, v_z, v")
t = symbols("t")
q, m = symbols("q, m")
c, eps0 = symbols("c, epsilon_0")
Explanation: Trajectory equations:
End of explanation
eq_x = Eq( diff(x(t), t, 2), q / m * Ex + q / c / m * (vy * Bz - vz * By) )
eq_y = Eq( diff(y(t), t, 2), q / m * Ey + q / c / m * (-vx * Bz + vz * Bx) )
eq_z = Eq( diff(z(t), t, 2), q / m * Ez + q / c / m * (vx * By - vy * Bx) )
display( eq_x, eq_y, eq_z )
Explanation: The equation of motion:
$$
\begin{gather}
m \frac{d^2 \vec{r} }{dt^2} =
q \vec{E} + \frac{q}{c} [ \vec{v} \vec{B} ]
\end{gather}
$$
In Cartesian coordinates:
End of explanation
uni_mgn_subs = [ (Bx, 0), (By, 0), (Bz, B) ]
eq_x = eq_x.subs(uni_mgn_subs)
eq_y = eq_y.subs(uni_mgn_subs)
eq_z = eq_z.subs(uni_mgn_subs)
display( eq_x, eq_y, eq_z )
Explanation: For the case of a uniform magnetic field
along the $z$-axis:
$$ \vec{B} = B_z = B, \quad B_x = 0, \quad B_y = 0 $$
End of explanation
zero_EyEz_subs = [ (Ey, 0), (Ez, 0) ]
eq_x = eq_x.subs(zero_EyEz_subs)
eq_y = eq_y.subs(zero_EyEz_subs)
eq_z = eq_z.subs(zero_EyEz_subs)
display( eq_x, eq_y, eq_z )
Explanation: Assuming $E_z = 0$ and $E_y = 0$:
End of explanation
z_eq = dsolve( eq_z, z(t) )
vz_eq = Eq( z_eq.lhs.diff(t), z_eq.rhs.diff(t) )
display( z_eq, vz_eq )
Explanation: Motion is uniform along the $z$-axis:
End of explanation
z_0 = 0
v_0 = v
c1_c2_system = []
initial_cond_subs = [(t, 0), (z(0), z_0), (diff(z(t),t).subs(t,0), v_0) ]
c1_c2_system.append( z_eq.subs( initial_cond_subs ) )
c1_c2_system.append( vz_eq.subs( initial_cond_subs ) )
c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
Explanation: The constants of integration can be found from the initial conditions $z(0) = 0$ and $v_z(0) = v$:
End of explanation
z_sol = z_eq.subs( c1_c2 )
vz_sol = vz_eq.subs( c1_c2 )
display( z_sol, vz_sol )
Explanation: So that
End of explanation
v_as_diff = [ (vx, diff(x(t),t)), (vy, diff(y(t),t)), (vz, diff(z_sol.lhs,t)) ]
eq_y = eq_y.subs( v_as_diff )
eq_y = Eq( integrate( eq_y.lhs, (t, 0, t) ), integrate( eq_y.rhs, (t, 0, t) ) )
eq_y
Explanation: Now, the equation for $y$ can be integrated:
End of explanation
x_0 = Symbol('x_0')
vy_0 = 0
initial_cond_subs = [(x(0), x_0), (diff(y(t),t).subs(t,0), vy_0) ]
vy_sol = eq_y.subs( initial_cond_subs )
vy_sol
Explanation: For initial conditions $x(0) = x_0, y'(0) = 0$:
End of explanation
eq_x = eq_x.subs( vy, vy_sol.rhs )
eq_x = Eq( eq_x.lhs, collect( expand( eq_x.rhs ), B *q / c / m ) )
eq_x
Explanation: This equation can be substituted into the equation for $x$-coorditante:
End of explanation
I0 = symbols('I_0')
Ex_subs = [ (Ex, 2 * pi * I0 / v) ]
eq_x = eq_x.subs( Ex_subs )
eq_x
Explanation: An expression for $E_x$ can be taken from the example on ribbon beam in free space $E_x = \dfrac{ 2 \pi I_0 }{v}$:
End of explanation
a, b = symbols('a, b')
eq_a = Eq(a, eq_x.rhs.expand().coeff(x(t), 1))
eq_b = Eq( b, eq_x.rhs.expand().coeff(x(t), 0) )
display( eq_a , eq_b )
Explanation: This is an oscillator-type equation
$$
x'' + a x + b = 0
$$
with $a$ and $b$ given by
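As a numerical sanity check (plain Python, illustrative constants), the closed form $x(t) = (x_0 - b/|a|)\cos(\sqrt{|a|}\,t) + b/|a|$ for initial conditions $x(0)=x_0$, $x'(0)=0$ can be verified against the ODE with a finite-difference second derivative:

```python
import math

def x_exact(t, x0=2.0, a_abs=3.0, b=1.5):
    # Closed-form solution of x'' = -|a| x + b with x(0) = x0, x'(0) = 0.
    return (x0 - b / a_abs) * math.cos(math.sqrt(a_abs) * t) + b / a_abs

def ode_residual(t, h=1e-4):
    # Central-difference estimate of x'' + |a| x - b; should be ~0.
    a_abs, b = 3.0, 1.5
    xm, xc, xp = x_exact(t - h), x_exact(t), x_exact(t + h)
    xpp = (xp - 2 * xc + xm) / h**2
    return xpp + a_abs * xc - b

print(max(abs(ode_residual(0.1 * k)) for k in range(50)))  # tiny: O(h^2) plus rounding
```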
End of explanation
a, b, c = symbols("a, b, c")
osc_eqn = Eq( diff(x(t),t,2), - abs(a)*x(t) + b)
display( osc_eqn )
osc_eqn_sol = dsolve( osc_eqn )
osc_eqn_sol
Explanation: It's solution is given by:
End of explanation
x_0 = symbols( 'x_0' )
v_0 = 0
c1_c2_system = []
initial_cond_subs = [(t, 0), (x(0), x_0), (diff(x(t),t).subs(t,0), v_0) ]
c1_c2_system.append( osc_eqn_sol.subs( initial_cond_subs ) )
osc_eqn_sol_diff = Eq( osc_eqn_sol.lhs.diff(t), osc_eqn_sol.rhs.diff(t) )
c1_c2_system.append( osc_eqn_sol_diff.subs( initial_cond_subs ) )
c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
Explanation: From initial conditions $x(0) = x_0, v_0 = 0$:
End of explanation
x_sol = osc_eqn_sol.subs( c1_c2 )
x_sol
Explanation: So that
End of explanation
b_over_a = simplify( eq_b.rhs / abs( eq_a.rhs ).subs( abs( eq_a.rhs ), -eq_a.rhs ) )
Eq( b/abs(a), b_over_a )
Explanation: Taking into account that
$$ \sqrt{|a|} = \omega_g = \frac{ q B }{mc } $$
where $\omega_g$ is the gyrofrequency, and since
End of explanation
omega_g = symbols('omega_g')
eq_omega_g = Eq( omega_g, q * B / m / c )
A = symbols('A')
eq_A = Eq( A, b_over_a - x_0 )
subs_list = [ (b/abs(a), b_over_a), ( sqrt( abs(a) ), omega_g ), ( eq_A.rhs, eq_A.lhs) ]
x_sol = x_sol.subs( subs_list )
display( x_sol, eq_A, eq_omega_g )
Explanation: It is possible to rewrite the solution as
End of explanation
display( x_sol, z_sol )
Explanation: From the laws of motion for $x(t)$ and $z(t)$
End of explanation
t_from_z = solve( z_sol.subs(z(t),z), t )[0]
x_z_traj = Eq( x_sol.lhs.subs( t, z ), x_sol.rhs.subs( [(t, t_from_z)] ) )
display( x_z_traj, eq_A, eq_omega_g )
Explanation: it is possible to obtain a trajectory equation:
End of explanation |
8,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Dimensionality Reduction
Step1: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset
Step2: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution
Step3: To see what these numbers mean, let's view them as vectors plotted on top of the data
Step4: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance
Step5: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression
Step6: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works
Step7: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector
Step8: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum
Step9: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components
Step10: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used as a sort of data compression. Using a small n_components allows you to represent a high-dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components
Step11: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Dimensionality Reduction: Principal Component Analysis in-depth
Here we'll explore Principal Component Analysis, which is an extremely useful linear dimensionality reduction technique.
We'll start with our standard set of initial imports:
End of explanation
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
Explanation: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:
End of explanation
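What `fit` just computed can be reproduced from scratch — a NumPy-only sketch (illustrative, not scikit-learn's implementation; the signs of the recovered axes may differ):

```python
import numpy as np

rng = np.random.RandomState(1)
# correlated 2-D data in the same (n_samples, n_features) layout as X above
X_demo = np.dot(rng.random_sample((2, 2)), rng.normal(size=(2, 200))).T

Xc = X_demo - X_demo.mean(axis=0)            # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

components = Vt                               # rows are the principal axes
explained_variance = S ** 2 / (len(Xc) - 1)   # variance captured along each axis

print(explained_variance)
print(components)
```

The principal axes fall out of the SVD of the centered data, and sorting by singular value gives exactly the ordering seen in `explained_variance_` above.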
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
Explanation: To see what these numbers mean, let's view them as vectors plotted on top of the data:
End of explanation
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
Explanation: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance:
End of explanation
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
Explanation: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
print(X[0][:8])
print(X[0][8:16])
print(X[0][16:24])
print(X[0][24:32])
print(X[0][32:40])
print(X[0][40:48])
pca = PCA(2) # project from 64 to 2 dimensions
Xproj = pca.fit_transform(X)
print(X.shape)
print(Xproj.shape)
(1797*2)/(1797*64)
plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
Explanation: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data.
Application of PCA to Digits
The dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before:
End of explanation
from fig_code.figures import plot_image_components
with plt.style.context('seaborn-white'):
plot_image_components(digits.data[0])
Explanation: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector: in the case of the digits, our data is
$$
x = [x_1, x_2, x_3 \cdots]
$$
but what this really means is
$$
image(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots
$$
If we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image:
End of explanation
from fig_code.figures import plot_pca_interactive
plot_pca_interactive(digits.data)
Explanation: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum:
End of explanation
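The "mean plus coefficients times basis vectors" series above can be demonstrated numerically. A sketch with a random orthonormal basis built via QR (any orthonormal basis behaves this way; PCA's trick is choosing one whose first few terms already do most of the work):

```python
import numpy as np

rng = np.random.RandomState(0)
dim = 16
basis, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # columns form an orthonormal basis
v = rng.normal(size=dim)                              # stand-in for an image vector

coeffs = basis.T @ v                                  # coefficients of the series
errors = []
for k in range(1, dim + 1):
    approx = basis[:, :k] @ coeffs[:k]                # partial sum with k terms
    errors.append(np.linalg.norm(v - approx))

print(errors[0], errors[-1])  # the residual shrinks toward 0 as terms are added
```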
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
Explanation: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components:
End of explanation
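Rather than eyeballing the curve, the component count for a given variance target can be read off programmatically — a sketch using synthetic ratios (in practice the array would be `pca.explained_variance_ratio_`):

```python
import numpy as np

def components_for_variance(explained_variance_ratio, threshold):
    """Smallest number of leading components whose cumulative ratio reaches threshold."""
    cumulative = np.cumsum(explained_variance_ratio)
    return int(np.searchsorted(cumulative, threshold) + 1)

ratios = np.array([0.5, 0.25, 0.125, 0.0625, 0.0625])  # illustrative spectrum
print(components_for_variance(ratios, 0.90))  # -> 4
print(components_for_variance(ratios, 0.75))  # -> 2
```

This is the same logic the earlier `PCA(0.95)` call relied on: passing a float in (0, 1) makes scikit-learn pick the component count that retains that fraction of the variance.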
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
pca = PCA(i + 1).fit(X)
im = pca.inverse_transform(pca.transform(X[25:26]))
ax.imshow(im.reshape((8, 8)), cmap='binary')
ax.text(0.95, 0.05, 'n = {0}'.format(i + 1), ha='right',
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
Explanation: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used as a sort of data compression. Using a small n_components allows you to represent a high-dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components:
End of explanation
from ipywidgets import interact
def plot_digits(n_components):
fig = plt.figure(figsize=(8, 8))
plt.subplot(1, 1, 1, frameon=False, xticks=[], yticks=[])
nside = 10
pca = PCA(n_components).fit(X)
Xproj = pca.inverse_transform(pca.transform(X[:nside ** 2]))
Xproj = np.reshape(Xproj, (nside, nside, 8, 8))
total_var = pca.explained_variance_ratio_.sum()
im = np.vstack([np.hstack([Xproj[i, j] for j in range(nside)])
for i in range(nside)])
plt.imshow(im)
plt.grid(False)
plt.title("n = {0}, variance = {1:.2f}".format(n_components, total_var),
size=18)
plt.clim(0, 16)
# plot_digits takes only n_components; an extra nside kwarg would raise a TypeError
interact(plot_digits, n_components=[1, 15, 20, 25, 32, 40, 64]);
Explanation: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
If I plan to run the scorer every batch to select loans, I should have a minimum score that a loan must receive to even be considered for investment, and the remaining loans can then be selected in descending score order
Step1: Make sure no loan in test set was in train set
Step2: Add scores and npv_roi_5 to test set
Step3: See what the range of predictions is, to tell if we predict outliers later
Step4: find what is a good percentile to cutoff at, and what the distribution for scores is at that percentile
Step5: Say I wanted the 75pctile of the 80th percentile (-0.36289), what grade distribution of loans are those?
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook  # used for the progress bar in the evaluation loop below
import modeling_utils.data_prep as data_prep
from sklearn.externals import joblib
import time
platform = 'lendingclub'
store = pd.HDFStore(
'/Users/justinhsi/justin_tinkering/data_science/lendingclub/{0}_store.h5'.
format(platform),
append=True)
Explanation: If I plan to run the scorer every batch to select loans, I should have a minimum score that a loan must receive to even be considered for investment, and the remaining loans can then be selected in descending score order
End of explanation
store.open()
train = store['train_filtered_columns']
test = store['test_filtered_columns']
loan_npv_rois = store['loan_npv_rois']
default_series = test['target_strict']
results = store['results']
store.close()
train_ids = set(train.index.values)
test_ids = set(test.index.values)
assert len(train_ids.intersection(test_ids)) == 0
Explanation: Make sure no loan in test set was in train set
End of explanation
train_X, train_y = data_prep.process_data_test(train)
test_X, test_y = data_prep.process_data_test(test)
train_y = train_y['npv_roi_10'].values
test_y = test_y['npv_roi_10'].values
regr = joblib.load('model_dump/model_0.2.1.pkl')
regr_version = '0.2.1'
train_yhat = regr.predict(train_X)
test_yhat = regr.predict(test_X)
test['0.2.1_scores'] = test_yhat
train['0.2.1_scores'] = train_yhat
test['npv_roi_5'] = loan_npv_rois[.05]
Explanation: Add scores and npv_roi_5 to test set
End of explanation
test['0.2.1_scores'].hist(bins=100)
train['0.2.1_scores'].hist(bins=100)
Explanation: See what the range of predictions is, to tell if we predict outliers later
End of explanation
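The range check motivating these histograms can be made explicit — a small pure-Python sketch (names and score values are illustrative) for flagging a new loan whose score falls outside anything seen in training:

```python
def is_outlier_score(score, train_scores, margin=0.0):
    """True if `score` falls outside the observed training range (plus an optional margin)."""
    lo, hi = min(train_scores), max(train_scores)
    return score < lo - margin or score > hi + margin

train_scores = [-0.9, -0.5, -0.2, 0.1]        # illustrative values
print(is_outlier_score(0.4, train_scores))    # -> True
print(is_outlier_score(-0.3, train_scores))   # -> False
```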
good_percentiles = np.arange(71,101,1)
good_percentiles = good_percentiles[::-1]
def find_min_score_models(trials, available_loans, test, percentiles):
# looks at loans that scored in top 30%, computes avg npv_roi_5 in each of those
# top percentiles
results = {}
results_scores = {}
pct_default = {}
test_copy = test.copy()
for trial in tqdm_notebook(np.arange(trials)):
loan_ids = np.random.choice(
test_copy.index.values, available_loans, replace=False)
loans_to_pick_from = test_copy.loc[loan_ids, :]
loans_to_pick_from.sort_values('0.2.1_scores', ascending=False, inplace = True)
chunksize = int(len(loans_to_pick_from)/100)
results_dict = {}
results_scores_dict = {}
for k,perc in enumerate(percentiles):
subset = loans_to_pick_from[k*chunksize:(k+1)*chunksize]
results_dict[perc] = subset['npv_roi_5'].mean()
results_scores_dict[perc] = subset['0.2.1_scores'].mean()
results[trial] = pd.Series(results_dict)
results_scores[trial] = pd.Series(results_scores_dict)
return pd.DataFrame.from_dict(results).T, pd.DataFrame.from_dict(results_scores).T
# assume there's 200 loans per batch
trials = 20000
available_loans = 200
results, results_scores = find_min_score_models(trials, available_loans, test, good_percentiles)
summaries = results.describe()
summaries_scores = results_scores.describe()
plt.figure(figsize=(12,9))
plt.plot(summaries.columns.values, summaries.loc['mean',:], 'o', label='mean')
plt.plot(summaries.columns.values, summaries.loc['25%',:], 'ro', label='25%')
# plt.plot(summaries.columns.values, summaries.loc['50%',:], '-.')
plt.plot(summaries.columns.values, summaries.loc['75%',:], 'ko', label='75%')
plt.title('return per percentile over batches')
plt.legend(loc='best')
plt.xlabel('percentile of 0.2.1_score')
plt.ylabel('npv_roi_5')
plt.show()
plt.figure(figsize=(12,9))
plt.plot(summaries_scores.columns.values, summaries_scores.loc['mean',:], 'o', label='mean')
plt.plot(summaries_scores.columns.values, summaries_scores.loc['25%',:], 'ro', label='25%')
# plt.plot(summaries_scores.columns.values, summaries_scores.loc['50%',:], '-.')
plt.plot(summaries_scores.columns.values, summaries_scores.loc['75%',:], 'ko', label='75%')
plt.title('scores per percentile over batches')
plt.legend(loc='best')
plt.xlabel('percentile of 0.2.1_score')
plt.ylabel('npv_roi_5')
plt.show()
summaries
summaries_scores.loc['mean', 75]
# let's take a one-sided confidence interval: require the score to be greater than mean - 3 std dev at the 90th percentile
cutoff = summaries_scores.loc['mean', 90] - 3*summaries_scores.loc['std', 90]
Explanation: find what is a good percentile to cutoff at, and what the distribution for scores is at that percentile
End of explanation
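With a cutoff in hand, the policy from the top of the notebook — gate on a minimum score, then invest in descending score order — is a few lines. A pure-Python sketch (loan IDs, scores, and the cutoff value are illustrative):

```python
def select_loans(scored_loans, cutoff, max_picks):
    """scored_loans: iterable of (loan_id, score) pairs from one batch."""
    eligible = [loan for loan in scored_loans if loan[1] >= cutoff]
    eligible.sort(key=lambda loan: loan[1], reverse=True)  # best scores first
    return eligible[:max_picks]

batch = [("a", -0.40), ("b", -0.20), ("c", -0.35), ("d", -0.55)]
print(select_loans(batch, cutoff=-0.36, max_picks=2))  # -> [('b', -0.2), ('c', -0.35)]
```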
picks = test[test['0.2.1_scores'] >= cutoff]
# grade distribution of picks
picks['grade'].value_counts(dropna=False)/len(picks)
# compared to grade distribution of all test loans
test['grade'].value_counts(dropna=False)/len(test)
cutoff
Explanation: Say I wanted the 75pctile of the 80th percentile (-0.36289), what grade distribution of loans are those?
End of explanation
8,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
Step1: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
Step2: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
Python Code:
from nustar_pysolar import planning, io  # assumed source of the io/planning helpers used below
fname = io.download_occultation_times(outdir='../data/')
print(fname)
Explanation: Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
End of explanation
tlefile = io.download_tle(outdir='../data')
print(tlefile)
times, line1, line2 = io.read_tle_file(tlefile)
Explanation: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
End of explanation
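One thing the archive gets used for later is picking the TLE whose epoch is closest to a given observation time. Since the epochs come back in time order, a binary search suffices — a sketch with synthetic datetimes (stand-ins for whatever `read_tle_file` actually returns):

```python
import bisect
from datetime import datetime

def nearest_tle_index(epochs, when):
    """Index of the epoch closest to `when`; `epochs` must be sorted ascending."""
    i = bisect.bisect_left(epochs, when)
    if i == 0:
        return 0
    if i == len(epochs):
        return len(epochs) - 1
    before, after = epochs[i - 1], epochs[i]
    return i if (after - when) < (when - before) else i - 1

epochs = [datetime(2021, 7, d) for d in (1, 10, 20, 30)]
print(nearest_tle_index(epochs, datetime(2021, 7, 18)))  # -> 2
```

Indexing `line1`/`line2` with the returned position would give the element pair closest to the observation epoch.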
tstart = '2021-07-20T00:00:00'
tend = '2021-07-20T15:00:00'
orbits = planning.sunlight_periods(fname, tstart, tend)
orbits
# Get the solar parameter
from sunpy.coordinates import sun
import astropy.units as u  # needed for the u.arcsec / u.deg quantities below
angular_size = sun.angular_radius(t='now')
dx = angular_size.arcsec
print(dx)
sun_pa = planning.get_nustar_roll(tstart, 0.)
pa = planning.get_nustar_roll(tstart, 0*u.deg)
print(tstart)
print("NuSTAR Roll angle for Det0 in NE quadrant: {}".format(pa))
# Orbit 1 (Eastern limb)
offset = [-1050, -350.]*u.arcsec
for ind, orbit in enumerate(orbits):
midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])
sky_pos = planning.get_skyfield_position(midTime, offset, load_path='./data', parallax_correction=True)
print("Orbit: {}".format(ind))
print(f"Solar offset: {offset}")
print("Orbit start: {} Orbit end: {}".format(orbit[0].iso, orbit[1].iso))
print(f'Aim time: {midTime.iso} RA (deg): {sky_pos[0]:8.4f} Dec (deg): {sky_pos[1]:8.4f}')
print("")
from astropy.coordinates import SkyCoord
test1 = SkyCoord(289.3792274160115, -22.304595055979675, unit = 'deg')
orb1 = SkyCoord(289.3855, -22.3051, unit = 'deg')
orb1.separation(test1)
orbit
import sunpy
sunpy.__version__
test1 = SkyCoord(289.898451566591, -22.158432904027155 , unit = 'deg')
orb1 = SkyCoord(289.9047, -22.1589, unit = 'deg')
orb1.separation(test1)
sun_pa = planning.get_nustar_roll(tstart, 0.)
pa = planning.get_nustar_roll(tstart, 45*u.deg)
offset = [0, 0.]*u.arcsec
ind = 1
orbit = orbits[0]
midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])
sun_pos = planning.get_skyfield_position(midTime, offset, load_path='./data', parallax_correction=True)
# Orbit 1 (AR)
offset = [900, -300.]*u.arcsec
ind = 1
orbit = orbits[0]
midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])
sky_pos = planning.get_skyfield_position(midTime, offset, load_path='./data', parallax_correction=True)
planning.make_test_region(sky_pos[0], sky_pos[1], pa, sun_pos[0], sun_pos[1], sun_pa)
print(pa)
Explanation: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training on an Advanced Standard CNN Architecture
https
Step1: Preparation
Step2: Uncomment the next three cells if you want to train on the augmented image set
Otherwise overfitting cannot be avoided because the image set is simply too small
Step3: Split test and train data 80% to 20%
Step4: Training Xception
Slightly optimized version of Inception
Step5: This is a truly complex model
Batch size needs to be small, otherwise the model does not fit in memory
Will take long to train, even on GPU
on augmented dataset 4 minutes on K80 per Epoch
Step6: Each Epoch takes very long
Extremely impressive how fast it converges
Step7: Alternative
Step8: Results are somewhat worse
Maybe it needs to train longer?
Batches can be larger, so training is faster even though it runs more epochs
Metrics for Augmented Data
Accuracy
Validation Accuracy | Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
import pandas as pd
print(pd.__version__)
assert StrictVersion(pd.__version__) >= StrictVersion('0.20.0')
Explanation: Training on an Advanced Standard CNN Architecture
https://keras.io/applications/
The 9 Deep Learning Papers You Need To Know About: https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html
Neural Network Architectures
top-1 rating on ImageNet: https://stats.stackexchange.com/questions/156471/imagenet-what-is-top-1-and-top-5-error-rate
End of explanation
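Since top-1/top-5 come up whenever these architectures are compared, here is what the two metrics mean in code — a NumPy sketch over toy predicted class probabilities:

```python
import numpy as np

def top_k_accuracy(probs, labels, k):
    """Fraction of rows whose true label is among the k highest-probability classes."""
    topk = np.argsort(probs, axis=1)[:, -k:]          # k best classes per sample
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))

probs = np.array([[0.6, 0.3, 0.1],    # truth 0 -> top-1 hit
                  [0.2, 0.5, 0.3],    # truth 2 -> top-1 miss, top-2 hit
                  [0.1, 0.2, 0.7]])   # truth 1 -> top-1 miss, top-2 hit
labels = np.array([0, 2, 1])
print(top_k_accuracy(probs, labels, 1))  # -> 0.3333...
print(top_k_accuracy(probs, labels, 2))  # -> 1.0
```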
# for VGG, ResNet, and MobileNet
INPUT_SHAPE = (224, 224)
# for InceptionV3, InceptionResNetV2, Xception
# INPUT_SHAPE = (299, 299)
import os
import skimage.data
import skimage.transform
from keras.utils.np_utils import to_categorical
import numpy as np
def load_data(data_dir, type=".ppm"):
num_categories = 6
# Get all subdirectories of data_dir. Each represents a label.
directories = [d for d in os.listdir(data_dir)
if os.path.isdir(os.path.join(data_dir, d))]
# Loop through the label directories and collect the data in
# two lists, labels and images.
labels = []
images = []
for d in directories:
label_dir = os.path.join(data_dir, d)
file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(type)]
# For each label, load it's images and add them to the images list.
# And add the label number (i.e. directory name) to the labels list.
for f in file_names:
images.append(skimage.data.imread(f))
labels.append(int(d))
images64 = [skimage.transform.resize(image, INPUT_SHAPE) for image in images]
y = np.array(labels)
y = to_categorical(y, num_categories)
X = np.array(images64)
return X, y
# Load datasets.
ROOT_PATH = "./"
original_dir = os.path.join(ROOT_PATH, "speed-limit-signs")
original_images, original_labels = load_data(original_dir, type=".ppm")
X, y = original_images, original_labels
Explanation: Preparation
End of explanation
# !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/augmented-signs.zip
# from zipfile import ZipFile
# zip = ZipFile('augmented-signs.zip')
# zip.extractall('.')
data_dir = os.path.join(ROOT_PATH, "augmented-signs")
augmented_images, augmented_labels = load_data(data_dir, type=".png")
# merge both data sets
all_images = np.vstack((X, augmented_images))
all_labels = np.vstack((y, augmented_labels))
# shuffle
# https://stackoverflow.com/a/4602224
p = numpy.random.permutation(len(all_labels))
shuffled_images = all_images[p]
shuffled_labels = all_labels[p]
X, y = shuffled_images, shuffled_labels
Explanation: Uncomment the next three cells if you want to train on the augmented image set
Otherwise overfitting cannot be avoided because the image set is simply too small
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
X_train.shape, y_train.shape
Explanation: Split test and train data 80% to 20%
End of explanation
from keras.applications.xception import Xception
model = Xception(classes=6, weights=None)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# !rm -rf ./tf_log
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# To start tensorboard
# tensorboard --logdir=./tf_log
# open http://localhost:6006
Explanation: Training Xception
Slightly optimized version of Inception: https://keras.io/applications/#xception
Inception V3 no longer uses the non-sequential tower architecture; it uses shortcuts instead: https://keras.io/applications/#inceptionv3
Uses Batch Normalization:
https://keras.io/layers/normalization/#batchnormalization
http://cs231n.github.io/neural-networks-2/#batchnorm
Batch Normalization still exist even in prediction model
normalizes activations for each batch around 0 and standard deviation close to 1
replaces Dropout except for final fc layers
as a next step might make sense to alter classifier to again have Dropout for training
All that makes it ideal for our use case
End of explanation
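The normalization step described above is small enough to write out directly — a NumPy sketch of the training-time forward pass only (a real layer also learns gamma/beta and tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # ~zero mean, ~unit std per feature
    return gamma * x_hat + beta

rng = np.random.RandomState(0)
activations = rng.normal(loc=5.0, scale=3.0, size=(128, 4))  # one batch, 4 features
out = batch_norm(activations)
```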
# Depends on hardware GPU architecture, model is really complex, batch needs to be small (this works well on K80)
BATCH_SIZE = 25
early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=1)
%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, callbacks=[tb_callback, early_stopping_callback], batch_size=BATCH_SIZE)
Explanation: This is a truly complex model
Batch size needs to be small, otherwise the model does not fit in memory
Will take long to train, even on GPU
on augmented dataset 4 minutes on K80 per Epoch: 400 Minutes for 100 Epochs = 6-7 hours
End of explanation
train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
original_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)
original_loss, original_accuracy
model.save('xception-augmented.hdf5')
!ls -lh xception-augmented.hdf5
Explanation: Each Epoch takes very long
Extremely impressive how fast it converges: Almost 100% for validation starting from epoch 25
TODO: Metrics for Augmented Data
Accuracy
Validation Accuracy
End of explanation
from keras.applications.resnet50 import ResNet50
model = ResNet50(classes=6, weights=None)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1)
!rm -rf ./tf_log
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# To start tensorboard
# tensorboard --logdir=./tf_log
# open http://localhost:6006
# Depends on hardware GPU architecture, model is really complex, batch needs to be small (this works well on K80)
BATCH_SIZE = 50
# https://github.com/fchollet/keras/issues/6014
# batch normalization seems to mess with accuracy when test data set is small, accuracy here is different from below
%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=BATCH_SIZE, callbacks=[tb_callback, early_stopping_callback])
# %time model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=BATCH_SIZE, callbacks=[tb_callback])
Explanation: Alternative: ResNet
basic ideas
depth does matter
8x deeper than VGG
possible by using shortcuts and skipping final fc layer
https://keras.io/applications/#resnet50
https://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba
http://arxiv.org/abs/1512.03385
End of explanation
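The shortcut idea fits in a few lines of NumPy: a block computes `relu(x + F(x))`, so if the residual branch `F` outputs zeros the block degenerates to (the ReLU of) the identity — which is what lets very deep stacks keep training. A toy sketch (random weights, not a real trained layer):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """out = relu(x + F(x)) with F a tiny two-layer branch."""
    fx = relu(x @ W1) @ W2  # residual branch F(x)
    return relu(x + fx)     # the shortcut adds the input back in

rng = np.random.RandomState(0)
x = rng.normal(size=(3, 8))
W1 = rng.normal(scale=0.1, size=(8, 8))
W2 = np.zeros((8, 8))              # zero-initialised second layer => F(x) == 0
out = residual_block(x, W1, W2)    # identical to relu(x): the identity comes for free
```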
train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
original_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)
original_loss, original_accuracy
model.save('resnet-augmented.hdf5')
!ls -lh resnet-augmented.hdf5
Explanation: Results are somewhat worse
Maybe it needs to train longer?
Batches can be larger, so training is faster even though it runs more epochs
Metrics for Augmented Data
Accuracy
Validation Accuracy
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-Nearest Neighbors (KNN)
by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http
Step1: Let us plot the first five examples of the train data (first row) and test data (second row).
Step2: Then we import shogun components and convert the data to shogun objects
Step3: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
Step4: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time
Step5: We have the prediction for each of the 13 k's now and can quickly compute the accuracies
Step6: So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly
Step7: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN
Step8: Evaluate KNN with and without Cover Tree. This takes a few seconds
Step9: Generate plots with the data collected in the evaluation
Step10: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to small number of classes cases. So for reference let us compare how a standard multiclass SVM performs wrt. KNN on the USPS data set from above.
Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).
Step11: Let's apply the SVM to the same test data set to compare results
Step12: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
Python Code:
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat, savemat
from numpy import random
from os import path
import matplotlib.pyplot as plt
%matplotlib inline
import shogun as sg
mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = mat['data']
Yall = np.array(mat['label'].squeeze(), dtype=np.double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
Xtrain = Xall[:, subset[:5000]]
Ytrain = Yall[subset[:5000]]
Xtest = Xall[:, subset[5000:6000]]
Ytest = Yall[subset[5000:6000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
print(Xtest.shape)
Explanation: K-Nearest Neighbors (KNN)
by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown.
The basics
The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias.
In SHOGUN, you can use KNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard EuclideanDistance, but in general, any subclass of Distance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K.
First we load and init data split:
End of explanation
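Before turning to Shogun's implementation, the memorize-and-vote idea can be made concrete in a few lines of plain NumPy (an illustrative toy sketch with made-up points, independent of Shogun):

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify one query point by majority vote among its k nearest neighbors."""
    # "Training" is just memorizing X_train/y_train; all work happens at query time.
    dists = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distance to every point
    nearest = np.argsort(dists)[:k]                     # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                    # majority vote

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.2, 0.1])))  # -> 0
```

Smaller k gives low bias but high variance, larger k the reverse, exactly as discussed above.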
def plot_example(dat, lab):
for i in range(5):
ax=plt.subplot(1,5,i+1)
plt.title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xtrain, Ytrain)
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xtest, Ytest)
Explanation: Let us plot the first five examples of the train data (first row) and test data (second row).
End of explanation
labels = sg.create_labels(Ytrain)
feats = sg.create_features(Xtrain)
k=3
dist = sg.create_distance('EuclideanDistance')
knn = sg.create_machine("KNN", k=k, distance=dist, labels=labels)
labels_test = sg.create_labels(Ytest)
feats_test = sg.create_features(Xtest)
knn.train(feats)
pred = knn.apply(feats_test)
print("Predictions", pred.get("labels")[:5])
print("Ground Truth", Ytest[:5])
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(pred, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Explanation: Then we import shogun components and convert the data to shogun objects:
End of explanation
idx=np.where(pred != Ytest)[0]
Xbad=Xtest[:,idx]
Ybad=Ytest[idx]
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xbad, Ybad)
Explanation: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
End of explanation
knn.put('k', 13)
multiple_k=knn.get("classify_for_multiple_k")
print(multiple_k.shape)
Explanation: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform cross-validation, we just use a small trick here that saves us lots of computation time: when we have to determine the $K\geq k$ nearest neighbors we will know the nearest neighbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step:
End of explanation
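The same trick can be sketched outside Shogun as well: sort the neighbors once, then take a majority vote over growing prefixes of that ordering (toy data; the function name is made up):

```python
import numpy as np

def predict_for_multiple_k(X_train, y_train, x_query, K=5):
    """One neighbor search, then one majority vote per k in 1..K."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    order = np.argsort(dists)                  # neighbors sorted once, nearest first
    preds = []
    for k in range(1, K + 1):
        labels, counts = np.unique(y_train[order[:k]], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [4.0, 4.0], [4.0, 5.0]])
y_train = np.array([0, 0, 0, 1, 1])
print(predict_for_multiple_k(X_train, y_train, np.array([0.2, 0.2])))  # one prediction per k
```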
for k in range(13):
print("Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest)))
Explanation: We have the prediction for each of the 13 k's now and can quickly compute the accuracies:
End of explanation
%%time
knn.put('k', 3)
knn.put('knn_solver', "KNN_BRUTE")
pred = knn.apply(feats_test)
# FIXME: causes SEGFAULT
# %%time
# knn.put('k', 3)
# knn.put('knn_solver', "KNN_COVER_TREE")
# pred = knn.apply(feats_test)
Explanation: So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just set the knn_solver option on the KNN machine (as in the code above) to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above:
End of explanation
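Cover Trees are one of several index structures that trade a one-time build cost for sub-linear queries. A minimal 1-D analogue of that idea, using a sorted array and binary search (this is not how Cover Trees work internally; it only illustrates the index-once, query-fast pattern):

```python
import bisect
import numpy as np

def nn_brute(train, q):
    """O(n) per query: compare the query against every memorized point."""
    return float(train[int(np.argmin(np.abs(train - q)))])

def nn_indexed(sorted_train, q):
    """O(log n) per query after a one-time sort (the 'index building' step)."""
    i = bisect.bisect_left(sorted_train, q)
    candidates = sorted_train[max(i - 1, 0):i + 1]  # at most two neighbors to check
    return min(candidates, key=lambda v: abs(v - q))

train = np.array([0.1, 0.4, 0.35, 0.9, 0.7])
sorted_train = sorted(train.tolist())
print(nn_brute(train, 0.5), nn_indexed(sorted_train, 0.5))  # both find 0.4
```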
def evaluate(labels, feats, use_cover_tree=False):
import time
split = sg.create_splitting_strategy("CrossValidationSplitting", labels=labels, num_subsets=Nsplit)
split.build_subsets()
accuracy = np.zeros((Nsplit, len(all_ks)))
acc_train = np.zeros(accuracy.shape)
time_test = np.zeros(accuracy.shape)
for i in range(Nsplit):
idx_train = split.generate_subset_inverse(i)
idx_test = split.generate_subset_indices(i)
for j, k in enumerate(all_ks):
#print "Round %d for k=%d..." % (i, k)
feats.add_subset(idx_train)
labels.add_subset(idx_train)
dist = sg.create_distance('EuclideanDistance')
dist.init(feats, feats)
knn = sg.create_machine("KNN", k=k, distance=dist, labels=labels)
#knn.set_store_model_features(True)
#FIXME: causes SEGFAULT
if use_cover_tree:
continue
# knn.put('knn_solver', "KNN_COVER_TREE")
else:
knn.put('knn_solver', "KNN_BRUTE")
knn.train()
evaluator = sg.create_evaluation("MulticlassAccuracy")
pred = knn.apply()
acc_train[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
feats.add_subset(idx_test)
labels.add_subset(idx_test)
            t_start = time.perf_counter()  # time.clock() was removed in Python 3.8
            pred = knn.apply_multiclass(feats)
            time_test[i, j] = (time.perf_counter() - t_start) / labels.get_num_labels()
accuracy[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
Explanation: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:
End of explanation
labels = sg.create_labels(Ytest)
feats = sg.create_features(Xtest)
print("Evaluating KNN...")
wo_ct = evaluate(labels, feats, use_cover_tree=False)
# wi_ct = evaluate(labels, feats, use_cover_tree=True)
print("Done!")
Explanation: Evaluate KNN with and without Cover Tree. This takes a few seconds:
End of explanation
fig = plt.figure(figsize=(8,5))
plt.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')
# plt.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')
plt.legend(["Test Accuracy", "Training Accuracy"])
plt.xlabel('K')
plt.ylabel('Accuracy')
plt.title('KNN Accuracy')
plt.tight_layout()
fig = plt.figure(figsize=(8,5))
plt.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')
# plt.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')
plt.xlabel("K")
plt.ylabel("time")
plt.title('KNN time')
plt.legend(["Plain KNN", "CoverTree KNN"], loc='center right')
plt.tight_layout()
Explanation: Generate plots with the data collected in the evaluation:
End of explanation
width=80
C=1
gk=sg.create_kernel("GaussianKernel", width=width)
svm=sg.create_machine("GMNPSVM", C=C, kernel=gk, labels=labels)
_=svm.train(feats)
Explanation: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN, multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples using similarity measures (so-called kernels) instead of the distances that KNN uses. When applied, they are in Big-O notation computationally as expensive as KNN, but involve an additional (costly) training step. They do not scale very well to cases with a huge number of classes, but usually lead to favorable results when applied to cases with a small number of classes. So for reference, let us compare how a standard multiclass SVM performs relative to KNN on the USPS data set from above.
Let us first train a multiclass SVM using a Gaussian kernel (loosely speaking, the SVM counterpart of the Euclidean distance).
End of explanation
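What the Gaussian kernel computes can be sketched directly in NumPy. The convention below is k(x, y) = exp(-||x - y||^2 / width), which may differ from Shogun's exact parametrization of the `width=80` used above:

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, width=80.0):
    """Pairwise similarities: distance 0 maps to 1, large distances decay toward 0."""
    # Squared Euclidean distances via ||x||^2 + ||y||^2 - 2 x.y
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / width)  # clamp tiny negatives from rounding

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = gaussian_kernel_matrix(X, X)
print(K)  # ones on the diagonal, off-diagonal entries shrink with distance
```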
out=svm.apply(feats_test)
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(out, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Explanation: Let's apply the SVM to the same test data set to compare results:
End of explanation
Xrem=Xall[:,subset[6000:]]
Yrem=Yall[subset[6000:]]
feats_rem=sg.create_features(Xrem)
labels_rem=sg.create_labels(Yrem)
out=svm.apply(feats_rem)
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get("labels") != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=plt.figure(figsize=(17,6))
plt.gray()
plot_example(Xbad, Ybad)
Explanation: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
End of explanation |
8,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Screenshots and Movies with WebGL
One can use the REBOUND WebGL ipython widget to capture screenshots of a simulation. These screenshots can then be easily compiled into a movie.
The widget is using the ipywidgets package which needs to be installed and enabled. More information on this can be found in the ipywidgets documentation at https
Step1: You can now drag the widget with your mouse or touchpad to look at the simulation from a different angle. Keep the shift key pressed while you drag to zoom in or out.
To take a single screenshot, all you have to do is call the takeScreenshot function of the widget.
Step2: You will see that there is now a file screenshot00000.png in the current directory. It shows the same view as the WebGL widget in the notebook. To get a larger image, increase the size of the widget (see the documentation for the widget for all possible options).
We could now rotate the widget or integrate the simulation. If we then execute the same command takeScreenshot command again, we will get another file screenshot00001.png.
Consider the following code
Step3: This will not produce the desired outcome (in fact it will throw an exception). The reason is complex. In short, ipywidgets provides no blocking calls to wait for updates of a widget because the widget updates make use of the ipython event loop which does not get run during the execution of a cell.
Thus, to capture multiple screenshots at different times, one either needs to take one screenshot per cell, or use the following more convenient way | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1) # add a star
for i in range(10):
sim.add(m=1e-3,a=0.4+0.1*i,inc=0.03*i,omega=5.*i) # Jupiter mass planets on close orbits
sim.move_to_com() # Move to the centre of mass frame
w = sim.getWidget()
w
Explanation: Screenshots and Movies with WebGL
One can use the REBOUND WebGL ipython widget to capture screenshots of a simulation. These screenshots can then be easily compiled into a movie.
The widget is using the ipywidgets package which needs to be installed and enabled. More information on this can be found in the ipywidgets documentation at https://ipywidgets.readthedocs.io/en/latest/user_install.html. You also need a browser and a graphics card that supports WebGL.
Note that this is a new feature and might not work on all systems. We've tested it on python 3.5.2.
Let's first create a simulation and display it using the REBOUND WebGL widget.
End of explanation
w.takeScreenshot()
Explanation: You can now drag the widget with your mouse or touchpad to look at the simulation from a different angle. Keep the shift key pressed while you drag to zoom in or out.
To take a single screenshot, all you have to do is call the takeScreenshot function of the widget.
End of explanation
# w.takeScreenshot()
# sim.integrate(10)
# w.takeScreenshot()
Explanation: You will see that there is now a file screenshot00000.png in the current directory. It shows the same view as the WebGL widget in the notebook. To get a larger image, increase the size of the widget (see the documentation for the widget for all possible options).
We could now rotate the widget or integrate the simulation. If we then execute the same command takeScreenshot command again, we will get another file screenshot00001.png.
Consider the following code:
End of explanation
times = [0,10,100]
w.takeScreenshot(times)
Explanation: This will not produce the desired outcome (in fact it will throw an exception). The reason is complex. In short, ipywidgets provides no blocking calls to wait for updates of a widget because the widget updates make use of the ipython event loop which does not get run during the execution of a cell.
Thus, to capture multiple screenshots at different times, one either needs to take one screenshot per cell, or use the following more convenient way:
End of explanation |
8,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stockmarket analysis with pmdarima
This example follows the post on Towards Data Science (TDS), demonstrating the use of pmdarima to simplify time series analysis.
Step1: Import the data
pmdarima contains an embedded datasets submodule that allows us to try out models on common datasets. We can load the MSFT stock data from pmdarima 1.3.0+
Step2: Split the data
As in the blog post, we'll use 80% of the samples as training data. Note that a time series' train/test split is different from that of a dataset without temporality; order must be preserved if we hope to discover any notable trends.
Step3: Pre-modeling analysis
TDS fixed p at 5 based on some lag plot analysis
Step4: All lags look fairly linear, so it's a good indicator that an auto-regressive model is a good choice. Therefore, we'll allow the auto_arima to select the lag term for us, up to 6.
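The linearity seen in the lag plots can also be checked numerically with the lag-k autocorrelation. A small self-contained sketch on a toy trending series (in the notebook itself you would pass the `Open` column instead):

```python
import numpy as np

def lag_autocorr(y, lag):
    """Pearson correlation between the series and itself shifted by `lag` steps."""
    return float(np.corrcoef(y[:-lag], y[lag:])[0, 1])

# Toy trending series standing in for the stock prices
y = np.cumsum(np.ones(200) * 0.5) + np.sin(np.linspace(0, 8, 200))
for lag in (1, 3, 6):
    print(lag, round(lag_autocorr(y, lag), 3))  # values near 1 suggest a strong AR signal
```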
Estimating the differencing term
We can estimate the best differencing term with several statistical tests
Step5: Use auto_arima to fit a model on the data. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import pmdarima as pm
print(f"Using pmdarima {pm.__version__}")
Explanation: Stockmarket analysis with pmdarima
This example follows the post on Towards Data Science (TDS), demonstrating the use of pmdarima to simplify time series analysis.
End of explanation
from pmdarima.datasets.stocks import load_msft
df = load_msft()
df.head()
Explanation: Import the data
pmdarima contains an embedded datasets submodule that allows us to try out models on common datasets. We can load the MSFT stock data from pmdarima 1.3.0+:
End of explanation
train_len = int(df.shape[0] * 0.8)
train_data, test_data = df[:train_len], df[train_len:]
y_train = train_data['Open'].values
y_test = test_data['Open'].values
print(f"{train_len} train samples")
print(f"{df.shape[0] - train_len} test samples")
Explanation: Split the data
As in the blog post, we'll use 80% of the samples as training data. Note that a time series' train/test split is different from that of a dataset without temporality; order must be preserved if we hope to discover any notable trends.
End of explanation
from pandas.plotting import lag_plot
fig, axes = plt.subplots(3, 2, figsize=(12, 16))
plt.title('MSFT Autocorrelation plot')
# The axis coordinates for the plots
ax_idcs = [
(0, 0),
(0, 1),
(1, 0),
(1, 1),
(2, 0),
(2, 1)
]
for lag, ax_coords in enumerate(ax_idcs, 1):
ax_row, ax_col = ax_coords
axis = axes[ax_row][ax_col]
lag_plot(df['Open'], lag=lag, ax=axis)
axis.set_title(f"Lag={lag}")
plt.show()
Explanation: Pre-modeling analysis
TDS fixed p at 5 based on some lag plot analysis:
End of explanation
from pmdarima.arima import ndiffs
kpss_diffs = ndiffs(y_train, alpha=0.05, test='kpss', max_d=6)
adf_diffs = ndiffs(y_train, alpha=0.05, test='adf', max_d=6)
n_diffs = max(adf_diffs, kpss_diffs)
print(f"Estimated differencing term: {n_diffs}")
Explanation: All lags look fairly linear, so it's a good indicator that an auto-regressive model is a good choice. Therefore, we'll allow the auto_arima to select the lag term for us, up to 6.
Estimating the differencing term
We can estimate the best differencing term with several statistical tests:
End of explanation
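To see what the differencing term d means in practice, note that a first difference (`np.diff`) turns a linear trend into a constant, stationary series, which is the property the KPSS/ADF tests above are checking (toy data, not the MSFT series):

```python
import numpy as np

y = 2.0 * np.arange(50) + 5.0   # non-stationary: steady upward trend
dy = np.diff(y)                 # the d=1 transformation: first differences
print(y[:3])                    # [5. 7. 9.]  - still trending
print(dy[:3])                   # [2. 2. 2.]  - constant, i.e. stationary
```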
auto = pm.auto_arima(y_train, d=n_diffs, seasonal=False, stepwise=True,
suppress_warnings=True, error_action="ignore", max_p=6,
max_order=None, trace=True)
print(auto.order)
from sklearn.metrics import mean_squared_error
from pmdarima.metrics import smape
model = auto
def forecast_one_step():
fc, conf_int = model.predict(n_periods=1, return_conf_int=True)
return (
fc.tolist()[0],
np.asarray(conf_int).tolist()[0])
forecasts = []
confidence_intervals = []
for new_ob in y_test:
fc, conf = forecast_one_step()
forecasts.append(fc)
confidence_intervals.append(conf)
# Updates the existing model with a small number of MLE steps
model.update(new_ob)
print(f"Mean squared error: {mean_squared_error(y_test, forecasts)}")
print(f"SMAPE: {smape(y_test, forecasts)}")
fig, axes = plt.subplots(2, 1, figsize=(12, 12))
# --------------------- Actual vs. Predicted --------------------------
axes[0].plot(y_train, color='blue', label='Training Data')
axes[0].plot(test_data.index, forecasts, color='green', marker='o',
label='Predicted Price')
axes[0].plot(test_data.index, y_test, color='red', label='Actual Price')
axes[0].set_title('Microsoft Prices Prediction')
axes[0].set_xlabel('Dates')
axes[0].set_ylabel('Prices')
axes[0].set_xticks(np.arange(0, 7982, 1300).tolist(), df['Date'][0:7982:1300].tolist())
axes[0].legend()
# ------------------ Predicted with confidence intervals ----------------
axes[1].plot(y_train, color='blue', label='Training Data')
axes[1].plot(test_data.index, forecasts, color='green',
label='Predicted Price')
axes[1].set_title('Prices Predictions & Confidence Intervals')
axes[1].set_xlabel('Dates')
axes[1].set_ylabel('Prices')
conf_int = np.asarray(confidence_intervals)
axes[1].fill_between(test_data.index,
conf_int[:, 0], conf_int[:, 1],
alpha=0.9, color='orange',
label="Confidence Intervals")
axes[1].set_xticks(np.arange(0, 7982, 1300).tolist(), df['Date'][0:7982:1300].tolist())
axes[1].legend()
df["Date"]
Explanation: Use auto_arima to fit a model on the data.
End of explanation |
8,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source
Step1: Elements Are Lists
Step2: Attributes Are Dictionaries
Step3: Searching
Step4: Generating XML | Python Code:
#import lxml.etree as etree
try:
from lxml import etree as etree
except ImportError:
import xml.etree.ElementTree as etree
tree = etree.parse('feed.xml')
root = tree.getroot()
root
Explanation: Source : Dive Into Python - Chapter 12 XML by Mark Pilgrim
XML overview
XML is a generalized way of describing hierarchical structured data.
An xml document contains one or more elements, which are delimited by start and end tags. Elements can be nested to any depth.
The first element in every xml document is called the root element. An xml document can only have one root element.
Elements can have attributes, which are name-value pairs. Attributes are listed within the start tag of an element and separated by whitespace. Attribute names can not be repeated within an element. Attribute values must be quoted. You may use either single or double quotes.
An element’s attributes form an unordered set of keys and values, like a Python dictionary.
Elements can have text content.
Like Python functions can be declared in different modules, xml elements can be declared in different namespaces. Namespaces usually look like URLs.
You can also use an xmlns:prefix declaration to define a namespace and associate it with a prefix. Then each element in that namespace must be explicitly declared with the prefix.
xml documents can contain character encoding information on the first line, before the root element.
Parsing XML
End of explanation
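The namespace rules above are why tag names later in this notebook look like `{http://www.w3.org/2005/Atom}entry`. A tiny self-contained example with a made-up namespace URI:

```python
import xml.etree.ElementTree as etree

doc = b"""<?xml version='1.0' encoding='utf-8'?>
<feed xmlns='http://example.com/ns'>
  <title type='text'>hello</title>
</feed>"""

root = etree.fromstring(doc)
print(root.tag)                  # {http://example.com/ns}feed
title = root.find('{http://example.com/ns}title')
print(title.attrib, title.text)  # {'type': 'text'} hello
```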
root.tag
len(root)
for child in root:
print(child)
Explanation: Elements Are Lists
End of explanation
root.attrib
c4_att = root[4].attrib
c4_att
c4_att['rel'],c4_att['href']
Explanation: Attributes Are Dictionaries
End of explanation
# find 1st matching entry
tree.find('//{http://www.w3.org/2005/Atom}entry')
# find all entry elements
tree.findall('//{http://www.w3.org/2005/Atom}entry')
# find all category elements
tree.findall('//{http://www.w3.org/2005/Atom}category')
# find all category element with attribute term="mp4"
tree.findall('//{http://www.w3.org/2005/Atom}category[@term="mp4"]')
# find all elements with href attribute
href_nodes = tree.findall('//{http://www.w3.org/2005/Atom}*[@href]')
for e in href_nodes:
print(e.attrib['href']) # get link url
# advanced search with XPath
NSMAP = {'atom': 'http://www.w3.org/2005/Atom'}
entries = tree.xpath("//atom:category[@term='accessibility']/..", namespaces=NSMAP)
entries[0].tag
title = entries[0].xpath('./atom:title/text()', namespaces=NSMAP)
title
Explanation: Searching
End of explanation
new_feed = etree.Element('{http://www.w3.org/2005/Atom}feed',
attrib={'{http://www.w3.org/XML/1998/namespace}lang': 'en'})
print(etree.tostring(new_feed))
# add more element/text
title = etree.SubElement(new_feed, 'title', attrib={'type':'html'})
print(etree.tounicode(new_feed))
title.text = 'Dive into Python!'
print(etree.tounicode(new_feed))
# pretty print XML
print(etree.tounicode(new_feed, pretty_print=True))
Explanation: Generating XML
End of explanation |
8,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ZPIC Python interface
This notebook illustrates the use of the ZPIC Python interface to run a simulation and save the results to disk.
Calling ZPIC from Python requires importing the appropriate ZPIC module. For this example we will be using the EM1D code, so we need to import the em1d module.
Step1: Initializing a ZPIC simulation requires setting the simulation box and timestep
Step2: Next we need to describe the particle species in the simulation. In this example (demonstration of the two stream instability) we are using 2 species
Step3: Writing diagnostic output to disk
You can run ZPIC inside the notebook (or any interactive Python/iPython session) and access all the simulation data directly in memory, without writing any output to disk, as described in other notebooks in this folder. For most situations this is the recommended way of using the code. However, if your simulation takes a long time to compute, you may want to write diagnostic information to disk for post-processing later.
To do this you must define the required diagnostics in a python function that accepts as a single argument a simulation object. This routine will be called once per iteration, and it can access global variables defined in the Python code, e.g.
Step4: We can now initialize the simulation, passing in the function we just created
Step5: To run the simulation use the run method, giving the final time as the sole parameter
Step6: Accessing simulation results
Simulation results are saved in the ZDF format, as in normal (non-Python) ZPIC simulations, and can now be visualized in the notebook | Python Code:
import em1d
Explanation: ZPIC Python interface
This notebook illustrates the use of the ZPIC Python interface to run a simulation and save the results to disk.
Calling ZPIC from Python requires importing the appropriate ZPIC module. For this example we will be using the EM1D code, so we need to import the em1d module.
End of explanation
import numpy as np
nx = 120
box = 4 * np.pi
dt = 0.1
tmax = 50.0
ndump = 10
Explanation: Initializing a ZPIC simulation requires setting the simulation box and timestep
End of explanation
ppc = 500
ufl = [0.4, 0.0, 0.0]
uth = [0.001,0.001,0.001]
right = em1d.Species( "right", -1.0, ppc, ufl = ufl, uth = uth )
ufl[0] = -ufl[0]
left = em1d.Species( "left", -1.0, ppc, ufl = ufl, uth = uth )
Explanation: Next we need to describe the particle species in the simulation. In this example (demonstration of the two stream instability) we are using 2 species:
End of explanation
def rep( sim ):
# sim.n has the current simulation iteration
if (sim.n % ndump == 0):
right.report("particles")
left.report("particles")
sim.emf.report("E",0)
Explanation: Writing diagnostic output to disk
You can run ZPIC inside the notebook (or any interactive Python/iPython session) and access all the simulation data directly in memory, without writing any output to disk, as described in other notebooks in this folder. For most situations this is the recommended way of using the code. However, if your simulation takes a long time to compute, you may want to write diagnostic information to disk for post-processing later.
To do this you must define the required diagnostics in a python function that accepts as a single argument a simulation object. This routine will be called once per iteration, and it can access global variables defined in the Python code, e.g.:
End of explanation
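Stripped of ZPIC specifics, the per-iteration callback pattern used here is plain Python; a minimal sketch with a hypothetical stand-in class (not the em1d API):

```python
class FakeSim:
    """Minimal stand-in for a simulation object with an iteration counter."""
    def __init__(self):
        self.n = 0

    def run_steps(self, steps, report):
        for _ in range(steps):
            self.n += 1
            report(self)        # the user callback is invoked once per iteration

ndump = 10
calls = []

def rep(sim):
    if sim.n % ndump == 0:      # only act on every ndump-th iteration
        calls.append(sim.n)     # a global, just like ndump in the cell above

FakeSim().run_steps(30, rep)
print(calls)  # [10, 20, 30]
```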
sim = em1d.Simulation( nx, box, dt, species = [right,left], report = rep )
Explanation: We can now initialize the simulation, passing in the function we just created
End of explanation
sim.run(tmax)
Explanation: To run the simulation use the run method, giving the final time as the sole parameter:
End of explanation
import zdf
import matplotlib.pyplot as plt
(particles,info) = zdf.read("PARTICLES/particles-right-000500.zdf")
x = particles['x1']
y = particles['u1']
plt.plot(x, y, '.', ms=1,alpha=0.1)
title = "u_1-x_1\,phasespace"
timeLabel = "t = {:g}\,[{:s}]".format(info.iteration.t, info.iteration.tunits)
plt.title(r'$\sf{' + title + r'}$' + '\n' + r'$\sf{' + timeLabel + r'}$')
xlabel = "x_1\,[{:s}]".format( info.particles.units['x1'] )
ylabel = "u_1\,[{:s}]".format( info.particles.units['u1'] )
plt.xlabel(r'$\sf{' + xlabel + r'}$')
plt.ylabel(r'$\sf{' + ylabel + r'}$')
(particles,info) = zdf.read("PARTICLES/particles-left-000500.zdf")
x = particles['x1']
y = particles['u1']
plt.plot(x, y, '.', ms=1,alpha=0.1)
plt.grid(True)
plt.show()
Explanation: Accessing simulation results
Simulation results are saved in the ZDF format, as in normal (non-Python) ZPIC simulations, and can now be visualized in the notebook:
End of explanation |
8,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is a "loop"?
In programming, it is often necessary to execute a given
sequence of operations multiple times.
A loop is a fundamental programming construct that allows
a given fragment of source code to be executed repeatedly.
Depending on the kind of loop, the code inside it is repeated
either a fixed number of times or for as long as a given condition holds.
Step1: Exercise
Step2: Exercise | Python Code:
counter = 1
while counter <= 10:
print(counter)
counter = counter + 1
print("end")
Explanation: What is a "loop"?
In programming, it is often necessary to execute a given
sequence of operations multiple times.
A loop is a fundamental programming construct that allows
a given fragment of source code to be executed repeatedly.
Depending on the kind of loop, the code inside it is repeated
either a fixed number of times or for as long as a given condition holds.
End of explanation
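As an aside not in the original lesson, the same count from 1 to 10 can also be written with Python's `for` loop over a `range`, which manages the counter for us:

```python
for counter in range(1, 11):   # range(1, 11) yields 1, 2, ..., 10
    print(counter)
print("end")
```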
counter = 1
product = 1
while counter <= 5:
product = product * counter
print("counter: ", counter)
print("product: ", product)
counter = counter + 1
print(product)
Explanation: Exercise: Write a program that computes the product of the numbers from 1 to 5.
End of explanation
a = 2
if a % 2 == 0:
print("even")
else:
print("odd")
counter = 1
product = 1
while counter <= 5:
    if counter % 2 == 0:
        print("counter %d is even" % counter)
        print("product = %d * %d" % (product, counter))
        # only even counters contribute to the product
        product = product * counter
    print("counter: ", counter)
    print("product: ", product)
    counter = counter + 1
print(product)
Explanation: Exercise: Write a program that computes the product of the even numbers from 1 to 5.
End of explanation |
8,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vehicle Data
Data1.txt
These are the values from the file "data1.txt". We used the following starting values
Step1: Now that we have imported the corresponding values into numpy, we can explore and analyze them.
Let us first look at a few plots that can describe the data sets.
Step2: As we can see, the graph is roughly parabolic. There is an unusual value at about 160 MotorTorque. Let us examine it more closely together with its neighboring values. It is the 7th value.
Step3: Note that the MotorTorque value does not change here. This is a deviation that I caused myself, since I accidentally stopped the program there, and after restarting it the vehicle had to start from a standstill. What is interesting, however, is that it made a considerable difference whether the vehicle had already picked up speed or started from a standstill. Let us look at this difference more closely.
Step4: At this point we obtain a shift of
Step5: 24.28 seconds. This seems rather unusual, though, since the corresponding initial values are clearly faster even for the slow runs. Let us therefore look at how the time is distributed over the individual gates. Since, according to the measurements, the speed is the same, we examine it at the individual gates. For this we insert the corresponding values into a list; the indices 5 and 6 are the important ones for us. We use a 2D array with the first value for the gate and the second value for the corresponding time, and we do this for the first 20 values, which should be a sufficient sample size.
Step6: Even though both graphs have very pronounced peaks where they differ strongly from each other, they also have stretches in which they had the same speed. We therefore use the average speed to reach a final verdict.
Step7: Here we see that the average speeds are quite close to each other. The deviation could be due to an unfavorable choice of the measurement window. Since measurements were taken only once per second, the vehicle may at certain gates already have been further along than in the previous measurement row. The timing of the measurement should therefore be chosen more carefully.
This is possible either by increasing the data resolution, i.e. by measuring at smaller intervals, or by using fixed measurement points. These could, for example, be emitted whenever one of the waypoints is reached.
A third option is a combination of both methods. For easy readability it would be helpful to create two files: in one, the values are written out as soon as a waypoint is reached, and in the other, the data is written out every 0.5 s. In the first file the position information could then also be dropped, since the waypoints are always at the same location.
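The average-speed comparison described above can be sketched with NumPy. The gate times below are made-up placeholders (not the measured data), and `gate_distance` is an assumed constant spacing; only the computation pattern is meant:

```python
import numpy as np

gate_distance = 10.0                        # assumed constant spacing between gates
# Hypothetical cumulative times (s) at each gate for a rolling vs. a standing start
t_rolling = np.array([0.0, 4.1, 8.0, 12.2, 16.1])
t_standing = np.array([0.0, 6.5, 10.3, 14.4, 18.3])

def mean_speed(times):
    segment_times = np.diff(times)          # time spent between consecutive gates
    return float(np.mean(gate_distance / segment_times))

print(mean_speed(t_rolling), mean_speed(t_standing))  # rolling start is faster on average
```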
Further observations
Relation between Torque and Time
Another interesting observation would be to explore the relationship between Torque and time. For this we use the following equation. | Python Code:
import numpy as np
import matplotlib.pyplot as plt

#Create Lists
time = [233.32,198.92,184.7,168.18,148.22,138.88,151.76,127.48,119.12,115.24,110.7,104.28,105.52,109.2,120.7401,147.027]
motorTorque = [100,110,121,133.1,146.41,161.051,161.051,177.1561,194.8717,214.3589,235.7948,259.3743,285.3117,313.8429,345.2272,379.74992]
print(time)
print('elements in time: '+str(len(time)))
print(motorTorque)
print('elements in motorTorque: '+str(len(motorTorque)))
Explanation: Vehicle Data
Data1.txt
These are the values from the file "data1.txt". The run started with the following initial values:
- MotorTorque : 100
- maxSpeed : 10
After each lap both values were increased by 10%. The lists hold the finishing times of the laps as well as the MotorTorque. I only look at one of the two variables here, since they are directly proportional to each other. Note that the values for Torque, maxSpeed and SteerAngle come after the value series.
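Since the torque grows by 10% per round, the sequence is geometric and can be reproduced directly; a quick sketch (the duplicated 161.051 in the list above is the one point that breaks the pattern, caused by the restart discussed later):

```python
# Expected torque progression: 100 * 1.1**n for 16 rounds
start_torque = 100.0
expected = [round(start_torque * 1.1**n, 4) for n in range(16)]
print(expected[:5])  # [100.0, 110.0, 121.0, 133.1, 146.41]
```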
End of explanation
import numpy as np
import matplotlib.pyplot as plt

np_time = np.array(time)
np_torque = np.array(motorTorque)
np_2d=np.array([np_time,np_torque])
np_2d
np_2d.shape
plot = plt.plot(np_torque,np_time)
plt.xlabel('MotorTorque')
plt.ylabel('Time in s/lap')
plt.title('Measurements from data1.txt')
plt.show()
Explanation: Now that we have imported the values into numpy, we can explore and evaluate them.
Let us first look at a few plots that describe the data sets.
End of explanation
print(np_2d[:,5:8])
Explanation: As we can see, the graph is roughly parabola-shaped. There is an unusual value at about 160 MotorTorque. Let us examine it more closely together with its neighbouring values; it is the 7th value.
End of explanation
plot = plt.plot(np_torque[4:8],np_time[4:8])
plt.xlabel('MotorTorque')
plt.ylabel('Time in s/lap')
plt.title('Measurements around value 7')
plt.show()
Explanation: Note that the MotorTorque value does not change here. This deviation was caused by me: I accidentally stopped the program at that point, and because of the restart the vehicle had to start from a standstill. Interestingly, however, it made a considerable difference whether the vehicle had already picked up speed or started from a standstill. Let us look at this difference more closely.
End of explanation
print(np_time[6]-np_time[7])
Explanation: At this point we obtain a shift of
End of explanation
np_gateSpeed6 = np.array([[1,3.28],[2,7.60],[3,13.70],[4,1.74],[5,5.78],[6,3.58],[7,0.40],[8,-1.8],[9,-1.2],[10,11.53],[11,-0.95],[12,11.71],[13,3.34],[14,10.09],[15,5.95],[16,4.99],[17,4.11],[18,6.74],[19,6.30],[20,8.4]])
np_gateSpeed5 = np.array([[1,2.55],[2,5.41],[3,9.66],[4,4.04],[5,5.44],[6,10.01],[7,8.86],[8,4.41],[9,5.7],[10,9.68],[11,-2.45],[12,12.71],[13,5.9],[14,9.44],[15,5.84],[16,-0.18],[17,2.05],[18,7.73],[19,2.93],[20,4.48]])
np_gate = np_gateSpeed6[:,0]
np_speed6 = np_gateSpeed6[:,1]
np_speed5 = np_gateSpeed5[:,1]
print(np_gate) ; print(np_speed6) ; print(np_speed5)
plt.plot(np_gate,np_speed5)
plt.plot(np_gate,np_speed6)
plt.xlabel('Gates')
plt.ylabel('Speed')
plt.title('Comparison of Value 5 and 6')
plt.show()
Explanation: 24.28 seconds. This seems rather unusual, however, since the corresponding early laps are clearly faster even among the slow runs. Let us therefore look at how the time is distributed across the individual gates. Since, according to the log, the overall speed is supposedly the same, we compare it at the individual gates. For this we put the corresponding values into a list; the indices of interest are 5 and 6. We use a 2D array whose first column holds the gate and whose second column the corresponding speed. We do this for the first 20 values, which should be a sufficient sample.
End of explanation
# Average only the speed column; column 0 of the 2D arrays holds the gate index
avgGS6 = np.mean(np_speed6)
avgGS5 = np.mean(np_speed5)
print('Deviation = '+str(100-(avgGS6 / avgGS5 *100))+'%')
Explanation: Even though both graphs have very pronounced peaks where they differ strongly, there are also stretches where they travelled at the same speed. We therefore use the average speed to reach a final verdict.
End of explanation
np_coeff = (np_time/np_torque)
x = np.arange(1, 17)  # rounds 1..16
plt.xlabel('rounds')
plt.ylabel('$seconds / Torque$')
coeff_plt = plt.plot(x,np_coeff)
plt.show()

best = min(time)  # look up the fastest lap instead of hard-coding 104.28
print('lowest time: '+str(best))
print('torque: '+str(motorTorque[time.index(best)]))
print('Index: '+str(time.index(best)))
Explanation: Here we see that the average speeds are quite close. The deviation could be due to an unfavourable choice of the measurement window: since a sample was taken only once per second, the vehicle may already have been further along at certain gates than the last recorded row suggests. The moment of measurement should therefore be chosen more carefully.
This is possible either by increasing the data resolution, i.e. by sampling at shorter intervals, or by using fixed measurement points, which could for example be emitted whenever a waypoint is reached.
A third possibility is a combination of both methods. For readability it would help to keep two files: in one, a record is written whenever a waypoint is reached; in the other, the data is written every 0.5 s. In the first file the position information could also be dropped, since the waypoints are always at the same place.
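A minimal sketch of the two-file logging approach described above: one file per waypoint event, one file sampled at a fixed interval. All names here (`DualLogger`, `on_waypoint`, `on_tick`) are hypothetical and not part of the existing simulation:

```python
class DualLogger:
    """Writes waypoint events to one file and fixed-interval samples to another."""

    def __init__(self, waypoint_path, sample_path, interval=0.5):
        self.wp_file = open(waypoint_path, 'w')
        self.sample_file = open(sample_path, 'w')
        self.interval = interval
        self.last_sample = float('-inf')  # so the very first tick is logged

    def on_waypoint(self, gate, t, speed):
        # No position column: the waypoints are always at the same place.
        self.wp_file.write(f"{gate};{t:.3f};{speed:.2f}\n")

    def on_tick(self, t, pos, speed):
        # Emit a sample at most every `interval` seconds (0.5 s by default).
        if t - self.last_sample >= self.interval:
            self.sample_file.write(f"{t:.3f};{pos};{speed:.2f}\n")
            self.last_sample = t

    def close(self):
        self.wp_file.close()
        self.sample_file.close()
```

The waypoint file then needs no position column, while the sample file keeps the full 0.5 s resolution.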
Further observations
Relation between torque and time
Another interesting observation would be to explore the relationship between torque and time. To do so, we look at the ratio of the two quantities.
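Written out, the coefficient computed and plotted in the code cell above is simply the ratio of lap time to motor torque for each round $i$:

```latex
c_i = \frac{t_i}{M_i}
```

where $t_i$ is the lap time in seconds and $M_i$ the MotorTorque of round $i$.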
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:20
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
8,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Halo Scattering
v1 -- Mainly in the context of the FRB 181112 paper
Step1: Kolmogorov estimate
Equation 1 of Prochaska et al. 2019 by JP Macquart
Step2: $\tau = 1 $ms, $z_{\rm FRB} = 1$, $z_{\rm halo} = 0.5$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
Step3: $\tau = 1 \mu$s, $z_{\rm FRB} = 2$, $z_{\rm halo} = 1$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
Step4: Mist
Formulated by Matt McQuinn in Prochaska+2019
$\theta$
Step5: $\tau$
Step6: n_e from $\tau$
Test values in Prochaska+19
Step7: $\tau = 1 $ms, $z_{\rm FRB} = 1$, $z_{\rm halo} = 0.5$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
Step8: $\tau = 1 \mu$s, $z_{\rm FRB} = 2$, $z_{\rm halo} = 1$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
Step9: Tests
JP's notes | Python Code:
# imports
import numpy as np
from importlib import reload
from astropy import units
from astropy import constants
from frb import turb_scattering as frb_scatt
Explanation: Halo Scattering
v1 -- Mainly in the context of the FRB 181112 paper
End of explanation
reload(frb_scatt)
z_FRB = 0.4755
z_halo = 0.367
n_e = frb_scatt.ne_from_tau_kolmogorov(40e-6*units.s, z_FRB, z_halo,
1.3*units.GHz, L0=1*units.kpc, L=50*units.kpc)#, debug=True)
n_e
Explanation: Kolmogorov estimate
Equation 1 of Prochaska et al. 2019 by JP Macquart
End of explanation
reload(frb_scatt)
z_FRB = 1
z_halo = 0.5
n_e = frb_scatt.ne_from_tau_kolmogorov(1e-3*units.s, z_FRB, z_halo, 1*units.GHz, L0=1*units.pc, L=50*units.kpc)
n_e.to('cm**-3')
Explanation: $\tau = 1 $ms, $z_{\rm FRB} = 1$, $z_{\rm halo} = 0.5$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
End of explanation
z_FRB = 1
z_halo = 0.5
tau = 1e-6 * units.s
n_e = frb_scatt.ne_from_tau_kolmogorov(tau, z_FRB, z_halo, 1*units.GHz, L0=1*units.pc, debug=True)
n_e.to('cm**-3')
Explanation: $\tau = 1 \mu$s, $z_{\rm FRB} = 2$, $z_{\rm halo} = 1$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
End of explanation
reload(frb_scatt)
theta = frb_scatt.theta_mist(1e-3*units.cm**-3, 1*units.GHz)
theta.to('arcsec')
Explanation: Mist
Formulated by Matt McQuinn in Prochaska+2019
$\theta$
End of explanation
reload(frb_scatt)
tau = frb_scatt.tau_mist(1e-3*units.cm**-3, 1*units.GHz, 0.4, 0.3)
tau
Explanation: $\tau$
End of explanation
reload(frb_scatt)
z_FRB = 0.4755
z_halo = 0.367
n_e = frb_scatt.ne_from_tau_mist(40e-6*units.s, z_FRB, z_halo,
1.3*units.GHz, L=50*units.kpc, fV=1e-3, R=0.1*units.pc,
verbose=True)
n_e
n_e = frb_scatt.ne_from_tau_mist(40e-6*units.s, z_FRB, z_halo,
1*units.GHz, verbose=True, R=0.005*units.pc)
n_e
Explanation: n_e from $\tau$
Test values in Prochaska+19
End of explanation
reload(frb_scatt)
z_FRB = 1.
z_halo = 0.5
n_e = frb_scatt.ne_from_tau_mist(1e-3*units.s, z_FRB, z_halo, 1*units.GHz,
L=50*units.kpc, R=1*units.pc, fV=1e-3, verbose=True)
n_e
Explanation: $\tau = 1 $ms, $z_{\rm FRB} = 1$, $z_{\rm halo} = 0.5$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
End of explanation
reload(frb_scatt)
z_FRB = 2
z_halo = 1
n_e = frb_scatt.ne_from_tau_mist(1e-6*units.s, z_FRB, z_halo, 1*units.GHz,
L=50*units.kpc, R=1*units.pc, fV=1e-3, verbose=True)
n_e
Explanation: $\tau = 1 \mu$s, $z_{\rm FRB} = 2$, $z_{\rm halo} = 1$, $\nu = 1$GHz, $L_0 = 1$pc, $\Delta L = 50$kpc
End of explanation
tau = 40e-6 * units.s
L0 = 1 * units.kpc
DL = 50*units.kpc
l0 = 0.23 * units.m
zL = 0.36
DS = 1.23 * units.Gpc
DL = 1.045 * units.Gpc
DLS = 0.262 * units.Gpc
#
n_e = 1.61 * L0**(1/3) * DL**(-1/2) * tau**(5/12) * (1+zL)**(17/12) * l0**(-22/12) * (DL*DLS/DS/constants.c)**(-5/12)
n_e.decompose()
(20-30-122*6)/2
0.011 * np.sqrt(1e-3/40e-6)
Explanation: Tests
JP's notes
End of explanation |
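One useful takeaway from the Kolmogorov expression in the Tests cell above is that the inferred density scales as $n_e \propto \tau^{5/12}$ with all other factors held fixed. A dimensionless sketch of that ratio for the two scattering times used in this notebook (this assumes only the exponent shown in the formula above):

```python
# n_e scales as tau**(5/12) in the Kolmogorov expression, other factors fixed.
tau_fast, tau_slow = 40e-6, 1e-3            # seconds (the two cases above)
ne_ratio = (tau_slow / tau_fast) ** (5 / 12)  # ~3.8x higher n_e for the slower tau
```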
8,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process Regression in Pytorch
Thomas Viehmann, tv@lernapparat.de
Modelled after GPFlow Regression notebook by James Hensman
Step1: Let's have a regression example
Step2: Creating the model
Not adapted to the data yet...
Step3: Maximum-A-Posteriori
One commonly used approach to model selection is to maximize the marginal log likelihood. This is the "gp" equivalent of a maximum-likelihood estimate.
Step4: Hamiltonian Monte Carlo
We can go more Bayesian by putting a prior on the parameters and doing Hamiltonian Monte Carlo to draw parameters.
Step5: Plotting simulated functions
(Note that the simulations are for the de-noised functions - i.e. without the noise contribution of the likelihood.)
Step6: Sparse Regression | Python Code:
from matplotlib import pyplot
%matplotlib inline
import IPython
import torch
import numpy
import sys, os
sys.path.append(os.path.join(os.getcwd(),'..'))
pyplot.style.use('ggplot')
import candlegp
import candlegp.training.hmc
Explanation: Gaussian Process Regression in Pytorch
Thomas Viehmann, tv@lernapparat.de
Modelled after GPFlow Regression notebook by James Hensman
End of explanation
N = 12
X = torch.rand(N,1).double()
Y = (torch.sin(12*X) + 0.6*torch.cos(25*X) + torch.randn(N,1).double()*0.1+3.0).squeeze(1)
pyplot.figure()
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Let's have a regression example
End of explanation
k = candlegp.kernels.Matern52(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m = candlegp.models.GPR(X, Y.unsqueeze(1), kern=k, mean_function=mean)
m.likelihood.variance.set(torch.tensor([0.01], dtype=torch.double))
m
xstar = torch.linspace(0,1,100).double()
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Creating the model
Not adapted to the data yet...
End of explanation
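For intuition about what a kernel like the one above encodes, here is a dependency-free sketch of a squared-exponential (RBF) covariance between two scalar inputs. This is a standalone illustration, not the candlegp API; the variance and lengthscale values mirror the ones set above:

```python
import math

def rbf_kernel(x1, x2, variance=1.0, lengthscale=0.3):
    """Squared-exponential covariance between two scalar inputs."""
    return variance * math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

k_same = rbf_kernel(0.5, 0.5)   # identical inputs -> the full variance
k_far = rbf_kernel(0.0, 1.0)    # distant inputs -> covariance decays toward 0
```

Nearby inputs get strongly correlated function values, distant ones become nearly independent; the lengthscale sets how fast that decay happens.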
opt = torch.optim.LBFGS(m.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.item())
m
xstar = torch.linspace(0,1,100).double()
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Maximum-A-Posteriori
One commonly used approach to model selection is to maximize the marginal log likelihood. This is the "gp" equivalent of a maximum-likelihood estimate.
End of explanation
k2 = candlegp.kernels.RBF(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean2 = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m2 = candlegp.models.GPR(X, Y.unsqueeze(1), kern=k2, mean_function=mean2)
m2.load_state_dict(m.state_dict())
dt = torch.double
m2.likelihood.variance.prior = candlegp.priors.Gamma(1.0,1.0, dtype=dt)
m2.kern.variance.prior = candlegp.priors.Gamma(1.0,1.0, dtype=dt)
m2.kern.lengthscales.prior = candlegp.priors.Gamma(1.0,1.0,dtype=dt)
m2.mean_function.A.prior = candlegp.priors.Gaussian(0.0,10.0, dtype=dt)
m2.mean_function.b.prior = candlegp.priors.Gaussian(0.0,10.0, dtype=dt)
print("likelihood with priors",m2().item())
m2
# res = candlegp.training.hmc.hmc_sample(m2,500,0.2,burn=50, thin=10)
res = candlegp.training.hmc.hmc_sample(m2,50,0.2,burn=50, thin=10)
pyplot.plot(res[0]); pyplot.title("likelihood");
for (n,p0),p,c in zip(m.named_parameters(),res[1:],['r','g','b','y','b']):
pyplot.plot(torch.stack(p).squeeze().numpy(), c=c, label=n)
pyplot.plot((0,len(p)),(p0.data.view(-1)[0],p0.data.view(-1)[0]), c=c)
pyplot.legend();
Explanation: Hamiltonian Monte Carlo
We can go more Bayesian by putting a prior on the parameters and doing Hamiltonian Monte Carlo to draw parameters.
End of explanation
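The HMC transition itself can be sketched in plain Python for a 1-D standard-normal target. This is illustrative only; `candlegp.training.hmc.hmc_sample` is what actually samples this model's parameters and priors:

```python
import math
import random

def hmc_step(q, grad_log_prob, step_size=0.1, n_leapfrog=20):
    """One HMC transition for a 1-D target; here the target is N(0, 1)."""
    p = random.gauss(0.0, 1.0)                 # draw a momentum
    q_new, p_new = q, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_prob(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new += step_size * grad_log_prob(q_new)
    q_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(q_new)
    # Metropolis accept/reject on total energy (potential + kinetic).
    def energy(qq, pp):
        return 0.5 * qq * qq + 0.5 * pp * pp   # -log N(qq; 0, 1) + kinetic
    if math.log(random.random()) < energy(q, p) - energy(q_new, p_new):
        return q_new
    return q

random.seed(0)
q, samples = 0.0, []
for _ in range(2000):
    q = hmc_step(q, lambda x: -x)              # gradient of log N(x; 0, 1)
    samples.append(q)
mean_est = sum(samples) / len(samples)
var_est = sum((s - mean_est) ** 2 for s in samples) / len(samples)
```

The sample mean and variance should land near 0 and 1, the moments of the target.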
xstar = torch.linspace(0,1,100).double()
mc_params = torch.stack([torch.cat(p, dim=0).view(-1) for p in res[1:]], dim=1)
allsims = []
for ps in mc_params[:50]:
for mp, p in zip(m2.parameters(), ps):
with torch.no_grad():
mp.set(p)
allsims.append(m2.predict_f_samples(xstar.unsqueeze(1), 1).squeeze(0).t())
allsims = torch.cat(allsims, dim=0)
pyplot.plot(xstar.numpy(),allsims.data.numpy().T, 'b', lw=2, alpha=0.1)
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
Explanation: Plotting simulated functions
(Note that the simulations are for the de-noised functions - i.e. without the noise contribution of the likelihood.)
End of explanation
k3 = candlegp.kernels.RBF(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean3 = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m3 = candlegp.models.SGPR(X, Y.unsqueeze(1), k3, X[:7].clone(), mean_function=mean3)
m3.likelihood.variance.set(torch.tensor([0.01], dtype=torch.double))
m3
opt = torch.optim.LBFGS(m3.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m3()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m3()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.item())
m3
xstar = torch.linspace(0,1,100).double()
mu, var = m3.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
pyplot.plot(m3.Z.data.numpy(), torch.zeros(m3.Z.size(0)).numpy(),'o')
Explanation: Sparse Regression
End of explanation |
8,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Todo
Step1: Rel risk. P(gender|misaligned)/P(gender)
What proportion of the misaligned dataset is about women?
For each gender, what proportion of each misalignment group do they represent? | Python Code:
import pandas
from collections import defaultdict

bigdf = pandas.read_csv('/media/notconfusing/9d9b45fc-55f7-428c-a228-1c4c4a1b728c/home/maximilianklein/snapshot_data/2016-01-03/gender-index-data-2016-01-03.csv')
gender_qid_df = bigdf[['qid','gender']]
def map_gender(x):
if isinstance(x,float):
return 'no gender'
else:
gen = x.split('|')[0]
if gen == 'Q6581072':
return 'female'
elif gen == 'Q6581097':
return 'male'
else:
return 'nonbin'
gender_qid_df['gender'] = gender_qid_df['gender'].apply(map_gender)
def qid2enname(x):
try:
return qidnames[x]
except KeyError:
return None
gender_qid_df['enname'] = gender_qid_df['qid'].apply(qid2enname)
enname_id = pandas.read_csv('/home/notconfusing/workspace/wikidumpparse/wikidump/mediawiki-utilities/enname_id.txt',sep='\t',names=['enname','pageid'])
gender_page_id = pandas.merge(gender_qid_df, enname_id, how='inner',on='enname')
pah_gender = pandas.merge(pah, gender_page_id, how='left', on='pageid')
pah_gender
len(pah), len(gender_page_id), len(pah_gender)
Explanation: Todo:
+ map gender-enwiki-page-id
+ dissonance class priors
+ posteriors by gender
+ posterior for no gender.
End of explanation
pah_gender['gender'] = pah_gender['gender'].fillna('nonbio')
SE = pah_gender[(pah_gender['dissonance'] == 'Moderate negative') | (pah_gender['dissonance'] == 'High negative')]
NI = pah_gender[(pah_gender['dissonance'] == 'Moderate positive') | (pah_gender['dissonance'] == 'High positive')]
rel_risk = defaultdict(dict)
for risk, risk_name in [(SE,'Spent Effort'), (NI,'Needs Improvement')]:
for gender in ['female','male','nonbin','nonbio']:
gen_mis = len(risk[risk['gender'] == gender])
p_gen_mis = gen_mis/len(risk) #p(gender|misalignment)
p_gen = len(pah_gender[pah_gender['gender'] == gender]) / len(pah_gender) #p(gender)
print(p_gen_mis, p_gen)
rel_risk[gender][risk_name] = p_gen_mis / p_gen  # relative risk
java_min_int = -2147483648
allrecs = pandas.read_csv('/media/notconfusing/9d9b45fc-55f7-428c-a228-1c4c4a1b728c/home/maximilianklein/snapshot_data/2016-01-03/gender-index-data-2016-01-03.csv',na_values=[java_min_int])
def sum_column(q_str):
if type(q_str) is str:
qs = q_str.split('|')
return len(qs) # because the format will always end with a |
for col in ['site_links']:
allrecs[col] = allrecs[col].apply(sum_column)
allrecs['site_links'].head(20)
allrecs['gender'] = allrecs['gender'].apply(map_gender)
sl_risk = defaultdict(dict)
sl_risk['nonbio']['Sitelink Ratio'] = 1
for gender in ['female','male','nonbin']:
gend_df = allrecs[allrecs['gender']==gender]
gend_df_size = len(gend_df)
avg_sl = (gend_df['site_links'].sum() / gend_df_size) / 2.6
sl_risk[gender]['Sitelink Ratio'] = avg_sl
sl_risk_df = pandas.DataFrame.from_dict(sl_risk, orient='index')
rel_risk_df = pandas.DataFrame.from_dict(rel_risk,orient="index")
risk_df = pandas.DataFrame.join(sl_risk_df,rel_risk_df)
risk_df.index = ['Female','Male','Non-binary','Non-biography']
print(risk_df.to_latex(columns = ['Needs Improvement','Spent Effort', 'Sitelink Ratio'],float_format=lambda n:'%.2f' %n))
Explanation: Rel risk. P(gender|misaligned)/P(gender)
What proportion of the misaligned dataset is about women?
For each gender, what proportion of each misalignment group do they represent?
End of explanation |
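The relative-risk formula above, P(gender|misaligned)/P(gender), can be sketched on toy counts. The numbers here are hypothetical, purely for illustration, not from the dataset:

```python
# Toy counts (hypothetical): a "misaligned" subset and the full dataset.
misaligned = {'female': 30, 'male': 60, 'nonbin': 10}
overall = {'female': 200, 'male': 700, 'nonbin': 100}

n_mis = sum(misaligned.values())
n_all = sum(overall.values())

# Relative risk: P(gender | misaligned) / P(gender).
rel_risk_toy = {g: (misaligned[g] / n_mis) / (overall[g] / n_all)
                for g in misaligned}
```

A value above 1 means the group is over-represented in the misaligned subset relative to its share of the whole dataset.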
8,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data SITREP
redshiftzero, January 26, 2017
We've been collecting traces from crawling onion services, this notebook contains a brief SITREP of the status of the data collection.
Step1: Number of Examples
Step2: We have currently got a sample of
Step3: examples.
Examples collected per day
This was a bit stop and start as you can see
Step4: Sanity check the sorter was last run recently
Step5: crawls
Step6: There have been
Step8: crawls (the crawlers have clearly failed and restarted a crazy number of times).
Number of unique onion services scraped
Step9: The many days of very low numbers of unique onion services were when the crawlers were mostly getting traces to SecureDrops
Number of HS total | Python Code:
import os
import pandas as pd
import sqlalchemy
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
with open(os.environ["PGPASS"], "rb") as f:
content = f.readline().decode("utf-8").replace("\n", "").split(":")
engine = sqlalchemy.create_engine("postgresql://{user}:{passwd}@{host}/{db}".format(user=content[3],
passwd=content[4],
host=content[0],
db=content[2]))
Explanation: Data SITREP
redshiftzero, January 26, 2017
We've been collecting traces from crawling onion services, this notebook contains a brief SITREP of the status of the data collection.
End of explanation
df_examples = pd.read_sql("SELECT * FROM raw.frontpage_examples", con=engine)
Explanation: Number of Examples
End of explanation
len(df_examples)
Explanation: We have currently got a sample of:
End of explanation
daily = df_examples.set_index('t_scrape').groupby(pd.TimeGrouper(freq='D'))['exampleid'].count()
ax = daily.plot(kind='bar', figsize=(24,6))
ax.set_xlabel('Date of scrape')
ax.set_ylabel('Number of onion services scraped')
ax.grid(False)
ax.set_frame_on(False)
# Prettify ticks, probably a smarter way to do this but sometimes I'm not very smart
xtl=[item.get_text()[:10] for item in ax.get_xticklabels()]
_=ax.set_xticklabels(xtl)
Explanation: examples.
Examples collected per day
This was a bit stop and start as you can see:
End of explanation
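The same daily roll-up can be sketched without pandas, using only the standard library (the timestamps below are toy values, purely illustrative):

```python
from collections import Counter
from datetime import datetime

scrapes = [
    datetime(2017, 1, 24, 9, 15),
    datetime(2017, 1, 24, 18, 2),
    datetime(2017, 1, 25, 3, 40),
]
# Group by calendar day, mirroring groupby(pd.TimeGrouper(freq='D')).count().
daily_counts = Counter(t.date() for t in scrapes)
```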
result = engine.execute('SELECT MAX(t_sort) FROM raw.hs_history')
for row in result:
print(row)
Explanation: Sanity check the sorter was last run recently:
End of explanation
df_crawls = pd.read_sql("SELECT * FROM raw.crawls", con=engine)
Explanation: crawls
End of explanation
len(df_crawls)
Explanation: There have been:
End of explanation
hs_query = """SELECT t1.t_scrape::date, count(distinct t1.hsid)
FROM raw.frontpage_examples t1
GROUP BY 1
ORDER BY 1"""
df_hs = pd.read_sql(hs_query, con=engine)
df_hs.set_index('t_scrape', inplace=True)
ax = df_hs.plot(kind='bar', figsize=(20,6))
ax.set_xlabel('Date of scrape')
ax.set_ylabel('Number of UNIQUE onion services scraped')
ax.grid(False)
ax.set_frame_on(False)
Explanation: crawls (the crawlers have clearly failed and restarted a crazy number of times).
Number of unique onion services scraped
End of explanation
pd.read_sql("SELECT count(distinct hsid) FROM raw.frontpage_examples", con=engine)
Explanation: The many days of very low numbers of unique onion services were when the crawlers were mostly getting traces to SecureDrops
Number of HS total
End of explanation |
8,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
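A timestamp suffix of this kind can be sketched as follows (the bucket name here is a placeholder, not one used by the tutorial):

```python
from datetime import datetime

# A per-session timestamp to keep shared resource names unique.
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = "example-bucket"          # placeholder name
unique_name = f"{BUCKET_NAME}-{TIMESTAMP}"
```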
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Set up the following constants for Vertex
Step12: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step13: Container (Docker) image
Next, we will set the Docker container images for prediction
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available
Step14: Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to train a custom IMDB Movie Reviews model locally, and then deploy the model to the cloud.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Prediction Service for serving.
Step16: Train a model locally
In this tutorial, you train an IMDB Movie Reviews model locally.
Set location to store trained model
You set the variable MODEL_DIR for where in your Cloud Storage bucket to save the model in TensorFlow SavedModel format.
Also, you create a local folder for the training script.
Step17: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step18: Train the model
Step19: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step20: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.
When you trained the model, you needed to set a fixed input length for your text. For forward feeding batches, the padded_batch() property of the corresponding tf.dataset was set to pad each input sequence into the same shape for a batch.
For the test data, you also need to set the padded_batch() property accordingly.
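What padded_batch() does to one batch can be sketched in plain Python (illustrative only, not the tf.data API):

```python
def pad_batch(sequences, pad_value=0):
    """Pad each sequence to the length of the longest one in the batch."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[3, 7, 1], [5], [2, 2]])
```

Every sequence in the batch now has the same length, so the batch can be fed to the model as one rectangular tensor.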
Step21: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step22: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work?
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts
Step23: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters
Step24: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter
Step25: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps
Step26: Now get the unique identifier for the Endpoint resource you created.
Step27: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step28: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step29: Make a online prediction request
Now do an online prediction to your deployed model.
Prepare the request content
Since the dataset is a tf.dataset, which acts as a generator, we must use it as an iterator to access the data items in the test data. We do the following to get a single data item from the test data
Step30: Send the prediction request
Ok, now you have a test data item. Use this helper function predict_data, which takes the following parameters
Step31: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step32: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: Local text binary classification model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to deploy a locally trained custom text binary classification model for online prediction.
Dataset
The dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts whether a review is positive or negative in sentiment.
Objective
In this notebook, you create a custom model locally in the notebook, then learn to deploy the locally trained model to Vertex, and then do a prediction on the deployed model. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a model locally.
Train the model locally.
View the model evaluation.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
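As a side note, the timestamp-suffix pattern described above can be sketched in plain Python (the helper name timestamped_name is ours, not part of the tutorial):

```python
from datetime import datetime

def timestamped_name(prefix: str, when: datetime) -> str:
    """Append a YYYYMMDDHHMMSS suffix so concurrent users get unique resource names."""
    return "{}-{}".format(prefix, when.strftime("%Y%m%d%H%M%S"))

# With a fixed datetime the suffix is deterministic:
name = timestamped_name("imdb_endpoint", datetime(2021, 5, 4, 13, 30, 59))
print(name)  # imdb_endpoint-20210504133059
```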
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
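Since bucket names must be globally unique and follow Cloud Storage naming rules, a rough sanity check can be sketched as below (the rules are simplified, and is_plausible_bucket_name is our own helper, not part of the tutorial):

```python
import re

# Simplified Cloud Storage bucket naming rules: lowercase letters, digits,
# dashes, underscores, dots; 3-63 characters; starts and ends alphanumeric.
# The official documentation has the full rule set.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$")

def is_plausible_bucket_name(uri: str) -> bool:
    """Return True if `uri` looks like a valid gs:// bucket path."""
    if not uri.startswith("gs://"):
        return False
    return bool(_BUCKET_RE.match(uri[len("gs://"):]))

print(is_plausible_bucket_name("gs://my-project-aip-20210504133059"))  # True
print(is_plausible_bucket_name("gs://Bad_Bucket!"))                    # False
```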
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
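The PARENT path above follows the projects/{project}/locations/{region} pattern, and individual resources hang off it as {parent}/{collection}/{id}. A minimal sketch of that assembly (the helper names are ours):

```python
def vertex_parent(project: str, region: str) -> str:
    """Build the Vertex location root path used as `parent` in API calls."""
    return "projects/{}/locations/{}".format(project, region)

def resource_path(parent: str, collection: str, resource_id: str) -> str:
    """Append a collection and id to get a fully qualified resource name."""
    return "{}/{}/{}".format(parent, collection, resource_id)

parent = vertex_parent("my-project", "us-central1")
print(resource_path(parent, "endpoints", "5678"))
# projects/my-project/locations/us-central1/endpoints/5678
```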
if os.getenv("IS_TESTING_DEPLOY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPLOY_GPU")),
    )
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU)
Explanation: Container (Docker) image
Next, we will set the Docker container images for prediction
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available:
TensorFlow 1.15
gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest
XGBoost
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest
Scikit-learn
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest
For the latest list, see Pre-built containers for prediction
End of explanation
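The image-naming scheme in the list above can be captured in a small helper. This is a sketch of the pattern only — prediction_image is our own name, and the official list is authoritative for which images actually exist:

```python
def prediction_image(framework: str, version: str, gpu: bool) -> str:
    """Reproduce the naming pattern of the pre-built prediction images above.

    framework: 'tf', 'xgboost', or 'sklearn'; version: e.g. '2-1' for 2.1.
    """
    if framework == "tf" and version.startswith("2"):
        base = "tf2-gpu" if gpu else "tf2-cpu"
    elif framework == "tf":
        base = "tf-gpu" if gpu else "tf-cpu"
    else:
        # The XGBoost and Scikit-learn images in the list are CPU-only.
        base = "{}-cpu".format(framework)
    return "gcr.io/cloud-aiplatform/prediction/{}.{}:latest".format(base, version)

print(prediction_image("tf", "2-1", gpu=False))
# gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
```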
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
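The per-vCPU memory figures above imply the total memory of a machine type; a small sketch using only the numbers quoted in this section (machine_memory_gb is our own helper):

```python
# Approximate memory per vCPU for the n1 families quoted above.
GB_PER_VCPU = {"n1-standard": 3.75, "n1-highmem": 6.5, "n1-highcpu": 0.9}

def machine_memory_gb(machine_type: str) -> float:
    """'n1-standard-4' -> 3.75 GB/vCPU * 4 vCPUs = 15.0 GB."""
    family, vcpus = machine_type.rsplit("-", 1)
    return GB_PER_VCPU[family] * int(vcpus)

print(machine_memory_gb("n1-standard-4"))  # 15.0
```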
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to train a custom IMDB Movie Reviews model locally, and then deploy the model to the cloud.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
MODEL_DIR = BUCKET_NAME + "/imdb"
model_path_to_deploy = MODEL_DIR
! rm -rf custom
! mkdir custom
! mkdir custom/trainer
Explanation: Train a model locally
In this tutorial, you train an IMDB Movie Reviews model locally.
Set location to store trained model
You set the variable MODEL_DIR for where in your Cloud Storage bucket to save the model in TensorFlow SavedModel format.
Also, you create a local folder for the training script.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for IMDB
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=1e-4, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets():
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
padded_shapes = ([None],())
return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder
train_dataset, encoder = make_datasets()
# Build the Keras model
def build_and_compile_rnn_model(encoder):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
    # The final Dense layer applies a sigmoid, so the loss receives
    # probabilities rather than logits.
    model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
                  optimizer=tf.keras.optimizers.Adam(args.lr),
                  metrics=['accuracy'])
return model
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_rnn_model(encoder)
# Train the model
model.fit(train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail; it's there for you to browse. In summary:
Gets the directory in which to save the model artifacts from the command line (--model-dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads IMDB Movie Reviews dataset from TF Datasets (tfds).
Builds a simple RNN model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
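The padded_batch() call in make_datasets() pads every review in a batch to the length of the longest one. The effect can be sketched in plain Python (pad_batch is our own illustrative helper, not part of the script):

```python
def pad_batch(sequences, pad_value=0):
    """Pad variable-length integer sequences to the longest length in the
    batch, mimicking what tf.data's padded_batch() does for the text inputs."""
    max_len = max(len(s) for s in sequences)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[3, 7, 1], [9], [4, 4]])
print(batch)  # [[3, 7, 1], [9, 0, 0], [4, 4, 0]]
```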
! python custom/trainer/task.py --epochs=10 --model-dir=$MODEL_DIR
Explanation: Train the model
End of explanation
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
encoder = info.features["text"].encoder
BATCH_SIZE = 64
padded_shapes = ([None], ())
test_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.
When you trained the model, you needed to set a fixed input length for your text. For forward feeding batches, the padded_batch() property of the corresponding tf.dataset was set to pad each input sequence into the same shape for a batch.
For the test data, you also need to set the padded_batch() property accordingly.
End of explanation
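The encoder returned by tfds converts text into integer subword ids. As a toy illustration only — the vocabulary and greedy longest-match scheme below are invented, and the real SubwordTextEncoder is more sophisticated:

```python
def encode(text, vocab):
    """Greedy longest-match subword encoding over a toy vocabulary.
    Illustrates the idea of mapping text to integer ids, nothing more."""
    ids, i = [], 0
    subwords = sorted(vocab, key=len, reverse=True)  # try longest pieces first
    while i < len(text):
        for sw in subwords:
            if text.startswith(sw, i):
                ids.append(vocab[sw])
                i += len(sw)
                break
        else:
            i += 1  # skip characters not covered by the toy vocabulary
    return ids

vocab = {"mov": 1, "ie": 2, "good": 3, " ": 4}
print(encode("good movie", vocab))  # [3, 4, 1, 2]
```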
model.evaluate(test_dataset)
Explanation: Perform the model evaluation
Now evaluate how well the model did.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
Explanation: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work?
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function indicating that you are using an EagerTensor, which is not supported.
Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
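To make the decode step concrete, here is a sketch of what a serving function's preprocessing might do with an HTTP request body. The field names 'instances' and 'b64' follow the common Vertex request shape, but this is an illustration, not the actual server code:

```python
import base64
import json

def decode_request_body(body: bytes):
    """The HTTP body arrives as bytes; extract the instances and decode any
    base64-encoded payloads before they would be handed to the model."""
    request = json.loads(body)
    decoded = []
    for instance in request["instances"]:
        if isinstance(instance, dict) and "b64" in instance:
            decoded.append(base64.b64decode(instance["b64"]))
        else:
            decoded.append(instance)
    return decoded

body = json.dumps(
    {"instances": [{"b64": base64.b64encode(b"a review").decode()}]}
).encode()
print(decode_request_body(body))  # [b'a review']
```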
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model("imdb-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy)
Explanation: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters:
display_name: A human readable name for the Endpoint service.
image_uri: The container image for the model deployment.
model_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.
The helper function calls the Model client service's method upload_model, which takes the following parameters:
parent: The Vertex location root path for Dataset, Model and Endpoint resources.
model: The specification for the Vertex Model resource instance.
Let's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:
display_name: A human readable name for the Model resource.
metadata_schema_uri: Since your model was built without an Vertex Dataset resource, you will leave this blank ('').
artifact_uri: The Cloud Storage path where the model is stored in SavedModel format.
container_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. If the variable DEPLOY_GPU you set earlier is not None, a GPU-enabled container image is used; otherwise only a CPU is allocated.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.
End of explanation
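The long-running-operation pattern above (issue the call, then block on response.result()) amounts to polling until the operation reports it is done. A toy sketch with a stand-in operation object — FakeOperation and wait_for are ours, and the real client's result(timeout=...) does this waiting for you:

```python
import time

class FakeOperation:
    """Stand-in for a long-running operation handle, for illustration only."""
    def __init__(self, ticks_until_done, value):
        self._ticks, self._value = ticks_until_done, value
    def done(self):
        self._ticks -= 1
        return self._ticks <= 0
    def result(self):
        return self._value

def wait_for(op, poll_seconds=0.0, max_polls=100):
    """Poll until the operation completes, or give up after max_polls."""
    for _ in range(max_polls):
        if op.done():
            return op.result()
        time.sleep(poll_seconds)
    raise TimeoutError("operation did not finish in time")

print(wait_for(FakeOperation(3, "model-123")))  # model-123
```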
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
Explanation: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
This helper function calls the Vertex Model client service's method get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
End of explanation
ENDPOINT_NAME = "imdb_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specify.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scalable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed (and below which instances are de-provisioned), and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
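The three scaling options above can be distinguished from the (MIN_NODES, MAX_NODES) pair alone; a sketch (validate_scaling and its return labels are ours, not part of the API):

```python
def validate_scaling(min_nodes: int, max_nodes: int) -> str:
    """Classify a (min_replica_count, max_replica_count) pair per the
    three scaling options described above."""
    if min_nodes < 1 or max_nodes < min_nodes:
        raise ValueError("need 1 <= min_nodes <= max_nodes")
    if min_nodes == max_nodes == 1:
        return "single"
    if min_nodes == max_nodes:
        return "manual"
    return "auto"

print(validate_scaling(1, 1))  # single
print(validate_scaling(2, 2))  # manual
print(validate_scaling(1, 4))  # auto
```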
DEPLOYED_NAME = "imdb_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled while debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a bit confusing, so let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but have it get only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long-running operation response. We will wait synchronously for the operation to complete by calling response.result(), which blocks until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
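As a concrete sketch of the traffic_split values discussed above (the existing model id below is a placeholder, not a real deployment id):

```python
# Only one model on the endpoint: "0" (this request's model) takes all traffic.
single_model_split = {"0": 100}

# Canary rollout: the new model ("0") gets 10%, an existing model keeps 90%.
existing_model_id = "1234567890"  # placeholder for an already-deployed model id
canary_split = {"0": 10, existing_model_id: 90}

for split in (single_model_split, canary_split):
    assert sum(split.values()) == 100  # the percents must add up to 100
    print(split)
```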
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
test_dataset.take(1)
for data in test_dataset:
print(data)
break
test_item = data[0].numpy()
Explanation: Make a online prediction request
Now do a online prediction to your deployed model.
Prepare the request content
Since the dataset is a tf.data.Dataset, which acts as a generator, we must use it as an iterator to access the data items in the test data. We do the following to get a single data item from the test data:
Set the property for the number of batches to draw per iteration to one using the method take(1).
Iterate once through the test data -- i.e., we do a break within the for loop.
In the single iteration, we save the data item which is in the form of a tuple.
The data item will be the first element of the tuple, which you then convert from a tensor to a numpy array -- data[0].numpy().
End of explanation
def predict_data(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: data.tolist()}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_data(test_item, endpoint_id, None)
Explanation: Send the prediction request
Ok, now you have a test data item. Use this helper function predict_data, which takes the following parameters:
data: The test data item, a 64-padded 1D numpy array.
endpoint: The Vertex AI fully qualified identifier for the endpoint where the model was deployed.
parameters_dict: Additional parameters for serving.
This function uses the prediction client service and calls the predict method with the following parameters:
endpoint: The Vertex AI fully qualified identifier for the endpoint where the model was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional parameters for serving.
To pass the test data to the prediction service, you must package it for transmission to the serving binary as follows:
1. Convert the data item from a 1D numpy array to a 1D Python list.
2. Convert the prediction request to a serialized Google protobuf (`json_format.ParseDict()`)
Each instance in the prediction request is a dictionary entry of the form:
{input_name: content}
input_name: the name of the input layer of the underlying model.
content: The data item as a 1D Python list.
Since the predict() service can take multiple data items (instances), you will send your single data item as a list of one data item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() service.
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
predictions -- the predicted binary sentiment, between 0 (negative) and 1 (positive).
End of explanation
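The request packaging described above can be sketched without calling the prediction service; serving_input and the token values below are placeholders rather than values read from the deployed model.

```python
import json

serving_input = "input_1"        # placeholder input-layer name
test_item = [12, 284, 7, 0, 0]   # stand-in for data.tolist() (the padded review)

# Each instance is {input_name: content}; the request is a list of instances.
instances_list = [{serving_input: test_item}]

# This structure is plain JSON, which is exactly what json_format.ParseDict
# converts into the protobuf Value passed to predict().
payload = json.dumps(instances_list)
print(payload)
```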
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
8,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FAQ
This document will address frequently asked questions not addressed in other pages of the documentation.
How do I install cobrapy?
Please see the INSTALL.rst file.
How do I cite cobrapy?
Please cite the 2013 publication
Step1: The Model.repair function will rebuild the necessary indexes
Step2: How do I delete a gene?
That depends on what precisely you mean by delete a gene.
If you want to simulate the model with a gene knockout, use the cobra.manipulation.knock_out_model_genes function within a context. The effects of this function are reversed when exiting a context.
Step3: If you want to actually remove all traces of a gene from a model, this is more difficult because this will require changing all the gene_reaction_rule strings for reactions involving the gene.
How do I change the reversibility of a Reaction?
Reaction.reversibility is a property in cobra which is computed when it is requested from the lower and upper bounds.
Step4: Trying to set it directly will result in an error or warning
Step5: The way to change the reversibility is to change the bounds to make the reaction irreversible.
Step6: How do I generate an LP file from a COBRA model?
For optlang based solvers
With optlang solvers, the LP formulation of a model is obtained by its string representation. All solvers behave the same way. | Python Code:
from cobra.io import load_model
model = load_model("iYS1720")
for metabolite in model.metabolites:
metabolite.id = f"test_{metabolite.id}"
try:
model.metabolites.get_by_id(model.metabolites[0].id)
except KeyError as e:
print(repr(e))
Explanation: FAQ
This document will address frequently asked questions not addressed in other pages of the documentation.
How do I install cobrapy?
Please see the INSTALL.rst file.
How do I cite cobrapy?
Please cite the 2013 publication: 10.1186/1752-0509-7-74
How do I rename reactions or metabolites?
TL;DR Use Model.repair afterwards
When renaming metabolites or reactions, there are issues because cobra indexes based on IDs, which can cause errors. For example:
End of explanation
model.repair()
model.metabolites.get_by_id(model.metabolites[0].id)
Explanation: The Model.repair function will rebuild the necessary indexes
End of explanation
model = load_model("iYS1720")
PGI = model.reactions.get_by_id("PGI")
print("bounds before knockout:", (PGI.lower_bound, PGI.upper_bound))
from cobra.manipulation import knock_out_model_genes
knock_out_model_genes(model, ["STM4221"])
print("bounds after knockouts", (PGI.lower_bound, PGI.upper_bound))
Explanation: How do I delete a gene?
That depends on what precisely you mean by delete a gene.
If you want to simulate the model with a gene knockout, use the cobra.manipulation.knock_out_model_genes function within a context. The effects of this function are reversed when exiting a context.
End of explanation
model = load_model("iYS1720")
model.reactions.get_by_id("PGI").reversibility
Explanation: If you want to actually remove all traces of a gene from a model, this is more difficult because this will require changing all the gene_reaction_rule strings for reactions involving the gene.
How do I change the reversibility of a Reaction?
Reaction.reversibility is a property in cobra which is computed when it is requested from the lower and upper bounds.
End of explanation
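A plain-Python sketch of that rule (mirroring the behaviour described here, not the cobra source itself): a reaction is reversible only when its bounds allow flux in both directions.

```python
def is_reversible(lower_bound, upper_bound):
    # Reversible means the reaction can carry both negative and positive flux.
    return lower_bound < 0 < upper_bound

print(is_reversible(-1000, 1000))  # a typical reversible reaction
print(is_reversible(10, 1000))     # forced forward-only, hence irreversible
```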
try:
model.reactions.get_by_id("PGI").reversibility = False
except Exception as e:
print(repr(e))
Explanation: Trying to set it directly will result in an error or warning:
End of explanation
model.reactions.get_by_id("PGI").lower_bound = 10
model.reactions.get_by_id("PGI").reversibility
Explanation: The way to change the reversibility is to change the bounds to make the reaction irreversible.
End of explanation
with open('test.lp', 'w') as out:
out.write(str(model.solver))
Explanation: How do I generate an LP file from a COBRA model?
For optlang based solvers
With optlang solvers, the LP formulation of a model is obtained by its string representation. All solvers behave the same way.
End of explanation |
8,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/mean_variance.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: <img src="image/weight_biases.png" style="height
Step8: <img src="image/learn_rate_tune.png" style="height
Step9: Test
Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/mean_variance.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
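One way to fill in the normalize_grayscale TODO, using the formula above with a=0.1, b=0.9, X_min=0, X_max=255 (a reference sketch, not the official solution):

```python
import numpy as np

def normalize_grayscale(image_data, a=0.1, b=0.9):
    """Min-Max scale grayscale pixel values (0-255) into [a, b]."""
    x_min, x_max = 0.0, 255.0
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)

scaled = normalize_grayscale(np.array([0, 128, 255]))
print(scaled)  # the endpoints land exactly on 0.1 and 0.9
```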
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
features_count = 784
labels_count = 10
# TODO: Set the features and labels tensors
# features =
# labels =
# TODO: Set the weights and biases tensors
# weights =
# biases =
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10,), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.initialize_all_variables()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: <img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%">
Problem 2
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
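The tensor shapes Problem 2 asks for can be sanity-checked without TensorFlow; here is a NumPy sketch (the batch size of 5 is arbitrary, not part of the lab):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((5, 784)).astype(np.float32)                 # batch of 5 images
weights = rng.normal(0.0, 0.1, size=(784, 10)).astype(np.float32)  # ~truncated normal
biases = np.zeros(10, dtype=np.float32)                            # all zeros, shape (10,)

logits = features @ weights + biases   # the WX + b linear function
print(logits.shape)  # (5, 10): one score per class for each image
```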
# TODO: Find the best parameters for each configuration
# epochs =
# batch_size =
# learning_rate =
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/learn_rate_tune.png" style="height: 60%;width: 60%">
Problem 3
Below are 3 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Batch Size:
* 2000
* 1000
* 500
* 300
* 50
* Learning Rate: 0.01
Configuration 2
* Epochs: 1
* Batch Size: 100
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 3
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Batch Size: 100
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
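The batch arithmetic the training loop relies on can be checked on its own; 142500 is the training-set size after the 5% validation split, and 2000 is one of the Configuration 1 batch sizes:

```python
import math

n_examples, batch_size = 142500, 2000
batch_count = int(math.ceil(n_examples / batch_size))

# Python slicing tolerates overshooting the end, so the final batch is
# simply shorter when batch_size does not divide the dataset evenly.
sizes = [len(range(i * batch_size, min((i + 1) * batch_size, n_examples)))
         for i in range(batch_count)]
print(batch_count, sizes[0], sizes[-1])  # 72 2000 500
```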
# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
# epochs =
# batch_size =
# learning_rate =
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
8,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr5', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR5
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
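This property has cardinality 0.N, which is why the cell header reads "PROPERTY VALUE(S)": zero entries are acceptable, and each entry must come from the listed choices. The stand-alone sketch below is hypothetical (it does not use the notebook's `DOC` object) and just illustrates what a 0.N cardinality check amounts to:

```python
# Hypothetical sketch of a 0.N cardinality check: zero or more values are
# allowed, each drawn from the listed choices (or given as "Other: ..." text).
PASSIVE_TRACER_CHOICES = {"Ideal age", "CFC 11", "CFC 12", "SF6"}

def check_0_to_n(values):
    """Return the values unchanged if every entry is a valid choice."""
    bad = [v for v in values
           if v not in PASSIVE_TRACER_CHOICES and not v.startswith("Other: ")]
    if bad:
        raise ValueError(f"invalid choices: {bad}")
    return list(values)

print(check_0_to_n([]))                  # empty is fine for cardinality 0.N
print(check_0_to_n(["CFC 11", "SF6"]))   # several tracers may be advected
```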
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
8,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modifying Rates
Sometimes we want to change the nuclei involved in rates to simplify our network. Currently,
pynucastro supports changing the products. Here's an example.
Step1: We want to model ${}^{12}\mathrm{C} + {}^{12}\mathrm{C}$ reactions. There are 3 rates involved.
Step2: The rate ${}^{12}\mathrm{C}({}^{12}\mathrm{C},n){}^{23}\mathrm{Mg}$ is quickly followed by ${}^{23}\mathrm{Mg}(n,\gamma){}^{24}\mathrm{Mg}$, so we want to modify that rate sequence to just be ${}^{12}\mathrm{C}({}^{12}\mathrm{C},\gamma){}^{24}\mathrm{Mg}$
Step3: This has the Q value
Step4: Now we modify it
Step5: and we see that the Q value has been updated to reflect the new endpoint
Step6: Now let's build a network that includes the nuclei involved in our carbon burning. We'll start by leaving off the ${}^{23}\mathrm{Mg}$
Step7: Now we add in our modified rate | Python Code:
import pynucastro as pyna
reaclib_library = pyna.ReacLibLibrary()
Explanation: Modifying Rates
Sometimes we want to change the nuclei involved in rates to simplify our network. Currently,
pynucastro supports changing the products. Here's an example.
End of explanation
rate_filter = pyna.RateFilter(reactants=["c12", "c12"])
mylib = reaclib_library.filter(rate_filter)
mylib
Explanation: We want to model ${}^{12}\mathrm{C} + {}^{12}\mathrm{C}$ reactions. There are 3 rates involved.
End of explanation
r = mylib.get_rate("c12 + c12 --> n + mg23 <cf88_reaclib__reverse>")
r
Explanation: The rate ${}^{12}\mathrm{C}({}^{12}\mathrm{C},n){}^{23}\mathrm{Mg}$ is quickly followed by ${}^{23}\mathrm{Mg}(n,\gamma){}^{24}\mathrm{Mg}$, so we want to modify that rate sequence to just be ${}^{12}\mathrm{C}({}^{12}\mathrm{C},\gamma){}^{24}\mathrm{Mg}$
End of explanation
r.Q
Explanation: This has the Q value:
End of explanation
r.modify_products("mg24")
r
Explanation: Now we modify it
End of explanation
r.Q
Explanation: and we see that the Q value has been updated to reflect the new endpoint
End of explanation
mylib2 = reaclib_library.linking_nuclei(["p", "he4", "c12", "o16", "ne20", "na23", "mg24"])
Explanation: Now let's build a network that includes the nuclei involved in our carbon burning. We'll start by leaving off the ${}^{23}\mathrm{Mg}$
End of explanation
mylib2 += pyna.Library(rates=[r])
mylib2
rc = pyna.RateCollection(libraries=[mylib2])
rc.plot(rotated=True, curved_edges=True, hide_xalpha=True)
Explanation: Now we add in our modified rate
End of explanation |
8,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
_ doesn't look like much, but as part of a name in Python it has a surprising amount of different meanings.
Make names more readable
We all know that we should use good names. This often makes it necessary to use more than one word to describe the thing. cheese grater is arguably a good name for a thing but it consists of two words in english[^1]. Like most programming languages Python tokenizes source code along white space. So, for the sake of readability the system needs to be tricked into believing that several words are one while still providing some kind of visual separation for the human reader. Born were constructs like CheeseGrater or cheese_grater - a.k.a. CamelCase and snake_case.
[^1]
Step1: There are often more elegant ways to deal with this depending on context, but sometimes this is still the easiest way to clearly communicate this.
To take one example and offer some unasked advice we could think of a primitive home baked plugin system that simply calls the client function with positional arguments. This could be improved by always using keyword arguments like this
Step2: This way the client function can collect whatever arrives in an un(derscore)named catch-all dict also having more flexibility regarding future API changes.
What might be even better though is if the plugin system inspects the client function and calls it only with the requested parameters making this particular underscore crutch completely unnecessary[^2].
[^2]
Step3: Also not uncommon
Step4: the exception
Step5: This has also one practical implication when used together with the star import
Step6: This is veering off the original topic a bit, but I just want to mention that whatever you do - the original object a builtin points to is never lost - just shadowed. When a module is initialized, the namespace of the builtins[^3] module is merged into the module. The objects can still be retrieved from builtins whenever necessary
Step7: __ as attribute prefix
Step8: Up to this point there is nothing unusual about this. When I try to access the attribute from outside though, the behaviour is different from when it is accessed from inside the object, although a and self are the exact same object (as can be seen from the printed id)
Step9: To access the original attribute, I have to know the secret name mangling formula which simplified is _<class name><attribute name> | Python Code:
def spam(a, _, b):
return a + b
spam(1, 2, 3)
Explanation: _ doesn't look like much, but as part of a name in Python it has a surprising number of different meanings.
Make names more readable
We all know that we should use good names. This often makes it necessary to use more than one word to describe the thing. cheese grater is arguably a good name for a thing but it consists of two words in english[^1]. Like most programming languages Python tokenizes source code along white space. So, for the sake of readability the system needs to be tricked into believing that several words are one while still providing some kind of visual separation for the human reader. Born were constructs like CheeseGrater or cheese_grater - a.k.a. CamelCase and snake_case.
[^1]: We wouldn't have a problem like that if we used german names, where this would be a käsehobel. Especially in german law, words like Vermögens­zuordnungs­zuständigkeits­übertragungs­verordnung are used unironically.
The PEP-8 style guide is here to tell us - among other things - how to name things in a consistent manner.
! I personally prefer lowerCasedCamelCase for local names bound to data and snake_case for names bound to functions. Until 2018 I wasn't even technically violating PEP-8 when doing that. Since then I am a self confessed PEP-8 outlaw in personal projects and wherever I can get away with it.
Simply _: it needs a name, but I won't use it
Using a single underscore as the complete name can be seen as a crutch. In certain situations it might be necessary to assign a name due to the nature of the language, but we'd rather not give it a name (because we won't even be using it anyway).
_ as a parameter name
Here we know that the function will always be called with three positional arguments but we don't need the second one:
End of explanation
def spam(a, b, **_):
return a + b
spam(c=10, a=2, b=3)
Explanation: There are often more elegant ways to deal with this depending on context, but sometimes this is still the easiest way to clearly communicate this.
To take one example and offer some unasked advice we could think of a primitive home baked plugin system that simply calls the client function with positional arguments. This could be improved by always using keyword arguments like this:
End of explanation
left, *_, right = (0, 1, 2, 3, 4)
left, right
Explanation: This way the client function can collect whatever arrives in an un(derscore)named catch-all dict also having more flexibility regarding future API changes.
What might be even better though is if the plugin system inspects the client function and calls it only with the requested parameters making this particular underscore crutch completely unnecessary[^2].
[^2]: Have a look at pluggy if you are interested in how this is accomplished.
_ as a local name
In the next example a tuple contains several items, but we are only interested in the first and the last one, so we collect whatever is in the middle and assign it to _:
End of explanation
for _ in range(3):
print("hi", end="")
Explanation: Also not uncommon: needing to repeat something a certain number of times without needing the iteration variable:
End of explanation
_private_name = "Please don't access me from outside"
Explanation: the exception: _ in an interactive shell
There is one case where _ can and should be used but is not manually assigned.
When you type python on the command line you enter the so-called REPL. The underscore gains a magic functionality here by always holding the result of the last evaluation.
!!! The IPython interactive shell takes this two steps further - it provides the last three evaluation results in _, __ and ___.
_ as prefix: this is private, please don't use it
End of explanation
id_ = 123456
print(f"{id(id_)=}")
Explanation: This has also one practical implication when used together with the star import: names that start with an underscore, are not imported in that case.
_ as postfix: avoid shadowing of names
Names in Python can be freely reassigned. This is handy, but also a source of confusion and bugs. For that reason, static code analyzers warn you when you reassign the name of an inbuilt (e.g. id) or a name already defined in an outer scope. If I am determined to use such a name, I simply add an underscore like id_ - which means: I know that this is shadowing an already defined name, but I still want to use it, so I mangle it just enough to be different. I am not sure how common this practice is, but I am pretty sure I didn't come up with it myself.
End of explanation
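A quick way to observe the star-import behaviour without touching a real package is to write a throwaway module to a temporary directory (a sketch; `demo_mod` is a made-up module name, and no `__all__` is defined):

```python
import pathlib
import sys
import tempfile
import textwrap

# write a throwaway module containing a public and a "private" top-level name
mod_dir = pathlib.Path(tempfile.mkdtemp())
(mod_dir / "demo_mod.py").write_text(textwrap.dedent("""
    public_name = "visible"
    _private_name = "skipped by star imports"
"""))
sys.path.insert(0, str(mod_dir))

ns = {}
exec("from demo_mod import *", ns)  # star import into a fresh namespace
print("public_name" in ns)          # True
print("_private_name" in ns)        # False
```

Defining `__all__` in the module would override this default and make even underscore-prefixed names importable via `*`.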
print = 2
try:
print("This won't work!")
except TypeError:
from builtins import print as real_print
real_print("Told you so ...")
print = real_print
print("Now all is fine again.")
Explanation: This is veering off the original topic a bit, but I just want to mention that whatever you do - the original object a builtin points to is never lost - just shadowed. When a module is initialized, the namespace of the builtins[^3] module is merged into the module. The objects can still be retrieved from builtins whenever necessary:
[^3]: While we are talking about builtins in the context of underscores: the __builtins__ attribute is already available in the module namespace but it is not recommended to use it directly as it is a CPython implementation detail
End of explanation
class A:
__spam = "SPAM"
def print_spam(self):
print(f"{self.__spam=}, {id(self)=}")
a = A()
a.print_spam()
Explanation: __ as attribute prefix: mangle the name
In pairs the underscore turns from being an informal crutch to something that is part of Python's execution model. If an object attribute starts with a double underscore (a.k.a. dunder), interesting things start to happen:
End of explanation
print(f"{id(a)=}")
a.__spam
Explanation: Up to this point there is nothing unusual about this. When I try to access the attribute from outside though, the behaviour is different from when it is accessed from inside the object, although a and self are the exact same object (as can be seen from the printed id):
End of explanation
a._A__spam
Explanation: To access the original attribute, I have to know the secret name mangling formula which simplified is _<class name><attribute name>:
End of explanation |
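The reason for the mangling is to keep same-named attributes from clashing across a class hierarchy; a small sketch:

```python
class Base:
    __token = "base"              # stored as _Base__token

    def base_token(self):
        return self.__token       # always resolves to _Base__token


class Child(Base):
    __token = "child"             # stored as _Child__token, no clash


c = Child()
print(c.base_token())    # base  -- Base still finds its own attribute
print(c._Child__token)   # child
```

Because each class mangles `__token` with its own name, `Child` can reuse the attribute name without accidentally overriding what `Base`'s methods see.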
8,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Video using the Base Overlay
The PYNQ-Z1 board contains a HDMI input port, and a HDMI output port connected to the FPGA fabric of the Zynq® chip. This means to use the HDMI ports, HDMI controllers must be included in a hardware library or overlay.
The base overlay contains a HDMI input controller, and a HDMI Output controller, both connected to their corresponding HDMI ports. A frame can be captured from the HDMI input, and streamed into DDR memory. The frames in DDR memory, can be accessed from Python.
A framebuffer can be shared between HDMI in and HDMI out to enable streaming.
Video IO
The overlay contains two video controllers, HDMI in and out. Both interfaces can be controlled independently, or used in combination to capture an image from the HDMI, process it, and display it on the HDMI out.
There is also a USB controller connected to the Zynq PS. A webcam can also be used to capture images, or video input, that can be processed and displayed on the HDMI out.
The HDMI video capture controller
To use the HDMI in controller, connect the on-board HDMI In port to a valid video source. E.g. your laptop can be used if it has HDMI out. Any HDMI video source can be used up to 1080p.
To use the HDMI in, ensure you have connected a valid HDMI source and execute the next cell. If a valid HDMI source is not detected, the HDMI in controller will timeout with an error.
Step1: The HDMI() argument ‘in’ indicates that the object is in capture mode.
When a valid video input source is connected, the controller should recognize it and start automatically. If a HDMI source is not connected, the code will time-out with an error.
Starting and stopping the controller
You can manually start/stop the controller
Step2: Readback from the controller
To check the state of the controller
Step3: The state is returned as an integer value, with one of three possible values
Step4: HDMI Frame list
The HDMI object holds a frame list, that can contain up to 3 frames, and is where the controller stores the captured frames. At the object instantiation, the current frame is the one at index 0. You can check at any time which frame index is active
Step5: The frame_index() method can also be used to set a new index, if you specify an argument with the method call. For instance
Step6: This will set the current frame index to the next in the sequence. Note that, if index is 2 (the last frame in the list), (index+1) will cause an exception.
If you want to set the next frame in the sequence, use
Step7: This will loop through the frame list and it will also return the new index as an integer.
Access the current frame
There are two ways to access pixel data
Step8: This will dump the frame as a list _frame[height, width][rgb]. Where rgb is a tuple (r,g,b). If you want to modify the green component of a pixel, you can do it as shown below. In the example, the top left quarter of the image will have the green component increased.
Step9: This frame() method is a simple way to capture pixel data, but processing it in Python will be slow. If you want to dump a frame at a specific index, just pass the index as an argument of the frame() method
Step10: If higher performance is required, the frame_raw() method can be used
Step11: This method will return a fast memory dump of the internal frame list, as a mono-dimensional list of dimension frame[1920*1080*3] (This array is of fixed size regardless of the input source resolution). 1920x1080 is the maximum supported frame dimension and 3 separate values for each pixel (Blue, Green, Red).
When the resolution is less than 1920x1080, the user must manually extract the correct pixel data.
For example, if the resolution of the video input source is 800x600, meaningful values will only be in the range frame_raw[1920*i*3] to frame_raw[(1920*i + 799)*3] for each i (rows) from 0 to 599. Any other position outside of this range will contain invalid data.
Step12: Frame Lists
To draw or display smooth animations/video, note the following
Step13: For the HDMI controller, you have to start/stop the device explicitly
Step14: To check the state of the controller
Step15: The state is returned as an integer value, with 2 possible values
Step16: This will print the current mode as a string. To change the mode, insert a valid index as an argument when calling mode()
Step17: Valid resolutions are
Step18: To start the controllers
Step19: The last step is always to stop the controllers and delete HDMI objects. | Python Code:
from pynq import Overlay
from pynq.drivers.video import HDMI
# Download bitstream
Overlay("base.bit").download()
# Initialize HDMI as an input device
hdmi_in = HDMI('in')
Explanation: Video using the Base Overlay
The PYNQ-Z1 board contains a HDMI input port, and a HDMI output port connected to the FPGA fabric of the Zynq® chip. This means to use the HDMI ports, HDMI controllers must be included in a hardware library or overlay.
The base overlay contains a HDMI input controller, and a HDMI Output controller, both connected to their corresponding HDMI ports. A frame can be captured from the HDMI input, and streamed into DDR memory. The frames in DDR memory, can be accessed from Python.
A framebuffer can be shared between HDMI in and HDMI out to enable streaming.
Video IO
The overlay contains two video controllers, HDMI in and out. Both interfaces can be controlled independently, or used in combination to capture an image from the HDMI, process it, and display it on the HDMI out.
There is also a USB controller connected to the Zynq PS. A webcam can also be used to capture images, or video input, that can be processed and displayed on the HDMI out.
The HDMI video capture controller
To use the HDMI in controller, connect the on-board HDMI In port to a valid video source. E.g. your laptop can be used if it has HDMI out. Any HDMI video source can be used up to 1080p.
To use the HDMI in, ensure you have connected a valid HDMI source and execute the next cell. If a valid HDMI source is not detected, the HDMI in controller will timeout with an error.
End of explanation
hdmi_in.start()
hdmi_in.stop()
Explanation: The HDMI() argument ‘in’ indicates that the object is in capture mode.
When a valid video input source is connected, the controller should recognize it and start automatically. If a HDMI source is not connected, the code will time-out with an error.
Starting and stopping the controller
You can manually start/stop the controller
End of explanation
state = hdmi_in.state()
print(state)
Explanation: Readback from the controller
To check the state of the controller:
End of explanation
hdmi_in.start()
width = hdmi_in.frame_width()
height = hdmi_in.frame_height()
print('HDMI is capturing a video source of resolution {}x{}'\
.format(width,height))
Explanation: The state is returned as an integer value, with one of three possible values:
0 if disconnected
1 if streaming
2 if paused
You can also check the width and height of the input source (assuming a source is connected):
End of explanation
hdmi_in.frame_index()
Explanation: HDMI Frame list
The HDMI object holds a frame list, that can contain up to 3 frames, and is where the controller stores the captured frames. At the object instantiation, the current frame is the one at index 0. You can check at any time which frame index is active:
End of explanation
index = hdmi_in.frame_index()
hdmi_in.frame_index(index + 1)
Explanation: The frame_index() method can also be used to set a new index, if you specify an argument with the method call. For instance:
End of explanation
hdmi_in.frame_index_next()
Explanation: This will set the current frame index to the next in the sequence. Note that, if index is 2 (the last frame in the list), (index+1) will cause an exception.
If you want to set the next frame in the sequence, use:
End of explanation
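On the board this wrap-around is handled by the controller itself; the underlying logic is just modular arithmetic over the 3-slot frame list (a pure-Python sketch, not part of the pynq API):

```python
def next_frame_index(current, list_len=3):
    # advance through the frame list, wrapping 2 -> 0 instead of raising
    return (current + 1) % list_len

print(next_frame_index(2))  # 0
```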
from IPython.display import Image
frame = hdmi_in.frame()
orig_img_path = '/home/xilinx/jupyter_notebooks/Getting_Started/images/hdmi_in_frame0.jpg'
frame.save_as_jpeg(orig_img_path)
Image(filename=orig_img_path)
Explanation: This will loop through the frame list and it will also return the new index as an integer.
Access the current frame
There are two ways to access pixel data: hdmi.frame() and hdmi.frame_raw().
End of explanation
for x in range(width // 2):
    for y in range(height // 2):
        red, green, blue = frame[x, y]
        green = min(green * 2, 255)
        frame[x, y] = (red, green, blue)
new_img_path = '/home/xilinx/jupyter_notebooks/Getting_Started/images/hdmi_in_frame1.jpg'
frame.save_as_jpeg(new_img_path)
Image(filename=new_img_path)
Explanation: This will dump the frame as a list _frame[height, width][rgb]. Where rgb is a tuple (r,g,b). If you want to modify the green component of a pixel, you can do it as shown below. In the example, the top left quarter of the image will have the green component increased.
End of explanation
# dumping frame at index 2
frame = hdmi_in.frame(2)
Explanation: This frame() method is a simple way to capture pixel data, but processing it in Python will be slow. If you want to dump a frame at a specific index, just pass the index as an argument of the frame() method:
End of explanation
# dumping frame at current index
frame_raw = hdmi_in.frame_raw()
# dumping frame at index 2
frame_raw = hdmi_in.frame_raw(2)
Explanation: If higher performance is required, the frame_raw() method can be used:
End of explanation
# printing the green component of pixel (0,0)
print(frame_raw[1])
# printing the blue component of pixel (1,399)
print(frame_raw[(1920*1 + 399)*3 + 0])
# printing the red component of the last pixel (599,799)
print(frame_raw[(1920*599 + 799)*3 + 2])
Explanation: This method will return a fast memory dump of the internal frame list, as a mono-dimensional list of dimension frame[1920*1080*3] (This array is of fixed size regardless of the input source resolution). 1920x1080 is the maximum supported frame dimension and 3 separate values for each pixel (Blue, Green, Red).
When the resolution is less than 1920x1080, the user must manually extract the correct pixel data.
For example, if the resolution of the video input source is 800x600, meaningful values will only be in the range frame_raw[1920*i*3] to frame_raw[(1920*i + 799)*3] for each i (rows) from 0 to 599. Any other position outside of this range will contain invalid data.
End of explanation
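The index arithmetic above can be collected into a small helper (a hypothetical convenience function, not part of the pynq API, assuming the fixed 1920-wide, 3-channel blue/green/red layout described above):

```python
def raw_index(row, col, channel, stride=1920, channels=3):
    """Flat index into the fixed 1920x1080x3 (B, G, R) frame buffer."""
    return (stride * row + col) * channels + channel

print(raw_index(0, 0, 1))  # 1 -> green component of pixel (0, 0)
```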
from pynq.drivers import HDMI
hdmi_out = HDMI('out')
Explanation: Frame Lists
To draw or display smooth animations/video, note the following:
Draw a new frame to a frame location not currently in use (an index different to the current hdmi.frame_index()) . Once finished writing the new frame, change the current frame index to the new frame index.
The HDMI out controller
Using the HDMI output is similar to using the HDMI input. Connect the HDMI OUT port to a monitor, or other display device.
To instantiate the HDMI controller:
End of explanation
hdmi_out.start()
hdmi_out.stop()
Explanation: For the HDMI controller, you have to start/stop the device explicitly:
End of explanation
state = hdmi_out.state()
print(state)
Explanation: To check the state of the controller:
End of explanation
print(hdmi_out.mode())
Explanation: The state is returned as an integer value, with 2 possible values:
0 if stopped
1 if running
After initialization, the display resolution is set at the lowest level: 640x480 at 60Hz.
To check the current resolution:
End of explanation
hdmi_out.mode(4)
Explanation: This will print the current mode as a string. To change the mode, insert a valid index as an argument when calling mode():
End of explanation
from pynq.drivers.video import HDMI
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(4)
Explanation: Valid resolutions are:
0 : 640x480, 60Hz
1 : 800x600, 60Hz
2 : 1280x720, 60Hz
3 : 1280x1024, 60Hz
4 : 1920x1080, 60Hz
Input/Output Frame Lists
To draw or display smooth animations/video, note the following:
Draw a new frame to a frame location not currently in use (an index different to the current hdmi.frame_index()) . Once finished writing the new frame, change the current frame index to the new frame index.
Streaming from HDMI Input to Output
To use the HDMI input and output to capture and display an image, make both the HDMI input and output share the same frame list. The frame list in both cases can be accessed. You can make the two object share the same frame list by a frame list as an argument to the second object’s constructor.
End of explanation
hdmi_out.start()
hdmi_in.start()
Explanation: To start the controllers:
End of explanation
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
Explanation: The last step is always to stop the controllers and delete HDMI objects.
End of explanation |
8,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License")
Step1: On Variational Bounds of Mutual Information
Ben Poole, Sherjil Ozair, Aäron van den Oord, Alexander A. Alemi, George Tucker<br/>
ICML 2019<br/>
paper / slides / video / poster
This notebook contains code for most of the variational bounds on mutual information presented in the paper, and experiments on the toy Gaussian problem.
Comments? Complaints? Questions? Bug Ben on Twitter @poolio
<sup>Thanks to Zhe Dong for valuable feedback on this colab.</sup>
Introduction
Here we give a brief overview of the challenges of MI estimation and a few of the variational bounds we presented in the paper. Please check out the paper for more details.
Our goal is to build efficient estimators of mutual information (MI) that leverage neural networks. Mutual information is given by
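In symbols (the standard definition, in nats):

```latex
I(X; Y) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x, y)}{p(x)\,p(y)}\right]
```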
Step2: Variational bound implementations
TUBA and NWJ lower bounds
The $I_\text{TUBA}$ variational lower bound on mutual information is given by
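the following, restated from the paper, where $f(x, y)$ is the critic and $a(y)$ is a variational baseline; fixing $a(y) = e$ recovers the NWJ bound:

```latex
I_{\mathrm{TUBA}} = 1 + \mathbb{E}_{p(x,y)}\big[f(x,y) - \log a(y)\big]
                      - \mathbb{E}_{p(x)p(y)}\!\left[\frac{e^{f(x,y)}}{a(y)}\right],
\qquad
I_{\mathrm{NWJ}} = \mathbb{E}_{p(x,y)}\big[f(x,y)\big]
                   - e^{-1}\,\mathbb{E}_{p(x)p(y)}\big[e^{f(x,y)}\big].
```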
Step4: InfoNCE contrastive lower bound
The InfoNCE variational lower bound is the easiest to implement, and typically provides low variance but high bias estimates of MI
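As a reference, the same bound can be written in plain NumPy (a minimal sketch; the notebook's TensorFlow implementation appears in the code cells below). Here `scores` is a `[K, K]` matrix of critic values $f(x_i, y_j)$ with the positive pairs on the diagonal; the estimate can never exceed $\log K$, which is the source of the bias mentioned above:

```python
import numpy as np

def infonce_lower_bound(scores):
    """I_NCE from a [K, K] critic matrix; positives on the diagonal."""
    k = scores.shape[0]
    # numerically stable log-sum-exp over each row
    row_max = scores.max(axis=1, keepdims=True)
    lse = row_max[:, 0] + np.log(np.exp(scores - row_max).sum(axis=1))
    return np.mean(np.diag(scores) - lse) + np.log(k)

print(infonce_lower_bound(np.zeros((4, 4))))  # 0.0 -- uninformative critic
```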
Step8: Interpolated lower bounds
The interpolated lower bounds are the trickiest to implement, but also provide the best of both worlds
Step11: JS-KL hybrid lower bound
We can also use different approaches for training the critic vs. estimating mutual information. The $I_\text{JS}$ bound just trains the critic with the standard lower bound on the Jensen-Shannon divergence used in GANs, and then evaluates the critic using the $I_\text{NWJ}$ lower bound on KL (mutual information).
We use a fun trick to simplify the code here
Step13: Structured Bounds
See the end of the notebook for code for the structured bounds. The implementation when using these bounds is often different, as we use a known conditional distribution $p(y|x)$ instead of a critic function $f(x, y)$ that scores all pairs of points. For simplicity, we will just demonstrate the minibatch lower bound $I_\text{TNCE}$, which is equivalent to $I_\text{NCE}$ but using the critic $f(x,y) = \log p(y|x)$.
Putting it together
Step16: Neural network architectures
Critics
Step17: Baselines
Step20: Experiments
Dataset
Step22: Training code
Step23: Dataset, optimization, and critic parameters. Try experimenting with these.
Step24: Build a dictionary of the mutual information estimators to train and their parameters.
Step25: Train each estimator and store mutual information estimates (this takes ~2 minutes on a modern GPU).
Step26: Results
Step30: Structured Bounds
The structured lower bounds use known conditional and learned marginal distributions to lower and upper bound mutual information. Check back soon for some more examples with these bounds! | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfkl = tf.keras.layers
tfpl = tfp.layers  # keep tfp bound to tensorflow_probability; tfp.math is used below
import pandas as pd # used for exponential moving average
from scipy.special import logit
import numpy as np
import matplotlib.pyplot as plt
Explanation: On Variational Bounds of Mutual Information
Ben Poole, Sherjil Ozair, Aäron van den Oord, Alexander A. Alemi, George Tucker<br/>
ICML 2019<br/>
paper / slides / video / poster
This notebook contains code for most of the variational bounds on mutual information presented in the paper, and experiments on the toy Gaussian problem.
Comments? Complaints? Questions? Bug Ben on Twitter @poolio
<sup>Thanks to Zhe Dong for valuable feedback on this colab.</sup>
Introduction
Here we give a brief overview of the challenges of MI estimation and a few of the variational bounds we presented in the paper. Please checkout the paper for more details.
Our goal is to build efficient estimators of mutual information (MI) that leverage neural networks. Mutual information is given by:
$$I(X; Y) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x,y)}{p(x)p(y)}\right].$$
Estimating MI is challenging as we often only have access to samples $x, y$ but do not know the densities.
To overcome these challenges, we review and presented several variational estimators of MI. These estimators replace the intractable MI objective with a tractable objective that lower or upper bounds MI, and depends on neural-network-powered critics and baselines. For example, the $I_\text{NWJ}$ bound is given by:
$$ I(X; Y) \ge 1 + \mathbb{E}_{p(x, y)} \left[{\color{blue} f(x,y)} \right] - \mathbb{E}_{p(x)p(y)} \left[ e^{{\color{blue} f(x, y)}}\right] $$
where $\color{blue} f$ is a neural network that takes $x$ and $y$ as input and outputs a scalar.
Below, we code up several variational lower and upper bounds on mutual information:
- Existing lower bounds: $I_\text{NWJ}, \,I_\text{InfoNCE}$
- New lower bounds: $I_\text{TUBA},\, I_{\alpha},\, I_\text{JS}$
- Structured lower bound ($I_\text{TNCE}$) and upper bounds ($I_\text{BA}$, $I_\text{MBU}$) with known $p(y|x)$
Here's a table of the bounds we'll code up here as well as the mathematical objectives:
We apply these bounds to a toy problem, where $(x, y)$ are jointly Gaussian, and we vary the correlation between $x$ and $y$ over time to increase the mutual information. For the bounds with neural network critics, we train the critics to tighten the variational bounds.
Practical suggestions
MI estimates with neural networks are highly sensitive to the representation of the input data, architecture, optimizer, and more. When applying these estimators to new datasets, you will need to experiment!
For representation learning, the $I_\text{NCE}$ bounds are a good place to start. Implementing them is simple.
For estimation, start with the $I_\alpha$ bounds with a small $\alpha$. The implementation is trickier, but they can greatly reduce variance vs. $I_\text{NWJ}$.
If you have known structure use it. For example, if you know a conditional distribution $p(y|x)$, use one of the structured bounds.
Implementation notes:
For most bounds, we will work with a matrix, denoted by f or scores that contains the output of the critic for every pair of elements in the minibatch: scores[i, j] = critic(x[j], y[i])
Most variational lower bounds on MI require samples from both the joint $p(x, y)$ and marginal distributions $p(x)p(y)$. For a minibatch of size $K$, we have $K$ samples from the joint $p(x,y)$ and can form samples from the marginal by using all pairs $x_i, y_j$ where $i \ne j$, giving $K \times (K -1)$ samples from the marginal. The diagonal elements of the scores matrix correspond to the $K$ critic scores for samples from the joint, and the off-diagonal elements correspond to the critic scores for samples from the marginal.
In this notebook we assume that we use all other elements in the minibatch as negative samples. For certain InfoMax tasks, for example when negatives come from the same image but at other spatial locations, one has to be more careful about summing over and selecting the right positive and negative examples to form the two expectations.
Separable critics are the more efficient and popular architecture. Instead of having to do $K^2$ forward passes, you only have to do $2 \times K$ forward passes followed by an inner product.
Setup
Notebook should work in TF 1.X, but uses Eager-mode.
End of explanation
def reduce_logmeanexp_nodiag(x, axis=None):
batch_size = x.shape[0].value
logsumexp = tf.reduce_logsumexp(x - tf.linalg.tensor_diag(np.inf * tf.ones(batch_size)), axis=axis)
if axis:
num_elem = batch_size - 1.
else:
num_elem = batch_size * (batch_size - 1.)
return logsumexp - tf.math.log(num_elem)
def tuba_lower_bound(scores, log_baseline=None):
if log_baseline is not None:
scores -= log_baseline[:, None]
batch_size = tf.cast(scores.shape[0], tf.float32)
# First term is an expectation over samples from the joint,
# which are the diagonal elmements of the scores matrix.
joint_term = tf.reduce_mean(tf.linalg.diag_part(scores))
# Second term is an expectation over samples from the marginal,
# which are the off-diagonal elements of the scores matrix.
marg_term = tf.exp(reduce_logmeanexp_nodiag(scores))
return 1. + joint_term - marg_term
def nwj_lower_bound(scores):
# equivalent to: tuba_lower_bound(scores, log_baseline=1.)
return tuba_lower_bound(scores - 1.)
Explanation: Variational bound implementations
TUBA and NWJ lower bounds
The $I_\text{TUBA}$ variational lower bound on mutual information is given by:
$$ I(X; Y) \ge 1 + \mathbb{E}_{p(x, y)} \left[\log \frac{e^{{\color{blue} f(x,y)}}}{{\color{green} a(y)}} \right] - \mathbb{E}_{p(x)p(y)} \left[ \frac{e^{{\color{blue} f(x, y)}}}{{\color{green} a(y)}}\right] \triangleq I_\text{TUBA}$$
These bounds can be low bias, but are typically high variance. To implement this bound, we need the critic scores for all elements in the minibatch: scores[i, j] = critic(x[i], y[j]), and the value of the baseline a[j] = baseline(y[j]) for every y. Instead of requiring the baseline to be non-negative, we assume that the argument passed in is $\log {\color{green} a(y)}$, and will exponentiate it to get the baseline ${\color{green} a(y)}$.
The implementation requires averaging the exponentiated critic over all independent samples, corresponding to the off-diagonal terms in scores, which is done by the reduce_logmeanexp_nodiag function.
The $I_\text{NWJ}$ bound just uses the constant $e$ for the baseline, which is equivalent to adding 1 to the critic scores.
End of explanation
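A quick numerical sanity check is useful here: with the optimal critic $f^*(x,y) = 1 + \log \frac{p(x,y)}{p(x)p(y)}$, the $I_\text{NWJ}$ bound is tight. The following pure-NumPy sketch (independent of the TensorFlow code, using a 1-D correlated Gaussian where the density ratio is known in closed form) verifies this:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.5, 500_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

def log_ratio(x, y):
    # log p(y|x) - log p(y) for the correlated standard Gaussian pair
    s2 = 1 - rho**2
    return -0.5 * ((y - rho * x)**2 / s2 + np.log(s2)) + 0.5 * y**2

# Optimal critic f*(x, y) = 1 + log_ratio(x, y), so f* - 1 = log_ratio.
joint_term = np.mean(log_ratio(x, y))                          # E_p[f - 1]
marg_term = np.mean(np.exp(log_ratio(x, rng.permutation(y))))  # E_{p(x)p(y)}[e^{f - 1}]
i_nwj = 1.0 + joint_term - marg_term

true_mi = -0.5 * np.log(1 - rho**2)
print(i_nwj, true_mi)  # both close to 0.144 nats
```

Shuffling y is the usual way to fake samples from the product of marginals within a batch, mirroring what the off-diagonal scores do in the TF code.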
def infonce_lower_bound(scores):
"""InfoNCE lower bound from van den Oord et al. (2018)."""
nll = tf.reduce_mean(tf.linalg.diag_part(scores) - tf.reduce_logsumexp(scores, axis=1))
# Alternative implementation:
# nll = -tf.nn.sparse_softmax_cross_entropy_with_logits(logits=scores, labels=tf.range(batch_size))
mi = tf.math.log(tf.cast(scores.shape[0].value, tf.float32)) + nll
return mi
Explanation: InfoNCE contrastive lower bound
The InfoNCE variational lower bound is the easiest to implement, and typically provides low variance but high bias estimates of MI:
$$I(X; Y) \ge \mathbb{E}_{p^K(x,y)}\left[\frac{1}{K} \sum_{i=1}^K \log \frac{e^{{\color{blue}f(x_i, y_i)}}}{\frac{1}{K}\sum_{j=1}^K e^{{\color{blue} f(x_j, y_i)}}}\right].$$
The expectation $p^K(x,y)$ draws $K$ samples from the joint $(x, y)$. So for a minibatch of $K$ examples, we are effectively forming a single sample Monte-Carlo estimate of the quantity inside the expectation.
Here we use reduce_logsumexp so the implementation matches the math, but you can also implement this with sparse_softmax_cross_entropy_with_logits using a different label for each of the K elements in the minibatch.
End of explanation
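To see the high-bias behavior concretely, here is a self-contained NumPy sketch (not the TF code above) that evaluates $I_\text{NCE}$ on one minibatch of correlated Gaussians, using the true conditional log-density $\log p(y_i \mid x_j)$ as the critic. Even with this strong critic, the estimate cannot exceed $\log K$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, dim, rho = 64, 5, 0.9
x = rng.normal(size=(K, dim))
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=(K, dim))

# scores[i, j] = log p(y_i | x_j): the diagonal holds joint samples.
s2 = 1 - rho**2
diff = y[:, None, :] - rho * x[None, :, :]
scores = -0.5 * np.sum(diff**2 / s2 + np.log(2 * np.pi * s2), axis=-1)

# I_NCE = log K + mean(diag - logsumexp over each row)
m = scores.max(axis=1, keepdims=True)
logsumexp = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
i_nce = np.log(K) + np.mean(np.diag(scores) - logsumexp)

true_mi = -0.5 * dim * np.log(1 - rho**2)   # about 4.15 nats here
print(i_nce, np.log(K))  # I_NCE can never exceed log K = log 64
```

Since the diagonal term also appears inside the row logsumexp, `diag - logsumexp` is always negative, which is exactly the $\log K$ ceiling.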
def log_interpolate(log_a, log_b, alpha_logit):
"""Numerically stable implementation of log(alpha * a + (1-alpha) * b)."""
log_alpha = -tf.nn.softplus(-alpha_logit)
log_1_minus_alpha = -tf.nn.softplus(alpha_logit)
y = tf.reduce_logsumexp(tf.stack((log_alpha + log_a, log_1_minus_alpha + log_b)), axis=0)
return y
def compute_log_loomean(scores):
"""Compute the log leave-one-out mean of the exponentiated scores.

For each column j we compute the log-sum-exp over the row holding out column j.
This is a numerically stable version of:
log_loosum = scores + tfp.math.softplus_inverse(tf.reduce_logsumexp(scores, axis=1, keepdims=True) - scores)
Implementation based on tfp.vi.csiszar_divergence.csiszar_vimco_helper.
"""
max_scores = tf.reduce_max(scores, axis=1, keepdims=True)
lse_minus_max = tf.reduce_logsumexp(scores - max_scores, axis=1, keepdims=True)
d = lse_minus_max + (max_scores - scores)
d_ok = tf.not_equal(d, 0.)
safe_d = tf.where(d_ok, d, tf.ones_like(d))
loo_lse = scores + tfp.math.softplus_inverse(safe_d)
# Normalize to get the leave one out log mean exp
loo_lme = loo_lse - tf.math.log(scores.shape[1].value - 1.)
return loo_lme
def interpolated_lower_bound(scores, baseline, alpha_logit):
"""Interpolated lower bound on mutual information.

Interpolates between the InfoNCE baseline (alpha_logit -> -infty)
and the single-sample TUBA baseline (alpha_logit -> +infty).

Args:
  scores: [batch_size, batch_size] critic scores
  baseline: [batch_size] log baseline scores
  alpha_logit: logit for the mixture probability

Returns:
  scalar, lower bound on MI
"""
batch_size = scores.shape[0].value
# Compute InfoNCE baseline
nce_baseline = compute_log_loomean(scores)
# Interpolated baseline mixes the InfoNCE baseline with a learned baseline
interpolated_baseline = log_interpolate(
nce_baseline, tf.tile(baseline[:, None], (1, batch_size)), alpha_logit)
# Marginal term.
critic_marg = scores - tf.linalg.diag_part(interpolated_baseline)[:, None]
marg_term = tf.exp(reduce_logmeanexp_nodiag(critic_marg))
# Joint term.
critic_joint = tf.linalg.diag_part(scores)[:, None] - interpolated_baseline
joint_term = (tf.reduce_sum(critic_joint) -
tf.reduce_sum(tf.linalg.diag_part(critic_joint))) / (batch_size * (batch_size - 1.))
return 1 + joint_term - marg_term
Explanation: Interpolated lower bounds
The interpolated lower bounds are the trickiest to implement, but also provide the best of both worlds: lower bias or variance relative to the NWJ and InfoNCE bounds. We start with the multisample TUBA bound from the paper:
$$I(X_1; Y) \geq 1+\mathbb{E}_{p(x_{1:K})p(y|x_1)}\left[\log \frac{e^{f(x_1,y)}}{a(y; x_{1:K})}\right]
- \mathbb{E}_{p(x_{1:K})p(y)}\left[ \frac{e^{f(x_1,y)}}{a(y; x_{1:K})}\right]$$
Coding up this bound is tricky as we need $K+1$ samples from the joint $p(x,y)$: $K$ of these samples are used for the first term, and the additional sample is used for the independent sample from $p(y)$ for the second term. In the minibatch setting, we can implement this by holding out the $j$th sample from the joint, using $y_j$ for the independent sample from $p(y)$ in the second term, and $x_{\ne j}$ (all elements in the minibatch except element $j$) for the samples from $p(x_{1:K})$. A single-sample Monte-Carlo approximation to the NWJ bound is then given by:
$$1 + \log \frac{e^{f(x_1, y_1)}}{a(y_1; x_{\ne j}, \alpha)} - \frac{e^{f(x_1, y_j)}}{a(y_j; x_{\ne j}, \alpha)}$$
Instead of just using $(x_1, y_1)$ as the only positive example, we can use all samples from the joint in the minibatch: $(x_i, y_i)$. Summing over the $K$ elements which are not the element we are holding out for the second expectation ($y_j$), yields:
$$1 + \frac{1}{K}\sum_{i\ne j} \left(\log \frac{e^{f(x_i, y_i)}}{a(y_i; x_{\ne j}, \alpha)} - \frac{e^{f(x_i, y_j)}}{a(y_j; x_{\ne j}, \alpha)}\right)$$
Furthermore, in the minibatch setting we can choose any element in the minibatch to hold out for the second expectation. Summing over all possible (leave one out) combinations yields:
$$1 + \frac{1}{K+1} \sum_{j=1}^{K+1} \frac{1}{K}\sum_{i\ne j} \left(\log \frac{e^{f(x_i, y_i)}}{a(y_i; x_{\ne j}, \alpha)} - \frac{e^{f(x_i, y_j)}}{a(y_j; x_{\ne j}, \alpha)}\right)$$
where the interpolated baseline is given by $a(y; x_{1:K}) = \alpha \frac{1}{K} \sum_{l=1}^K e^{f(x_l, y)} + (1-\alpha) q(y)$. We work in log-space for numerical stability, using compute_log_loomean to compute the leave-one-out InfoNCE baselines, and log_interpolate to compute the log baseline for the interpolated bound.
End of explanation
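The log-domain interpolation can be cross-checked against the naive formula in plain NumPy (a sketch: `np.logaddexp` plays the role of the two-term `reduce_logsumexp`, and `-np.logaddexp(0, -t)` is log sigmoid(t)):

```python
import numpy as np

def log_interpolate_np(log_a, log_b, alpha_logit):
    """Numerically stable log(alpha * a + (1 - alpha) * b), NumPy version."""
    log_alpha = -np.logaddexp(0.0, -alpha_logit)      # log sigmoid(alpha_logit)
    log_1m_alpha = -np.logaddexp(0.0, alpha_logit)    # log sigmoid(-alpha_logit)
    return np.logaddexp(log_alpha + log_a, log_1m_alpha + log_b)

alpha_logit = 1.5
alpha = 1.0 / (1.0 + np.exp(-alpha_logit))
log_a, log_b = -2.0, -40.0
naive = np.log(alpha * np.exp(log_a) + (1 - alpha) * np.exp(log_b))
print(log_interpolate_np(log_a, log_b, alpha_logit), naive)  # the two agree
```

For much more extreme log values the naive version underflows while the log-domain version stays exact, which is why the TF code works in log-space throughout.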
def js_fgan_lower_bound(f):
"""Lower bound on Jensen-Shannon divergence from Nowozin et al. (2016)."""
f_diag = tf.linalg.tensor_diag_part(f)
first_term = tf.reduce_mean(-tf.nn.softplus(-f_diag))
n = tf.cast(f.shape[0], tf.float32)
second_term = (tf.reduce_sum(tf.nn.softplus(f)) - tf.reduce_sum(tf.nn.softplus(f_diag))) / (n * (n - 1.))
return first_term - second_term
def js_lower_bound(f):
"""NWJ lower bound on MI using a critic trained with Jensen-Shannon.

The returned Tensor gives MI estimates when evaluated, but its gradients are
the gradients of the lower bound of the Jensen-Shannon divergence.
"""
js = js_fgan_lower_bound(f)
mi = nwj_lower_bound(f)
return js + tf.stop_gradient(mi - js)
Explanation: JS-KL hybrid lower bound
We can also use different approaches for training the critic vs. estimating mutual information. The $I_\text{JS}$ bound just trains the critic with the standard lower bound on the Jensen-Shannon divergence used in GANs, and then evaluates the critic using the $I_\text{NWJ}$ lower bound on KL (mutual information).
We use a fun trick to simplify the code here:
z = x + tf.stop_gradient(y - x).<br/>
z gives the value of y on the forward pass, but the gradient of x on the backward pass.
End of explanation
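The trick can be illustrated without TensorFlow via a toy forward-mode derivative (Dual and stop_gradient below are illustrative stand-ins for autodiff machinery, not TF APIs):

```python
class Dual:
    """A (value, derivative) pair for toy forward-mode differentiation."""
    def __init__(self, val, grad):
        self.val, self.grad = val, grad
    def __add__(self, other):
        return Dual(self.val + other.val, self.grad + other.grad)
    def __sub__(self, other):
        return Dual(self.val - other.val, self.grad - other.grad)

def stop_gradient(d):
    # value passes through, derivative is zeroed
    return Dual(d.val, 0.0)

# x and y depend on some parameter theta; seed their derivatives.
x = Dual(3.0, 2.0)  # dx/dtheta = 2
y = Dual(7.0, 5.0)  # dy/dtheta = 5
z = x + stop_gradient(y - x)
print(z.val, z.grad)  # 7.0 2.0 -> the value of y, the gradient of x
```

This is exactly what `js + tf.stop_gradient(mi - js)` does: the evaluated number is the NWJ MI estimate, but backpropagation sees only the JS objective.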
def estimate_mutual_information(estimator, x, y, critic_fn,
baseline_fn=None, alpha_logit=None):
"""Estimate variational lower bounds on mutual information.

Args:
  estimator: string specifying estimator, one of:
    'nwj', 'infonce', 'tuba', 'js', 'interpolated'
  x: [batch_size, dim_x] Tensor
  y: [batch_size, dim_y] Tensor
  critic_fn: callable that takes x and y as input and outputs critic scores;
    output shape is a [batch_size, batch_size] matrix
  baseline_fn (optional): callable that takes y as input and
    outputs a [batch_size] or [batch_size, 1] vector
  alpha_logit (optional): logit(alpha) for the interpolated bound

Returns:
  scalar estimate of mutual information
"""
scores = critic_fn(x, y)
if baseline_fn is not None:
# Some baselines' output is (batch_size, 1) which we remove here.
log_baseline = tf.squeeze(baseline_fn(y))
if estimator == 'infonce':
mi = infonce_lower_bound(scores)
elif estimator == 'nwj':
mi = nwj_lower_bound(scores)
elif estimator == 'tuba':
mi = tuba_lower_bound(scores, log_baseline)
elif estimator == 'js':
mi = js_lower_bound(scores)
elif estimator == 'interpolated':
assert alpha_logit is not None, "Must specify alpha_logit for interpolated bound."
mi = interpolated_lower_bound(scores, log_baseline, alpha_logit)
return mi
Explanation: Structured Bounds
See the end of the notebook for code for the structured bounds. The implementation when using these bounds is often different, as we use a known conditional distribution $p(y|x)$ instead of a critic function $f(x, y)$ that scores all pairs of points. For simplicity, we will just demonstrate the minibatch lower bound $I_\text{TNCE}$, which is equivalent to $I_\text{NCE}$ but using the critic $f(x,y) = \log p(y|x)$.
Putting it together
End of explanation
def mlp(hidden_dim, output_dim, layers, activation):
return tf.keras.Sequential(
[tfkl.Dense(hidden_dim, activation) for _ in range(layers)] +
[tfkl.Dense(output_dim)])
class SeparableCritic(tf.keras.Model):
def __init__(self, hidden_dim, embed_dim, layers, activation, **extra_kwargs):
super(SeparableCritic, self).__init__()
self._g = mlp(hidden_dim, embed_dim, layers, activation)
self._h = mlp(hidden_dim, embed_dim, layers, activation)
def call(self, x, y):
scores = tf.matmul(self._h(y), self._g(x), transpose_b=True)
return scores
class ConcatCritic(tf.keras.Model):
def __init__(self, hidden_dim, layers, activation, **extra_kwargs):
super(ConcatCritic, self).__init__()
# output is scalar score
self._f = mlp(hidden_dim, 1, layers, activation)
def call(self, x, y):
batch_size = tf.shape(x)[0]
# Tile all possible combinations of x and y
x_tiled = tf.tile(x[None, :], (batch_size, 1, 1))
y_tiled = tf.tile(y[:, None], (1, batch_size, 1))
# xy is [batch_size * batch_size, x_dim + y_dim]
xy_pairs = tf.reshape(tf.concat((x_tiled, y_tiled), axis=2), [batch_size * batch_size, -1])
# Compute scores for each x_i, y_j pair.
scores = self._f(xy_pairs)
return tf.transpose(tf.reshape(scores, [batch_size, batch_size]))
def gaussian_log_prob_pairs(dists, x):
"""Compute log probability for all pairs of distributions and samples."""
mu, sigma = dists.mean(), dists.stddev()
sigma2 = sigma**2
normalizer_term = tf.reduce_sum(-0.5 * (np.log(2. * np.pi) + 2.0 * tf.math.log(sigma)), axis=1)[None, :]
x2_term = -tf.matmul(x**2, 1.0 / (2 * sigma2), transpose_b=True)
mu2_term = - tf.reduce_sum(mu**2 / (2 * sigma2), axis=1)[None, :]
cross_term = tf.matmul(x, mu / sigma2, transpose_b=True)
log_prob = normalizer_term + x2_term + mu2_term + cross_term
return log_prob
def build_log_prob_conditional(rho, **extra_kwargs):
"""True conditional distribution."""
def log_prob_conditional(x, y):
mu = x * rho
q_y = tfd.MultivariateNormalDiag(mu, tf.ones_like(mu) * tf.cast(tf.sqrt(1.0 - rho**2), tf.float32))
return gaussian_log_prob_pairs(q_y, y)
return log_prob_conditional
CRITICS = {
'separable': SeparableCritic,
'concat': ConcatCritic,
'conditional': build_log_prob_conditional,
}
Explanation: Neural network architectures
Critics: we consider two choices of neural network architectures for $\color{blue} f(x,y)$:
1. Separable: $f(x,y) = g(x)^Th(y)$ where $g$ and $h$ are two different MLPs
2. Concat: $f(x,y) = g([x, y])$ where we concatenate $x$ and $y$ and feed them into a single MLP
Using a separable critic is typically more efficient, as you only have to do batch_size forward passes through each network vs. the batch_size * batch_size passes needed with the concat critic.
Baselines: we consider three possibilities for the baseline $\color{green}a(y)$:
1. Constant: $a(y)$ is a fixed constant (as in $I_\text{NWJ}$)
2. Unnormalized: $a(y)$ is a neural network that produces a scalar output (representing $\log a(y)$)
3. Gaussian: $a(y)$ is a Gaussian distribution. Here we fix the mean and variance to be 1, but you could use any tractable density with learnable parameters as well.
Critics
End of explanation
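The cost difference between the two critics is easy to see in NumPy (a sketch with linear maps standing in for the MLPs g and h): each side is embedded once, and a single matrix product then produces all $K^2$ pair scores.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_in, d_emb = 8, 4, 16
Wg = rng.normal(size=(d_in, d_emb))   # stand-in for the MLP g
Wh = rng.normal(size=(d_in, d_emb))   # stand-in for the MLP h
x = rng.normal(size=(K, d_in))
y = rng.normal(size=(K, d_in))

g, h = x @ Wg, y @ Wh   # 2K "forward passes" in total
scores = h @ g.T        # scores[i, j] = <h(y_i), g(x_j)>, all K^2 pairs at once
print(scores.shape)     # (8, 8)
```

A concat critic would instead have to run its MLP on all K^2 tiled (x, y) pairs, which is what the `tf.tile` / `tf.reshape` dance in ConcatCritic is doing.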
def log_prob_gaussian(x):
return tf.reduce_sum(tfd.Normal(0., 1.).log_prob(x), -1)
BASELINES= {
'constant': lambda: None,
'unnormalized': lambda: mlp(hidden_dim=512, output_dim=1, layers=2, activation='relu'),
'gaussian': lambda: log_prob_gaussian,
}
Explanation: Baselines
End of explanation
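For reference, the 'gaussian' baseline is just the log-density of a standard normal, summed over the last axis; a NumPy sketch of the same computation:

```python
import numpy as np

def log_prob_gaussian_np(y):
    # sum of N(0, 1) log-densities over the last axis
    return np.sum(-0.5 * (np.log(2 * np.pi) + y**2), axis=-1)

y = np.zeros((3, 2))
print(log_prob_gaussian_np(y))  # each entry = -log(2*pi), about -1.8379
```

Any tractable density with learnable parameters could be substituted here; the fixed standard normal is simply the easiest choice on this toy problem.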
def sample_correlated_gaussian(rho=0.5, dim=20, batch_size=128):
"""Generate samples from a correlated Gaussian distribution."""
x, eps = tf.split(tf.random.normal((batch_size, 2 * dim)), 2, axis=1)
y = rho * x + tf.sqrt(tf.cast(1. - rho**2, tf.float32)) * eps
return x, y
def rho_to_mi(dim, rho):
return -0.5 * np.log(1-rho**2) * dim
def mi_to_rho(dim, mi):
return np.sqrt(1-np.exp(-2.0 / dim * mi))
def mi_schedule(n_iter):
"""Generate schedule for increasing correlation over time."""
mis = np.round(np.linspace(0.5, 5.5 - 1e-9, n_iter)) * 2.0
return mis.astype(np.float32)
plt.figure(figsize=(6,3))
for i, rho in enumerate([0.5, 0.99]):
plt.subplot(1, 2, i + 1)
x, y = sample_correlated_gaussian(batch_size=500, dim=1, rho=rho)
plt.scatter(x[:, 0], y[:, 0])
plt.title(r'$\rho=%.2f$, $I(X; Y)=%.1f$' % (rho, rho_to_mi(1, rho)))
plt.xlim(-3, 3); plt.ylim(-3, 3);
Explanation: Experiments
Dataset: correlated Gaussian
We experiment with a super simple correlated Gaussian dataset:
\begin{align}
x &\sim \mathcal{N}(0, I_d)\
y &\sim \mathcal{N}(\rho x, (1 - \rho^2) I_d)
\end{align}
where $d$ is the dimensionality, and $\rho$ is the correlation. Each pair of dimensions $(x_i, y_i)$ has correlation $\rho$, and correlation 0 with all other dimensions. We can control the information by varying the correlation $\rho$:
$$I(X; Y) = -\frac{d}{2} \log \left(1 - \rho^2\right)$$
End of explanation
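The two conversions are exact inverses, which is worth verifying once (a NumPy sketch restating the functions above): substituting $\rho = \sqrt{1 - e^{-2I/d}}$ into $I = -\frac{d}{2}\log(1-\rho^2)$ returns $I$.

```python
import numpy as np

def rho_to_mi(dim, rho):
    return -0.5 * np.log(1 - rho**2) * dim

def mi_to_rho(dim, mi):
    return np.sqrt(1 - np.exp(-2.0 / dim * mi))

dim = 20
for mi in [1.0, 5.0, 10.0]:
    rho = mi_to_rho(dim, mi)
    print(mi, rho, rho_to_mi(dim, rho))  # round-trips back to mi
```

This is why the training loop can convert its target-MI schedule into correlations and trust that the ground truth curve is exact.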
def train_estimator(critic_params, data_params, mi_params):
"""Main training loop that estimates time-varying MI."""
# Ground truth rho is only used by conditional critic
critic = CRITICS[mi_params.get('critic', 'concat')](rho=None, **critic_params)
baseline = BASELINES[mi_params.get('baseline', 'constant')]()
opt = tf.keras.optimizers.Adam(opt_params['learning_rate'])
@tf.function
def train_step(rho, data_params, mi_params):
# Annoying special case:
# For the true conditional, the critic depends on the true correlation rho,
# so we rebuild the critic at each iteration.
if mi_params['critic'] == 'conditional':
critic_ = CRITICS['conditional'](rho=rho)
else:
critic_ = critic
with tf.GradientTape() as tape:
x, y = sample_correlated_gaussian(dim=data_params['dim'], rho=rho, batch_size=data_params['batch_size'])
mi = estimate_mutual_information(mi_params['estimator'], x, y, critic_, baseline, mi_params.get('alpha_logit', None))
loss = -mi
trainable_vars = []
if isinstance(critic, tf.keras.Model):
trainable_vars += critic.trainable_variables
if isinstance(baseline, tf.keras.Model):
trainable_vars += baseline.trainable_variables
grads = tape.gradient(loss, trainable_vars)
opt.apply_gradients(zip(grads, trainable_vars))
return mi
# Schedule of correlation over iterations
mis = mi_schedule(opt_params['iterations'])
rhos = mi_to_rho(data_params['dim'], mis)
estimates = []
for i in range(opt_params['iterations']):
estimates.append(train_step(rhos[i], data_params, mi_params).numpy())
return np.array(estimates)
Explanation: Training code
End of explanation
data_params = {
'dim': 20,
'batch_size': 64,
}
critic_params = {
'layers': 2,
'embed_dim': 32,
'hidden_dim': 256,
'activation': 'relu',
}
opt_params = {
'iterations': 20000,
'learning_rate': 5e-4,
}
Explanation: Dataset, optimization, and critic parameters. Try experimenting with these.
End of explanation
critic_type = 'concat' # or 'separable'
estimators = {
'NWJ': dict(estimator='nwj', critic=critic_type, baseline='constant'),
'TUBA': dict(estimator='tuba', critic=critic_type, baseline='unnormalized'),
'InfoNCE': dict(estimator='infonce', critic=critic_type, baseline='constant'),
'JS': dict(estimator='js', critic=critic_type, baseline='constant'),
'TNCE': dict(estimator='infonce', critic='conditional', baseline='constant'),
# Optimal critic for TUBA
#'TUBA_opt': dict(estimator='tuba', critic='conditional', baseline='gaussian')
}
# Add interpolated bounds
def sigmoid(x):
return 1/(1. + np.exp(-x))
for alpha_logit in [-5., 0., 5.]:
name = 'alpha=%.2f' % sigmoid(alpha_logit)
estimators[name] = dict(estimator='interpolated', critic=critic_type,
alpha_logit=alpha_logit, baseline='unnormalized')
Explanation: Build a dictionary of the mutual information estimators to train and their parameters.
End of explanation
estimates = {}
for estimator, mi_params in estimators.items():
print("Training %s..." % estimator)
estimates[estimator] = train_estimator(critic_params, data_params, mi_params)
Explanation: Train each estimator and store mutual information estimates (this takes ~2 minutes on a modern GPU).
End of explanation
# Smooting span for Exponential Moving Average
EMA_SPAN = 200
# Ground truth MI
mi_true = mi_schedule(opt_params['iterations'])
# Names specifies the key and ordering for plotting estimators
names = np.sort(list(estimators.keys()))
lnames = list(map(lambda s: s.replace('alpha', '$\\alpha$'), names))
nrows = min(2, len(estimates))
ncols = int(np.ceil(len(estimates) / float(nrows)))
fig, axs = plt.subplots(nrows, ncols, figsize=(2.7 * ncols, 3 * nrows))
if len(estimates) == 1:
axs = [axs]
axs = np.ravel(axs)
for i, name in enumerate(names):
plt.sca(axs[i])
plt.title(lnames[i])
# Plot estimated MI and smoothed MI
mis = estimates[name]
mis_smooth = pd.Series(mis).ewm(span=EMA_SPAN).mean()
p1 = plt.plot(mis, alpha=0.3)[0]
plt.plot(mis_smooth, c=p1.get_color())
# Plot true MI and line for log(batch size)
plt.plot(mi_true, color='k', label='True MI')
estimator = estimators[name]['estimator']
if 'interpolated' in estimator or 'nce' in estimator:
# Add theoretical upper bound lines
if 'interpolated' in estimator:
log_alpha = -np.log(1 + np.exp(-estimators[name]['alpha_logit']))
else:
log_alpha = 1.
plt.axhline(1 + np.log(data_params['batch_size']) - log_alpha, c='k', linestyle='--', label=r'1 + log(K/$\alpha$)' )
plt.ylim(-1, mi_true.max()+1)
plt.xlim(0, opt_params['iterations'])
if i == len(estimates) - ncols:
plt.xlabel('steps')
plt.ylabel('Mutual information (nats)')
plt.legend(loc='best', fontsize=8, framealpha=0.0)
plt.gcf().tight_layout();
Explanation: Results
End of explanation
def log_prob_pairs(dists, samples):
if isinstance(dists, (tfd.Normal, tfd.MultivariateNormalDiag)):
return gaussian_log_prob_pairs(dists, samples)
batch_size = tf.shape(samples)[0]
multiples = [1] * (1 + len(samples.get_shape().as_list()))
multiples[1] = tf.shape(samples)[0]
samples_tiled = tf.tile(samples[:, None], multiples)
# Compute log probs, size [batch_size, batch_size]
log_probs = dists.log_prob(samples_tiled)
return log_probs
def variational_upper_bound(conditional_dist, marginal_dist, samples):
"""Variational upper bound on mutual information.

Args:
  conditional_dist: true conditional density, p(y|x)
  marginal_dist: approximate marginal density, m(y)
  samples: samples from the conditional distribution p(y|x)

Returns:
  scalar, upper bound on mutual information
"""
return tf.reduce_mean(conditional_dist.log_prob(samples) -
marginal_dist.log_prob(samples))
def minibatch_upper_bound(conditional_dist, samples):
"""Minibatch upper bound on mutual information.

Args:
  conditional_dist: approximate conditional density, e(y|x)
  samples: samples from conditional_dist

Returns:
  scalar, upper bound on mutual information
"""
log_probs = log_prob_pairs(conditional_dist, samples)
# Batch marginal holds out self (along diagonal), and averages over
# all other elements in the batch.
mask = tf.eye(tf.shape(samples)[0])
log_prob_marginal = tf.reduce_mean(reduce_logmeanexp_nodiag(log_probs, axis=1))
log_prob_cond = tf.reduce_mean(tf.linalg.tensor_diag_part(log_probs))
return log_prob_cond - log_prob_marginal
def minibatch_lower_bound(conditional_dist, samples):
"""Minibatch lower bound on mutual information.

Args:
  conditional_dist: approximate conditional density, e(y|x)
  samples: samples from conditional_dist

Returns:
  scalar, lower bound on mutual information
"""
batch_marginal_dist = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=tf.zeros(conditional_dist.batch_shape)),
components_distribution=conditional_dist)
return variational_upper_bound(conditional_dist, batch_marginal_dist, samples)
Explanation: Structured Bounds
The structured lower bounds use known conditional and learned marginal distributions to lower and upper bound mutual information. Check back soon for some more examples with these bounds!
End of explanation |
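To make these bounds concrete, here is a pure-NumPy sketch (an illustration on the toy Gaussian problem, not the TF implementation above): with the known conditional $p(y|x)$, mixing the matching conditional into the batch marginal gives the $I_\text{TNCE}$ lower bound, while the leave-one-out batch marginal gives the minibatch upper bound.

```python
import numpy as np

rng = np.random.default_rng(0)
K, dim, rho = 256, 2, 0.7
x = rng.normal(size=(K, dim))
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=(K, dim))

# log_probs[i, j] = log p(y_i | x_j) for the known conditional
s2 = 1 - rho**2
diff = y[:, None, :] - rho * x[None, :, :]
log_probs = -0.5 * np.sum(diff**2 / s2 + np.log(2 * np.pi * s2), axis=-1)

def logmeanexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).mean(axis=axis, keepdims=True))).squeeze(axis)

diag = np.diag(log_probs)
# Lower bound (I_TNCE): batch marginal includes the matching conditional.
lower = np.mean(diag - logmeanexp(log_probs, axis=1))
# Upper bound: leave-one-out batch marginal excludes the matching conditional.
loo = np.where(np.eye(K, dtype=bool), -np.inf, log_probs)
m = loo.max(axis=1, keepdims=True)
loo_lme = m[:, 0] + np.log(np.exp(loo - m).sum(axis=1) / (K - 1))
upper = np.mean(diag - loo_lme)

true_mi = -0.5 * dim * np.log(1 - rho**2)
print(lower, true_mi, upper)  # lower and upper bracket the true MI in expectation
```

The only difference between the two estimates is whether the matching conditional is allowed into the batch marginal, which flips the direction of the bound.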
8,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples were built using the development version from https
Step1: 2_ Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner
Step2: Now that we've got your attention, let's start the tutorial !
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted at the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https
Step3: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
Step4: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function
Step5: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
Step6: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set fields necessary to send the packet. These fields can of course be accessed and displayed.
Step7: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
Step8: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
Step9: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and returns the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets send() is your friend.
As an example, we can use the DNS protocol to get www.example.com IPv4 address.
Step10: Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.
Step11: sr() sent a list of packets, and returns two variables, here r and u, where
Step12: With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.
Step13: Sniffing the network is as straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.
Step14: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
Step15: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns a UDP socket to a Scapy StreamSocket, which is then used to query the IPv4 address of www.example.com.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
Step16: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
Step17: Then we can use the results to plot the IP id values.
Step18: The raw() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
Step19: Since some people cannot read this representation, Scapy can
Step20: "hexdump" the packet's bytes
Step21: dump the packet, layer by layer, with the values for each field
Step22: render a pretty and handy dissection of the packet
Step23: Scapy has a traceroute() function, which basically runs sr(IP(ttl=(1,30))/TCP()) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
Step24: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
Step25: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner"
Step26: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two fields: the length, and the real DNS message.
Step27: This new packet definition can be direcly used to build a DNS message over TCP.
Step28: Modifying the previous StreamSocket example to use TCP allows to use the new DNSCTP layer easily.
Step29: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module that can be used to build specific network tools, such as ping6.py
Step30: Answering machines
A lot of attack scenarios look the same
Step31: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target that can be used to hand packets over to a userland process. As an nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy-based MiTM.
This example intercepts ICMP Echo Request messages sent to 8.8.8.8 with the ping command and modifies their sequence numbers. In order to pass packets to Scapy, the following iptables command puts packets into NFQUEUE #2807
Step32: Automaton
When more logic is needed, Scapy provides a clever abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton and implement specific methods
Step33: Pipes
Pipes are an advanced Scapy feature aimed at sniffing, modifying, and printing packets. The API provides several building blocks. All of them have high entries and exits (>>) as well as low ones (>).
For example, the CLIFeeder is used to send messages from the Python command line to a low exit. It can be combined with the InjectSink that reads messages on its low entry and injects them into the specified network interface. These blocks can be combined as follows
Step34: Packet can be sent using the following command on the prompt | Python Code:
send(IP(dst="1.2.3.4")/TCP(dport=502, options=[("MSS", 0)]))
Explanation: Scapy in 15 minutes (or longer)
Guillaume Valadon & Pierre Lalet
Scapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.
This iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples were built using the development version from https://github.com/secdev/scapy, and tested on Linux. They should work as well on OS X, and other BSD.
The current documentation is available on http://scapy.readthedocs.io/ !
Scapy eases network packets manipulation, and allows you to forge complicated packets to perform advanced tests. As a teaser, let's have a look a two examples that are difficult to express without Scapy:
1_ Sending a TCP segment with maximum segment size set to 0 to a specific port is an interesting test to perform against embedded TCP stacks. It can be achieved with the following one-liner:
End of explanation
ans = sr([IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst="8.8.8.8", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]
ans.make_table(lambda x, y: (", ".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf("%IP.src% %ICMP.type%")))
Explanation: 2_ Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner:
End of explanation
from scapy.all import *
Explanation: Now that we've got your attention, let's start the tutorial !
Quick setup
The easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted at the Scapy prompt. There is no need to install any external Python modules.
```shell
git clone https://github.com/secdev/scapy --depth=1
sudo ./run_scapy
Welcome to Scapy (2.4.0)
```
Note: iPython users must import scapy as follows
End of explanation
packet = IP()/TCP()
Ether()/packet
Explanation: First steps
With Scapy, each network layer is a Python class.
The '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.
End of explanation
>>> ls(IP, verbose=True)
version : BitField (4 bits) = (4)
ihl : BitField (4 bits) = (None)
tos : XByteField = (0)
len : ShortField = (None)
id : ShortField = (1)
flags : FlagsField (3 bits) = (0)
MF, DF, evil
frag : BitField (13 bits) = (0)
ttl : ByteField = (64)
proto : ByteEnumField = (0)
chksum : XShortField = (None)
src : SourceIPField (Emph) = (None)
dst : DestIPField (Emph) = (None)
options : PacketListField = ([])
Explanation: This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.
Protocol fields can be listed using the ls() function:
End of explanation
p = Ether()/IP(dst="www.secdev.org")/TCP()
p.summary()
Explanation: Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.
Scapy packets are objects with some useful methods, such as summary().
End of explanation
print(p.dst) # first layer that has a dst field, here Ether
print(p[IP].src) # explicitly access the src field of the IP layer
# sprintf() is a useful method to display fields
print(p.sprintf("%Ether.src% > %Ether.dst%\n%IP.src% > %IP.dst%"))
Explanation: There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !
Using internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set fields necessary to send the packet. These fields can of course be accessed and displayed.
End of explanation
print(p.sprintf("%TCP.flags% %TCP.dport%"))
Explanation: Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.
End of explanation
[p for p in IP(ttl=(1,5))/ICMP()]
Explanation: Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.
End of explanation
p = sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR()))
p[DNS].an
Explanation: Sending and receiving
Currently, you know how to build packets with Scapy. The next step is to send them over the network !
The sr1() function sends a packet and returns the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets send() is your friend.
As an example, we can use the DNS protocol to get www.example.com IPv4 address.
End of explanation
r, u = srp(Ether()/IP(dst="8.8.8.8", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname="www.example.com")))
r, u
Explanation: Another alternative is the sr() function. Like srp1(), the srp() function can be used for layer two packets.
End of explanation
# Access the first tuple
print(r[0][0].summary()) # the packet sent
print(r[0][1].summary()) # the answer received
# Access the ICMP layer. Scapy received a time-exceeded error message
r[0][1][ICMP]
Explanation: sr() sends a list of packets and returns two variables, here r and u, where:
1. r is a list of results (i.e tuples of the packet sent and its answer)
2. u is a list of unanswered packets
End of explanation
wrpcap("scapy.pcap", r)
pcap_p = rdpcap("scapy.pcap")
pcap_p[0]
Explanation: With Scapy, lists of packets, such as r or u, can easily be written to, or read from, PCAP files.
End of explanation
s = sniff(count=2)
s
Explanation: Sniffing the network is as straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.
End of explanation
sniff(count=2, prn=lambda p: p.summary())
Explanation: sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.
End of explanation
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create a UDP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/UDP
# Create the StreamSocket and give the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNS
# Send the DNS query
ssck.sr1(DNS(rd=1, qd=DNSQR(qname="www.example.com")))
Explanation: Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns a UDP socket to a Scapy StreamSocket, which is then used to query the IPv4 address of www.example.com.
Unlike other Scapy sockets, StreamSockets do not require root privileges.
End of explanation
ans, unans = srloop(IP(dst=["8.8.8.8", "8.8.4.4"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)
Explanation: Visualization
Parts of the following examples require the matplotlib module.
With srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.
End of explanation
%matplotlib inline
ans.multiplot(lambda x, y: (y[IP].src, (y.time, y[IP].id)), plot_xy=True)
Explanation: Then we can use the results to plot the IP id values.
End of explanation
pkt = IP() / UDP() / DNS(qd=DNSQR())
print(repr(raw(pkt)))
Explanation: The raw() constructor can be used to "build" the packet's bytes as they would be sent on the wire.
End of explanation
print(pkt.summary())
Explanation: Since some people cannot read this representation, Scapy can:
- give a summary for a packet
End of explanation
hexdump(pkt)
Explanation: "hexdump" the packet's bytes
End of explanation
pkt.show()
Explanation: dump the packet, layer by layer, with the values for each field
End of explanation
pkt.canvas_dump()
Explanation: render a pretty and handy dissection of the packet
End of explanation
ans, unans = traceroute('www.secdev.org', maxttl=15)
Explanation: Scapy has a traceroute() function, which basically runs sr(IP(ttl=(1,30))/TCP()) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().
End of explanation
ans.world_trace()
Explanation: The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)
End of explanation
ans = sr(IP(dst=["scanme.nmap.org", "nmap.org"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]
ans.extend(sr(IP(dst=["scanme.nmap.org", "nmap.org"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])
ans.make_table(lambda x, y: (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))
Explanation: The PacketList.make_table() function can be very helpful. Here is a simple "port scanner":
End of explanation
class DNSTCP(Packet):
name = "DNS over TCP"
fields_desc = [ FieldLenField("len", None, fmt="!H", length_of="dns"),
PacketLenField("dns", 0, DNS, length_from=lambda p: p.len)]
# This method tells Scapy that the next packet must be decoded with DNSTCP
def guess_payload_class(self, payload):
return DNSTCP
Explanation: Implementing a new protocol
Scapy can be easily extended to support new protocols.
The following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two fields: the length, and the real DNS message. The length_of and length_from arguments link the len and dns fields together. Scapy will be able to automatically compute the len value.
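The length-prefix framing that the len field implements can also be sketched in plain Python with the standard struct module — a minimal illustration independent of Scapy, with hypothetical helper names:

```python
import struct

def frame_dns_over_tcp(dns_msg: bytes) -> bytes:
    # DNS over TCP (RFC 1035, section 4.2.2): the message is prefixed
    # with its length as a two-byte big-endian integer
    return struct.pack("!H", len(dns_msg)) + dns_msg

def unframe_dns_over_tcp(data: bytes) -> bytes:
    # read the two-byte length, then slice out exactly that many bytes
    (length,) = struct.unpack("!H", data[:2])
    return data[2:2 + length]
```

This is exactly the computation Scapy performs for you through the FieldLenField / PacketLenField pair.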
End of explanation
# Build then decode a DNS message over TCP
DNSTCP(raw(DNSTCP(dns=DNS())))
Explanation: This new packet definition can be directly used to build a DNS message over TCP.
End of explanation
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create a TCP socket
sck.connect(("8.8.8.8", 53)) # connect to 8.8.8.8 on 53/TCP
# Create the StreamSocket and give the class used to decode the answer
ssck = StreamSocket(sck)
ssck.basecls = DNSTCP
# Send the DNS query
ssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname="www.example.com"))))
Explanation: Modifying the previous StreamSocket example to use TCP makes it easy to use the new DNSTCP layer.
End of explanation
from scapy.all import *
import argparse
parser = argparse.ArgumentParser(description="A simple ping6")
parser.add_argument("ipv6_address", help="An IPv6 address")
args = parser.parse_args()
print(sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary())
Explanation: Scapy as a module
So far, Scapy was only used from the command line. It is also a Python module that can be used to build specific network tools, such as ping6.py:
End of explanation
# Specify the Wi-Fi monitor interface
#conf.iface = "mon0" # uncomment to test
# Create an answering machine
class ProbeRequest_am(AnsweringMachine):
function_name = "pram"
# The fake mac of the fake access point
mac = "00:11:22:33:44:55"
def is_request(self, pkt):
return Dot11ProbeReq in pkt
def make_reply(self, req):
rep = RadioTap()
# Note: depending on your Wi-Fi card, you might need a different header than RadioTap()
rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())
rep /= Dot11ProbeResp(cap="ESS", timestamp=time.time())
rep /= Dot11Elt(ID="SSID",info="Scapy !")
rep /= Dot11Elt(ID="Rates",info=b'\x82\x84\x0b\x16\x96')
rep /= Dot11Elt(ID="DSset",info=chr(10))
return rep
# Start the answering machine
#ProbeRequest_am()() # uncomment to test
Explanation: Answering machines
A lot of attack scenarios look the same: you want to wait for a specific packet, then send an answer to trigger the attack.
To this extent, Scapy provides the AnsweringMachine object. Two methods are especially useful:
1. is_request(): return True if the pkt is the expected request
2. make_reply(): return the packet that must be sent
The following example uses Scapy Wi-Fi capabilities to pretend that a "Scapy !" access point exists.
Note: your Wi-Fi interface must be set to monitor mode !
End of explanation
from scapy.all import *
import nfqueue, socket
def scapy_cb(i, payload):
s = payload.get_data() # get and parse the packet
p = IP(s)
# Check if the packet is an ICMP Echo Request to 8.8.8.8
if p.dst == "8.8.8.8" and ICMP in p:
# Delete checksums to force Scapy to compute them
del(p[IP].chksum, p[ICMP].chksum)
# Set the ICMP sequence number to 0
p[ICMP].seq = 0
# Let the modified packet go through
ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, raw(p), len(p))
else:
# Accept all packets
payload.set_verdict(nfqueue.NF_ACCEPT)
# Get an NFQUEUE handler
q = nfqueue.queue()
# Set the function that will be call on each received packet
q.set_callback(scapy_cb)
# Open the queue & start parsing packets
q.fast_open(2807, socket.AF_INET)
q.try_run()
Explanation: Cheap Man-in-the-middle with NFQUEUE
NFQUEUE is an iptables target that can be used to hand packets over to a userland process. As an nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy-based MiTM.
This example intercepts ICMP Echo Request messages sent to 8.8.8.8 with the ping command and modifies their sequence numbers. In order to pass packets to Scapy, the following iptables command puts packets into NFQUEUE #2807:
$ sudo iptables -I OUTPUT --destination 8.8.8.8 -p icmp -o eth0 -j NFQUEUE --queue-num 2807
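The checksum that Scapy recomputes after del(p[IP].chksum, p[ICMP].chksum) follows the standard RFC 1071 ones' complement sum. A pure-Python sketch of that computation (hypothetical helper name, not part of Scapy's API):

```python
def ip_checksum(header: bytes) -> int:
    # ones' complement sum of 16-bit big-endian words,
    # computed with the checksum field zeroed out
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    # fold any carries back into the low 16 bits
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Deleting the chksum fields works because Scapy fills any field left at None when the packet is built.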
End of explanation
class TCPScanner(Automaton):
@ATMT.state(initial=1)
def BEGIN(self):
pass
@ATMT.state()
def SYN(self):
print("-> SYN")
@ATMT.state()
def SYN_ACK(self):
print("<- SYN/ACK")
raise self.END()
@ATMT.state()
def RST(self):
print("<- RST")
raise self.END()
@ATMT.state()
def ERROR(self):
print("!! ERROR")
raise self.END()
@ATMT.state(final=1)
def END(self):
pass
@ATMT.condition(BEGIN)
def condition_BEGIN(self):
raise self.SYN()
@ATMT.condition(SYN)
def condition_SYN(self):
if random.randint(0, 1):
raise self.SYN_ACK()
else:
raise self.RST()
@ATMT.timeout(SYN, 1)
def timeout_SYN(self):
raise self.ERROR()
TCPScanner().run()
Explanation: Automaton
When more logic is needed, Scapy provides a clever abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton and implement specific methods:
- states: using the @ATMT.state decorator. They usually do nothing
- conditions: using the @ATMT.condition and @ATMT.receive_condition decorators. They describe how to go from one state to another
- actions: using the @ATMT.action decorator. They describe what to do, like sending a packet back, when changing state
The following example does nothing more than trying to mimic a TCP scanner:
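Stripped of Scapy's decorators, the control flow of this automaton is just a small state machine. A plain-Python sketch of the same BEGIN → SYN → (SYN_ACK | RST) → END logic (hypothetical class name):

```python
import random

class TinyScanner:
    # mirrors the TCPScanner states without Scapy's Automaton machinery
    def run(self):
        state = "BEGIN"
        trace = [state]
        while state != "END":
            if state == "BEGIN":
                state = "SYN"
            elif state == "SYN":
                # the condition randomly picks one of the two outcomes,
                # just like condition_SYN above
                state = "SYN_ACK" if random.randint(0, 1) else "RST"
            else:
                # SYN_ACK and RST both terminate
                state = "END"
            trace.append(state)
        return trace
```

Scapy's Automaton adds timeouts, packet-driven transitions, and I/O on top of this core idea.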
End of explanation
# Instantiate the blocks
clf = CLIFeeder()
ijs = InjectSink("enx3495db043a28")
# Plug blocks together
clf > ijs
# Create and start the engine
pe = PipeEngine(clf)
pe.start()
Explanation: Pipes
Pipes are an advanced Scapy feature aimed at sniffing, modifying, and printing packets. The API provides several building blocks. All of them have high entries and exits (>>) as well as low ones (>).
For example, the CLIFeeder is used to send messages from the Python command line to a low exit. It can be combined with the InjectSink that reads messages on its low entry and injects them into the specified network interface. These blocks can be combined as follows:
End of explanation
clf.send("Hello Scapy !")
Explanation: Packet can be sent using the following command on the prompt:
End of explanation |
8,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science
Data science problem-solving workflow stages
Define the problem
Acquire training and test data
Wrangle, prepare, and cleanse the data
Analyze the data
Model, predict, and solve the problem
Visualize and report the problem-solving steps
Submit the results to Kaggle
Seven major goals of the workflow
The data science workflow addresses seven major tasks:
1. Classifying
1. Correlating: discover how features correlate with the outcome, or with each other
1. Converting: depending on the model, features may need to be converted to numeric values during the modeling stage
1. Completing: data preparation requires estimating the impact of a feature's missing values and filling them in
1. Correcting: assess each feature's effect on the outcome; a feature with no effect can be dropped
1. Creating: new and more informative features can be derived from existing ones
1. Charting: select the right visualization tools depending on the nature of the data and the solution goal
Best practices
Perform feature-correlation analysis early in the project
Use pivot/summary tables frequently to improve readability
Code
Step1: Acquire the data
Step2: Analyze the data
Which features are available in the dataset?
Step3: Which features are categorical?
Categorical features include nominal, ordinal, ratio, and interval values. Here the categorical features are Survived, Sex, and Embarked; Pclass is ordinal.
Which features are numerical?
Numerical features can be discrete, continuous, or timeseries based.
Continuous examples are Age and Fare; discrete ones are SibSp and Parch.
Step4: Which features are mixed data types?
A feature containing both digits and letters within the same column is mixed-type; these are candidates for correcting.
Ticket is a mix of numeric and alphanumeric values; Cabin is alphanumeric.
Which features may contain errors or typos?
Reviewing a large dataset is hard, but inspecting a small sample usually shows which features need correcting.
The Name feature is fairly free-form and may contain titles, parentheses, and so on.
Step5: Which features contain blank, null, or empty values?
These features will need correcting.
What are the data types of the features?
Check the data type of each feature.
Step6: What is the distribution of numerical feature values across the samples?
This helps us derive the distribution of the data from the training set early on.
For example:
The total sample is 891, about 40% of the actual number of passengers aboard.
Survived is a categorical feature with 0 or 1 values.
...
train_df.describe()
Distribution of categorical features
Name is unique across the dataset (no duplicates).
Sex has two possible values, with males at about 65%.
Cabin has duplicate values.
Step7: Form assumptions based on the data analysis
Correlating
Early in the project, determine which features correlate most strongly with the classification outcome.
Completing
Missing values in some key features will need to be handled.
Correcting (feature selection)
The Ticket feature has a high ratio of duplicates and may have no direct relation to the outcome, so it is dropped.
Cabin is dropped because it is highly incomplete and contains too many nulls.
PassengerId makes no direct contribution to the outcome, so it is dropped as well.
Name has no standard format and contributes little to the outcome, so it is dropped.
Creating
We may derive a new Family feature from Parch and SibSp.
Extract a new feature from the Name feature.
Create banded (range) features for Age and Fare to aid the analysis.
Classifying
We add new assumptions based on the features:
Women, children, and first-class passengers were more likely to survive.
Verify assumptions and observations by pivoting features
To confirm our observations and assumptions, we can pivot features against each other and analyze their correlation with the classification outcome. At this stage we can do this for features without empty values, and for categorical, ordinal, and discrete features.
1. We observe a significant correlation between Pclass=1 and Survived, so feature engineering keeps Pclass in the final model.
1. Females clearly have a higher survival rate, so Sex goes into the model.
1. SibSp and Parch show no clear correlation with the outcome; it is best to derive a new feature from these two fields.
Step8: Analyze by visualizing data
Use data visualization to verify our assumptions.
1. Correlating numerical features
For range-valued data such as Age, a histogram shows the distribution well.
1. Observations from the histogram
Most children under 4 survived, the oldest passengers (Age 80) survived, the 15-25 band had the most deaths, and most passengers were aged 15-35.
1. Decisions
From this simple chart analysis: Age should be in the model, and its missing values should be completed.
Step9: Correlating numerical and ordinal features
Observations
Pclass=3 had the most passengers, and most of them did not survive.
Most children in Pclass=2 and Pclass=3 survived.
Most passengers in Pclass=1 survived.
Decisions
Add Pclass to the model.
Step10: Correlating categorical features
Observations
Female passengers had a much better survival rate.
Males in Pclass=3 had a higher survival rate than in the other classes.
Decisions
Add the Sex feature to the model.
Complete the Embarked feature.
Step11: Correlating categorical and numerical features
We can also correlate categorical and numerical features.
Observations
Passengers who paid higher fares had a better survival rate.
Decisions
Consider banding the Fare feature.
Step12: Wrangle data
So far we have collected many assumptions and conclusions, but we have not yet changed a single feature value.
Correcting by dropping features
Dropping useless features means fewer data points to handle, which speeds up the notebook and the analysis. Based on our assumptions we drop the Cabin and Ticket features.
Note that drops must be applied to both the training and test sets to keep them consistent.
Step13: Creating new features from existing ones
Titles can be extracted from the Name feature by processing it with a regular expression.
Observations
Decisions
Step14: Converting categorical features
Now we can convert features that contain strings to numerical values, which most model algorithms require.
Step15: Completing a numerical continuous feature
Now we handle features with missing or null values, starting with Age; three methods are considered.
Step16: Creating a new feature by combining existing ones
Combine Parch and SibSp into a new feature.
Step17: Completing a categorical feature
The Embarked feature takes the values S, Q, and C; the training set has two missing values, which we simply need to fill in.
Step18: Converting a categorical feature to numeric
Step19: Quickly completing and converting a feature
Step20: Converting the banded feature to ordinal values
Step21: Model, predict, and evaluate
Modeling approach
We can now train models, make predictions, and evaluate the results. There are 60+ machine learning models; we should first understand the nature of the problem to narrow down the choice of algorithms. This is a binary classification (or regression) problem, so candidate algorithms include the ones listed below.
Step22: Logistic Regression
This algorithm is very useful early in the workflow.
Step23: We can use Logistic Regression to validate our assumptions and conclusions by inspecting the feature coefficients.
Positive coefficients increase the probability; negative coefficients decrease it.
1. Sex has the highest positive coefficient, meaning that as the Sex value increases, the survival probability increases the most.
1. As Pclass increases, the probability of Survived = 1 decreases.
1. Age*Class is a good artificial feature because it has the second-highest negative correlation.
1. Title has the second-highest positive correlation.
Step24: SVM
The next model is SVM, another supervised learning model.
Step25: KNN
The next model is KNN; in pattern recognition, k-nearest neighbors is a non-parametric method used for classification and regression.
KNN scores better than Logistic Regression but worse than SVM.
Step26: Naive Bayes
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers that apply Bayes' theorem; they are highly scalable.
The naive Bayes score is the worst so far.
Step27: Perceptron
The perceptron is a supervised binary classifier, a type of linear classifier; it supports online learning, i.e. it is an online algorithm.
Step28: Decision Tree
The decision tree achieves the best score so far.
Step29: Random Forest
Random forest is one of the most popular classifiers and also scores best here, so we select it.
Step30: Model evaluation
Now we rank the results of all the models. Decision tree and random forest score the same, but we choose random forest because decision trees tend to overfit. | Python Code:
# data analysis and preprocessing
import pandas as pd
import numpy as np
import random as rnd
# data visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning algorithms
from sklearn.linear_model import LogisticRegression # logistic regression
from sklearn.linear_model import Perceptron # perceptron
from sklearn.linear_model import SGDClassifier # stochastic gradient descent
from sklearn.svm import SVC,LinearSVC # SVM
from sklearn.ensemble import RandomForestClassifier # random forest
from sklearn.neighbors import KNeighborsClassifier # KNN classifier
from sklearn.naive_bayes import GaussianNB # naive Bayes
from sklearn.tree import DecisionTreeClassifier # decision tree
Explanation: Data Science
Data science problem-solving workflow stages
Define the problem
Acquire training and test data
Wrangle, prepare, and cleanse the data
Analyze the data
Model, predict, and solve the problem
Visualize and report the problem-solving steps
Submit the results to Kaggle
Seven major goals of the workflow
The data science workflow addresses seven major tasks:
1. Classifying
1. Correlating: discover how features correlate with the outcome, or with each other
1. Converting: depending on the model, features may need to be converted to numeric values during the modeling stage
1. Completing: data preparation requires estimating the impact of a feature's missing values and filling them in
1. Correcting: assess each feature's effect on the outcome; a feature with no effect can be dropped
1. Creating: new and more informative features can be derived from existing ones
1. Charting: select the right visualization tools depending on the nature of the data and the solution goal
Best practices
Perform feature-correlation analysis early in the project
Use pivot/summary tables frequently to improve readability
Code
End of explanation
train_df = pd.read_csv('./input/titanic/train.csv')
test_df = pd.read_csv('./input/titanic/test.csv')
combine = [train_df,test_df]
Explanation: Acquire the data
End of explanation
print(train_df.columns.values)
Explanation: Analyze the data
Which features are available in the dataset?
End of explanation
# preview the data
train_df.head()
Explanation: Which features are categorical?
Categorical features include nominal, ordinal, ratio, and interval values. Here the categorical features are Survived, Sex, and Embarked; Pclass is ordinal.
Which features are numerical?
Numerical features can be discrete, continuous, or timeseries based.
Continuous examples are Age and Fare; discrete ones are SibSp and Parch.
End of explanation
train_df.tail()
Explanation: Which features are mixed data types?
A feature containing both digits and letters within the same column is mixed-type; these are candidates for correcting.
Ticket is a mix of numeric and alphanumeric values; Cabin is alphanumeric.
Which features may contain errors or typos?
Reviewing a large dataset is hard, but inspecting a small sample usually shows which features need correcting.
The Name feature is fairly free-form and may contain titles, parentheses, and so on.
End of explanation
# inspect the data types of the features
train_df.info()
print('='*40)
test_df.info()
Explanation: Which features contain blank, null, or empty values?
These features will need correcting.
What are the data types of the features?
Check the data type of each feature.
End of explanation
train_df.describe()
Explanation: What is the distribution of numerical feature values across the samples?
This helps us derive the distribution of the data from the training set early on.
For example:
The total sample is 891, about 40% of the actual number of passengers aboard.
Survived is a categorical feature with 0 or 1 values.
...
train_df.describe()
Distribution of categorical features
Name is unique across the dataset (no duplicates).
Sex has two possible values, with males at about 65%.
Cabin has duplicate values.
End of explanation
# correlate Pclass with the outcome; survival is clearly higher for Pclass=1, so a strong correlation
train_df[['Pclass','Survived']].groupby(['Pclass'],as_index=False).mean().sort_values(by='Survived',ascending=False)
# correlate Sex with the outcome
train_df[['Sex','Survived']].groupby(['Sex'],as_index=False).mean().sort_values(by='Sex',ascending=False)
train_df[['SibSp','Survived']].groupby(['SibSp'],as_index=False).mean().sort_values(by='Survived',ascending=False)
train_df[['Parch','Survived']].groupby(['Parch'],as_index=False).mean().sort_values(by='Survived',ascending=False)
Explanation: Form assumptions based on the data analysis
Correlating
Early in the project, determine which features correlate most strongly with the classification outcome.
Completing
Missing values in some key features will need to be handled.
Correcting (feature selection)
The Ticket feature has a high ratio of duplicates and may have no direct relation to the outcome, so it is dropped.
Cabin is dropped because it is highly incomplete and contains too many nulls.
PassengerId makes no direct contribution to the outcome, so it is dropped as well.
Name has no standard format and contributes little to the outcome, so it is dropped.
Creating
We may derive a new Family feature from Parch and SibSp.
Extract a new feature from the Name feature.
Create banded (range) features for Age and Fare to aid the analysis.
Classifying
We add new assumptions based on the features:
Women, children, and first-class passengers were more likely to survive.
Verify assumptions and observations by pivoting features
To confirm our observations and assumptions, we can pivot features against each other and analyze their correlation with the classification outcome. At this stage we can do this for features without empty values, and for categorical, ordinal, and discrete features.
1. We observe a significant correlation between Pclass=1 and Survived, so feature engineering keeps Pclass in the final model.
1. Females clearly have a higher survival rate, so Sex goes into the model.
1. SibSp and Parch show no clear correlation with the outcome; it is best to derive a new feature from these two fields.
End of explanation
g = sns.FacetGrid(train_df,col='Survived')
g.map(plt.hist,'Age',bins=20)
Explanation: Analyze by visualizing data
Use data visualization to verify our assumptions.
1. Correlating numerical features
For range-valued data such as Age, a histogram shows the distribution well.
1. Observations from the histogram
Most children under 4 survived, the oldest passengers (Age 80) survived, the 15-25 band had the most deaths, and most passengers were aged 15-35.
1. Decisions
From this simple chart analysis: Age should be in the model, and its missing values should be completed.
End of explanation
grid = sns.FacetGrid(train_df,col='Survived',row='Pclass',size=2.2,aspect=1.6)
grid.map(plt.hist,'Age',bins=20,alpha=.5)
grid.add_legend()
Explanation: Correlating numerical and ordinal features
Observations
Pclass=3 had the most passengers, and most of them did not survive.
Most children in Pclass=2 and Pclass=3 survived.
Most passengers in Pclass=1 survived.
Decisions
Add Pclass to the model.
End of explanation
grid = sns.FacetGrid(train_df,row='Embarked',size=2.2,aspect=1.6)
grid.map(sns.pointplot,'Pclass','Survived','Sex',palette='deep')
grid.add_legend()
Explanation: Correlating categorical features
Observations
Female passengers had a much better survival rate.
Males in Pclass=3 had a higher survival rate than in the other classes.
Decisions
Add the Sex feature to the model.
Complete the Embarked feature.
End of explanation
grid = sns.FacetGrid(train_df,row='Embarked',col='Survived',size=2.2,aspect=1.6)
grid.map(sns.barplot,'Sex','Fare',alpha=.5,ci=None)
grid.add_legend()
Explanation: Correlating categorical and numerical features
We can also correlate categorical and numerical features.
Observations
Passengers who paid higher fares had a better survival rate.
Decisions
Consider banding the Fare feature.
End of explanation
print('before',train_df.shape,test_df.shape,combine[0].shape,combine[1].shape)
train_df = train_df.drop(['Cabin','Ticket'],axis=1)
test_df = test_df.drop(['Cabin','Ticket'],axis=1)
combine = [train_df,test_df]
print('After',train_df.shape,test_df.shape,combine[0].shape,combine[1].shape)
Explanation: Wrangle data
So far we have collected many assumptions and conclusions, but we have not yet changed a single feature value.
Correcting by dropping features
Dropping useless features means fewer data points to handle, which speeds up the notebook and the analysis. Based on our assumptions we drop the Cabin and Ticket features.
Note that drops must be applied to both the training and test sets to keep them consistent.
End of explanation
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract('([A-Za-z]+)\.',expand=False)
combine[0].head()
pd.crosstab(train_df['Title'],train_df['Sex'])
# replace most titles with a more common name, or classify them as Rare
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady','Countess','Capt','Col','Don','Dr','Major','Rev','Sir','Jonkheer','Dona'],'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle','Miss')
dataset['Title'] = dataset['Title'].replace('Ms','Miss')
dataset['Title'] = dataset['Title'].replace('Mme','Mrs')
train_df[['Title','Survived']].groupby(['Title'],as_index=False).mean()
# convert the categorical Title values to ordinal numbers
title_mapping = {'Mr':1,'Miss':2,'Mrs':3,'Master':4,'Rare':5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
# now we can safely drop the Name feature; PassengerId is not needed either
train_df = train_df.drop(['Name','PassengerId'],axis=1)
test_df = test_df.drop(['Name'],axis=1)
combine = [train_df,test_df]
train_df.shape,test_df.shape
Explanation: Creating new features from existing ones
Titles can be extracted from the Name feature by processing it with a regular expression.
Observations
Decisions
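The pattern passed to str.extract above can be tried in isolation with the standard re module — a small sketch with a hypothetical helper name:

```python
import re

def extract_title(name: str) -> str:
    # same pattern the notebook uses: a run of letters followed by a dot
    m = re.search(r"([A-Za-z]+)\.", name)
    return m.group(1) if m else ""
```

pandas applies this same regular expression row by row when str.extract is called on the Name column.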
End of explanation
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map({'female':1,'male':0}).astype(int)
train_df.head()
Explanation: Converting categorical features
Now we can convert features that contain strings to numerical values, which most model algorithms require.
End of explanation
grid = sns.FacetGrid(train_df,row='Pclass',col='Sex',size=2.2,aspect=1.6)
grid.map(plt.hist,'Age',alpha=.5,bins=20)
grid.add_legend()
# prepare an empty array to hold the guessed Age values
guess_ages = np.zeros((2,3))
guess_ages
for dataset in combine:
for i in range(0,2):
for j in range(0,3):
guess_df = dataset[(dataset['Sex']==i) & (dataset['Pclass']==j+1)]['Age'].dropna()
age_guess = guess_df.median()
guess_ages[i,j] = int(age_guess/0.5 + 0.5) * 0.5
for i in range(0,2):
for j in range(0,3):
dataset.loc[(dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
# create Age bands and check their correlation with Survived
train_df['AgeBand'] = pd.cut(train_df['Age'],5)
train_df[['AgeBand','Survived']].groupby(['AgeBand'],as_index=False).mean().sort_values(by='Survived',ascending=False)
# replace Age with ordinals based on the bands (note: the upper bound of band 3 is 64, matching pd.cut)
for dataset in combine:
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
# remove the AgeBand feature
train_df = train_df.drop(['AgeBand'],axis=1)
combine = [train_df,test_df]
train_df.head()
Explanation: Completing a numerical continuous feature
Now we handle features with missing or null values, starting with Age. Three methods can be considered:
1. Generate random numbers between the mean and the standard deviation.
2. Guess missing values from correlated features — here, the median Age for each combination of Pclass and Sex (the approach taken in the code above).
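The rounding idiom used in the guessing loop above — int(age_guess/0.5 + 0.5) * 0.5 — snaps a median to the nearest 0.5. A plain-Python sketch of the same computation (hypothetical helper name, stdlib only):

```python
from statistics import median

def guess_age(ages):
    # median of the observed (non-missing) ages,
    # rounded to the nearest 0.5 like the notebook's formula
    known = [a for a in ages if a is not None]
    return int(median(known) / 0.5 + 0.5) * 0.5
```

In the notebook the same value is then broadcast into the rows where Age is null for that Sex/Pclass group.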
End of explanation
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize','Survived']].groupby(['FamilySize'],as_index=False).mean().sort_values(by='Survived',ascending=False)
# create another feature called IsAlone
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1,'IsAlone'] = 1
train_df[['IsAlone','Survived']].groupby(['IsAlone'],as_index=False).mean().sort_values(by='Survived',ascending=False)
# drop Parch, SibSp, and FamilySize in favor of IsAlone
train_df = train_df.drop(['Parch','SibSp','FamilySize'],axis=1)
test_df = test_df.drop(['Parch','SibSp','FamilySize'],axis=1)
combine = [train_df,test_df]
train_df.head()
# combine Age and Pclass into a new artificial feature
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:,['Age*Class','Age','Pclass']].head(10)
Explanation: Creating a new feature by combining existing ones
Combine Parch and SibSp into a new FamilySize feature.
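The derivation above boils down to a one-line rule per passenger — a pure-Python sketch with a hypothetical helper name:

```python
def is_alone(sibsp: int, parch: int) -> int:
    # FamilySize counts the passenger plus siblings/spouses and parents/children
    family_size = sibsp + parch + 1
    return 1 if family_size == 1 else 0
```

The pandas version vectorizes this rule with dataset.loc over the whole DataFrame instead of applying it row by row.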
End of explanation
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked','Survived']].groupby(['Embarked'],as_index=False).mean().sort_values(by='Survived',ascending=False)
Explanation: Completing a categorical feature
The Embarked feature takes the values S, Q, and C; the training set has two missing values, which we simply need to fill in.
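The dropna().mode()[0] idiom used above picks the most frequent non-missing value. The same logic in plain Python, with a hypothetical helper name:

```python
from collections import Counter

def most_frequent(values):
    # most common value, ignoring missing entries (None stands in for NaN here)
    cleaned = [v for v in values if v is not None]
    return Counter(cleaned).most_common(1)[0][0]
```

Imputing with the mode is a reasonable default for a categorical feature with only a couple of missing rows.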
End of explanation
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map({'S':0,'C':1,'Q':2}).astype(int)
train_df.head()
Explanation: Converting a categorical feature to numeric
End of explanation
test_df['Fare'].fillna(test_df['Fare'].dropna().median(),inplace=True)
test_df.head()
# create Fare bands
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
Explanation: Quickly completing and converting a feature
End of explanation
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
test_df.head(10)
Explanation: Converting the banded feature to ordinal values
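The chained .loc assignments above implement a simple threshold lookup; the equivalent rule for a single fare can be sketched in plain Python (hypothetical helper name, same quartile cut points as the FareBand table):

```python
def fare_band(fare: float) -> int:
    # quartile boundaries taken from pd.qcut(train_df['Fare'], 4)
    for band, upper in enumerate((7.91, 14.454, 31.0)):
        if fare <= upper:
            return band
    return 3
```

Converting bands to small ordinals keeps the monotonic fare ordering while removing the skew of the raw values.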
End of explanation
X_train = train_df.drop('Survived',axis=1)
Y_train = train_df['Survived']
X_test = test_df.drop('PassengerId',axis=1).copy()
X_train.shape,Y_train.shape,X_test.shape
Explanation: Model, predict, and evaluate
Modeling approach
We can now train models, make predictions, and evaluate the results. There are 60+ machine learning models; we should first understand the nature of the problem to narrow down the choice of algorithms. This is a binary classification (or regression) problem, so candidate algorithms include:
1. Logistic Regression
1. KNN
1. SVM
1. Naive Bayes
1. Decision Tree
1. Random Forest
1. Perceptron
1. Artificial Neural Network
1. RVM (Relevance Vector Machine)
End of explanation
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train,Y_train)
Y_pred = logreg.predict(X_test)
print(Y_pred)
print('='*10)
print(logreg.score(X_train,Y_train))
acc_log = round(logreg.score(X_train,Y_train)*100,2)
print(acc_log)
Explanation: Logistic Regression
This algorithm is very useful early in the workflow
End of explanation
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns=['Feature']
coeff_df['Correlation'] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation',ascending=False)
Explanation: We can use Logistic Regression to validate our assumptions and decisions, by inspecting the coefficients of the features.
Positive coefficients increase the odds of survival; negative coefficients decrease them.
1. Sex has the highest positive coefficient, meaning an increase in the Sex value raises the survival probability the most
1. Inversely, as Pclass increases, the probability of Survived = 1 decreases the most
1. Age*Class is a good artificial feature, since it has the second highest negative correlation
1. Title has the second highest positive correlation
End of explanation
svc = SVC()
svc.fit(X_train,Y_train)
Y_predict = svc.predict(X_test)
print(Y_predict)
acc_svc = round(svc.score(X_train,Y_train)*100,2)
acc_svc
Explanation: SVM
Next we model using SVM, another supervised learning method
End of explanation
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train,Y_train)
Y_predict = knn.predict(X_test)
print(Y_predict)
acc_knn = round(knn.score(X_train,Y_train)*100,2)
acc_knn
Explanation: KNN
The next model is KNN. In pattern recognition, k-Nearest Neighbors is a non-parametric method used for classification and regression.
KNN scores better than Logistic Regression but worse than SVM
End of explanation
nb = GaussianNB()
nb.fit(X_train,Y_train)
Y_predict = nb.predict(X_test)
print(Y_predict)
acc_nb = round(nb.score(X_train,Y_train)*100,2)
acc_nb
Explanation: Naive Bayes
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem; they are highly scalable.
The naive Bayes score is the worst so far
End of explanation
perceptron = Perceptron()
perceptron.fit(X_train,Y_train)
Y_predict = perceptron.predict(X_test)
print(Y_predict)
acc_perc = round(perceptron.score(X_train,Y_train)*100,2)
acc_perc
# linear_svc
linear_svc = LinearSVC()
linear_svc.fit(X_train,Y_train)
Y_predict = linear_svc.predict(X_test)
print(Y_predict)
acc_lsvc = round(linear_svc.score(X_train,Y_train)*100,2)
acc_lsvc
# SGD
sgd = SGDClassifier()
sgd.fit(X_train,Y_train)
Y_predict= sgd.predict(X_test)
print(Y_predict)
acc_sgd = round(sgd.score(X_train,Y_train)*100,2)
acc_sgd
Explanation: Perceptron
The perceptron is a supervised-learning algorithm for binary classification. It is a type of linear classifier, and it supports online learning, processing training examples one at a time.
End of explanation
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train,Y_train)
Y_predict= decision_tree.predict(X_test)
print(Y_predict)
acc_dt = round(decision_tree.score(X_train,Y_train)*100,2)
acc_dt
Explanation: Decision Tree
The decision tree achieves the best score so far
End of explanation
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train,Y_train)
Y_predict= random_forest.predict(X_test)
print(Y_predict)
acc_rf = round(random_forest.score(X_train,Y_train)*100,2)
acc_rf
Explanation: Random Forest
Random forests are among the most popular classification algorithms, and this model also achieves the best score, so we select random forest
End of explanation
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_rf, acc_nb, acc_perc,
acc_sgd, acc_lsvc, acc_dt]})
models.sort_values(by='Score', ascending=False)
Explanation: Model evaluation
We now rank the models by score. Decision Tree and Random Forest score the same, but we choose Random Forest because it is less prone to overfitting.
End of explanation |
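The Score column above is training-set accuracy, which tends to overstate real performance (the overfitting concern just mentioned). A minimal sketch of k-fold cross-validation with scikit-learn — not part of the original notebook, and using a synthetic dataset as a stand-in for X_train/Y_train:

```python
# Sketch only: cross-validation scores on held-out folds give a fairer
# model ranking than accuracy on the data the model was fitted to.
# make_classification stands in for the Titanic features prepared above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

random_forest = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(random_forest, X, y, cv=5)  # accuracy per fold
print(round(scores.mean(), 3))
```

Ranking models by the mean cross-validation score rather than `score(X_train, Y_train)` makes the Decision Tree vs. Random Forest comparison less sensitive to overfitting.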
8,673 | Given the following text description, write Python code to implement the functionality described.
Description:
Search, insert and delete in a sorted array
function to implement binary search ; mid = low + (high - low) // 2 ; Driver program to check the above function. Let us search 10 in the array below
| Python Code:
def binarySearch(arr , low , high , key ) :
mid =(low + high ) / 2
if(key == arr[int(mid ) ] ) :
return mid
if(key > arr[int(mid ) ] ) :
return binarySearch(arr ,(mid + 1 ) , high , key )
if(key < arr[int(mid ) ] ) :
return binarySearch(arr , low ,(mid - 1 ) , key )
return 0
arr =[5 , 6 , 7 , 8 , 9 , 10 ]
n = len(arr )
key = 10
print("Index : ", int(binarySearch(arr , 0 , n - 1 , key ) ) )
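The description above also mentions insert and delete, but only search is implemented. A minimal sketch of the other two operations on a sorted Python list — the function names `sorted_insert` and `sorted_delete` are my own, not from the original exercise:

```python
def sorted_insert(arr, key):
    """Insert key into the sorted list arr, keeping it sorted."""
    i = len(arr) - 1
    arr.append(key)                  # grow the list by one slot
    while i >= 0 and arr[i] > key:   # shift larger elements one step right
        arr[i + 1] = arr[i]
        i -= 1
    arr[i + 1] = key
    return arr

def sorted_delete(arr, key):
    """Remove the first found occurrence of key from the sorted list arr."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                  # binary search for key
        mid = lo + (hi - lo) // 2
        if arr[mid] == key:
            del arr[mid]             # found: remove it
            return arr
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return arr                       # key not found; list unchanged

print(sorted_insert([5, 6, 8, 9, 10], 7))    # → [5, 6, 7, 8, 9, 10]
print(sorted_delete([5, 6, 7, 8, 9, 10], 8)) # → [5, 6, 7, 9, 10]
```

Both operations are O(n) at worst because deleting or shifting inside a Python list moves the trailing elements, even though the lookup itself is O(log n).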
|
8,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-1', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
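Note that the template quotes STRING and ENUM values (`DOC.set_value("value")`) but not INTEGER or BOOLEAN ones (`DOC.set_value(value)`). A sketch of that distinction for the two boundary layer turbulence properties above, again using a hypothetical stand-in for `DOC` and illustrative (not prescribed) values:

```python
class _DocSketch:
    """Illustrative stand-in for the real DOC object, for this sketch only."""
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, property_id):
        self._current = property_id

    def set_value(self, value):
        self.values[self._current] = value

DOC = _DocSketch()

# INTEGER property: pass a bare number, not a quoted string.
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
DOC.set_value(1)

# BOOLEAN property: pass True or False, not "True"/"False".
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
DOC.set_value(True)
```

Passing `"1"` or `"True"` instead would record a string where the property expects a typed value.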
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
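To make the overlap choices above concrete, here is a small illustrative sketch (not taken from any model's code) of total cloud cover for a column of layer cloud fractions under the "random" and "maximum" assumptions; "maximum-random" applies maximum overlap within vertically contiguous cloudy blocks and random overlap between separated blocks.

```python
def total_cover_random(fractions):
    # Layers overlap independently: C = 1 - prod(1 - f_i)
    clear = 1.0
    for f in fractions:
        clear *= 1.0 - f
    return 1.0 - clear

def total_cover_maximum(fractions):
    # Layers are vertically aligned: C = max(f_i)
    return max(fractions)

layers = [0.2, 0.5, 0.3]
print(total_cover_random(layers))   # about 0.72
print(total_cover_maximum(layers))  # 0.5
```

Random overlap always yields cover at least as large as maximum overlap for the same column, which is why the choice matters for the model's radiation budget.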
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
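For intuition, "Rayleigh friction" denotes a linear drag term du/dt = -u/tau whose time scale tau shortens toward the model top. The following toy sketch (an illustrative assumption, not any particular model's implementation) applies one explicit time step of such a sponge to an upper-level wind profile:

```python
def rayleigh_sponge_step(u, tau, dt):
    # One explicit step of du/dt = -u / tau at each level.
    # A smaller tau (nearer the model top) means stronger damping.
    return [ui * (1.0 - dt / ti) for ui, ti in zip(u, tau)]

u = [10.0, 10.0, 10.0]              # wind (m/s) at three upper levels
tau = [86400.0, 43200.0, 21600.0]   # damping time scale (s), top level last
u_damped = rayleigh_sponge_step(u, tau, dt=3600.0)
# winds are damped progressively more strongly toward the top
```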
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
Description:
MLE with Normal distribution
Step1: Draw normal density
$$f\left(y_{i};\mu,\sigma\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\left(y_{i}-\mu\right)^{2}}{2\sigma^{2}}\right)$$
Step2: Simulate data and draw histogram
Step3: Simulate data and estimate model parameter by MLE
MLE estimator is
$$\begin{eqnarray}
\hat{\mu} & = & \frac{1}{n}\sum_{i=1}^{n}y_{i},\\
\hat{\sigma}^{2} & = & \frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{\mu}\right)^{2}.
\end{eqnarray}$$ | Python Code:
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
np.set_printoptions(precision=4, suppress=True)
sns.set_context('notebook')
%matplotlib inline
Explanation: MLE with Normal distribution
End of explanation
theta = [[0., 1.], [.5, .5], [-.25, 2.]]
f = lambda x, mu, sigma: 1 / np.sqrt(2 * np.pi * sigma ** 2) * np.exp(- .5 * (x - mu) ** 2 / sigma ** 2)
y = np.linspace(-3, 3, 1000)  # num must be an integer; 1e3 is a float and raises TypeError
for t in theta:
mu, sigma = t[0], t[1]
ff = [f(x, mu, sigma) for x in y]
plt.plot(y, ff)
plt.legend(theta)
plt.show()
Explanation: Draw normal density
$$f\left(y_{i};\mu,\sigma\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\left(y_{i}-\mu\right)^{2}}{2\sigma^{2}}\right)$$
End of explanation
n = int(1e2)
mu, sigma = 1.5, .5
# simulate data
y = np.sort(np.random.normal(mu, sigma, n))
# plot data
plt.hist(y, bins=10, density=True, histtype='stepfilled', alpha=.5, lw=0)
plt.plot(y, f(y, mu, sigma), c = 'red', lw = 4)
plt.xlabel('$y_i$')
plt.ylabel('$\hat{f}$')
plt.show()
Explanation: Simulate data and draw histogram
End of explanation
# sample size
n = int(1e2)
# true parameter value
mu, sigma = 1.5, .5
# simulate data
y = np.sort(np.random.normal(mu, sigma, n))
# MLE estimator
mu_hat = np.mean(y)
sigma_hat = np.sqrt( np.mean( (y - mu_hat) ** 2 ) )
print('Estimates are: mu = ', mu_hat, ' sigma = ', sigma_hat)
# function of exponential density
ff = lambda y, mu, sigma: [f(x, mu, sigma) for x in y]
# plot results
plt.hist(y, bins=10, density=True, alpha=.2, lw=0)
plt.plot(y, ff(y, mu, sigma), c='black', lw=4)
plt.plot(y, ff(y, mu_hat, sigma_hat), c='red', lw=4)
plt.xlabel('$y_i$')
plt.ylabel('$\hat{f}$')
plt.legend(('True', 'Fitted','Histogram'))
plt.show()
Explanation: Simulate data and estimate model parameter by MLE
MLE estimator is
$$\begin{eqnarray}
\hat{\mu} & = & \frac{1}{n}\sum_{i=1}^{n}y_{i},\\
\hat{\sigma}^{2} & = & \frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{\mu}\right)^{2}.
\end{eqnarray}$$
End of explanation
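As a final sanity check on the closed-form estimator, here is a minimal sketch using only NumPy: the sample mean and (biased) sample standard deviation should attain a log-likelihood at least as high as any perturbed parameter values, since they are the exact maximizers.

```python
import numpy as np

def norm_loglik(y, mu, sigma):
    # Gaussian log-likelihood summed over the sample
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - 0.5 * (y - mu) ** 2 / sigma ** 2)

rng = np.random.default_rng(0)
y = rng.normal(1.5, 0.5, size=10_000)

mu_hat = y.mean()
sigma_hat = np.sqrt(np.mean((y - mu_hat) ** 2))

best = norm_loglik(y, mu_hat, sigma_hat)
for dmu, dsig in [(0.05, 0.0), (-0.05, 0.0), (0.0, 0.05), (0.0, -0.05)]:
    assert best >= norm_loglik(y, mu_hat + dmu, sigma_hat + dsig)
```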
Description:
Homework 2
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS | Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
conn.rollback()
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
cursor = conn.cursor()
statement = "your query here"
cursor.execute(statement)
for row in cursor:
    print(row[0])
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
cursor = conn.cursor()
statement = "your query here"
cursor.execute(statement)
for row in cursor:
    print(row[0])
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
cursor = conn.cursor()
statement = "your query here"
cursor.execute(statement)
for row in cursor:
    print(row[0], row[1])
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
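If the GROUP BY / HAVING combination is unfamiliar, the pattern can be rehearsed on a toy table before touching the homework database. The sketch below uses Python's built-in sqlite3 module (not PostgreSQL) with an invented uuser_demo table, purely to illustrate that HAVING filters groups after aggregation, whereas WHERE filters rows before it.

```python
import sqlite3

conn_demo = sqlite3.connect(":memory:")
cur = conn_demo.cursor()
cur.execute("CREATE TABLE uuser_demo (user_id INTEGER, occupation TEXT)")
rows = [(i, "student") for i in range(5)] + [(i, "lawyer") for i in range(5, 7)]
cur.executemany("INSERT INTO uuser_demo VALUES (?, ?)", rows)

# HAVING is evaluated against each aggregated group
cur.execute("""
    SELECT occupation, COUNT(*) AS n
    FROM uuser_demo
    GROUP BY occupation
    HAVING COUNT(*) > 2
""")
print(cur.fetchall())  # [('student', 5)] -- the 2 lawyers are filtered out
```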
cursor = conn.cursor()
statement = "your query here"
cursor.execute(statement)
for row in cursor:
    print(row[0])
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
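The JOIN-plus-DISTINCT pattern described in the hints can likewise be tried out in isolation. This is an illustrative sqlite3 sketch with invented uitem_demo/udata_demo tables that mirror the homework schema; the homework itself runs against PostgreSQL.

```python
import sqlite3

conn_demo = sqlite3.connect(":memory:")
cur = conn_demo.cursor()
cur.execute("CREATE TABLE uitem_demo (movie_id INTEGER, movie_title TEXT)")
cur.execute("CREATE TABLE udata_demo (item_id INTEGER, rating INTEGER)")
cur.executemany("INSERT INTO uitem_demo VALUES (?, ?)",
                [(1, "Paris Is Burning (1990)"), (2, "Koyaanisqatsi (1983)")])
# movie 1 is rated 5 twice; DISTINCT keeps its title from appearing twice
cur.executemany("INSERT INTO udata_demo VALUES (?, ?)",
                [(1, 5), (1, 5), (2, 3)])

cur.execute("""
    SELECT DISTINCT(uitem_demo.movie_title)
    FROM udata_demo
    JOIN uitem_demo ON udata_demo.item_id = uitem_demo.movie_id
    WHERE udata_demo.rating = 5
""")
print(cur.fetchall())  # [('Paris Is Burning (1990)',)]
```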
cursor = conn.cursor()
statement = "your query here"
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
cursor = conn.cursor()
statement = "your query here"
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation |
8,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SVN Reports Dashboard
Reporting import/upload pipelines
Import Libraries
Libraries necessary to parse through each dashboard
Step1: Parametrization
Adding parameters for Papermill
Step2: Setting Column Names
Use an existing JSON file
Step3: Set up Table Styles
Use pandas.style to set up some graphical styles for the main reports table.
Step4: Define Functions
Define functions to parse through each of the sites and print them to the screen
Step5: Run all
Run functions and produce dashboards for each site | Python Code:
import datetime
from IPython.display import display, Markdown, Latex
import json
import numpy as np
import pandas as pd
Explanation: SVN Reports Dashboard
Reporting import/upload pipelines
Import Libraries
Libraries necessary to parse through each dashboard
End of explanation
site: str
Explanation: Parametrization
Adding parameters for Papermill
End of explanation
def get_special_columns(file_path):
    # use a context manager so the file handle is closed after reading
    with open(file_path, "r") as f:
        data = json.load(f)
    return data[0]
new_column_names = get_special_columns("/fs/ncanda-share/beta/chris/ncanda-data-integration/scripts/dashboards/reference/svn_col_names.json")
Explanation: Setting Column Names
Use an existing JSON file
End of explanation
table_styles = [{
'selector': 'td',
'props': [('background-color', '#FFFFFF'), ('color', '#000000')]
},
{
'selector': 'th',
'props': [('background-color', '#000000'), ('color', '#FFFFFF')]
}]
def style_row(x):
    # For the sla_percentage: row == 3
    style = [None] * x.size
    sample_series = pd.DataFrame(data=x).iloc[3]
    for index, value in sample_series.items():
        if (value > 1):
            style[3] = "color: red !important;"
        if (value > 10):
            style[3] = style[3] + " font-weight: bold;"
    return style
Explanation: Set up Table Styles
Use pandas.style to set up some graphical styles for the main reports table.
End of explanation
# In future iterations, display path is: '../../../../../log/status_reports/'
def load_dataframe_and_display(file_name=None):
    # Header and load in dataframe
    display(Markdown('## SLA ' + file_name + ' Dashboard'))
    df = pd.read_csv('/fs/ncanda-share/log/status_reports/sla_files/' + file_name.lower() + '.csv', parse_dates=['date_updated'])
    durations = df['sla'].unique().tolist()
    durations = np.sort(durations)
    # Categorize each report by SLA (3, 30, 3000 -> Dead)
    for duration in durations:
        # Find the data subset for this SLA duration
        duration_df = df.loc[df['sla'] == duration]
        duration_df = duration_df.rename(columns={'laptop': 'Laptop'}, errors="raise")
        # Create header
        duration_str = str(duration) + '-Day'
        if (duration == 3000):
            duration_str = 'Dead'
        display(Markdown(duration_str + ' SLA Laptop Report'))
        # Set index and update date format (latter is hacky -> can fix)
        duration_df = duration_df.set_index(['Laptop'])
        if (duration == 3):
            duration_df['date_updated'] = duration_df['date_updated'].dt.strftime('%Y-%m-%d %H:%M')
            duration_df['time_diff'] = duration_df['time_diff'].apply(lambda x: str(x)[:str(x).find(".") - 3])
        else:
            duration_df['date_updated'] = duration_df['date_updated'].dt.strftime('%Y-%m-%d')
            duration_df['time_diff'] = duration_df['time_diff'].apply(lambda x: str(x)[:str(x).find("days") + len("days")])
        # Rename columns, style, and display
        duration_df = duration_df.rename(columns=new_column_names)
        se = duration_df.style.set_table_styles(table_styles).apply(lambda x: style_row(x), axis=1)
        display(se)
def main():
    load_dataframe_and_display(site)
Explanation: Define Functions
Define functions to parse through each of the sites and print them to the screen
End of explanation
main()
Explanation: Run all
Run functions and produce dashboards for each site
End of explanation |
8,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Unicode strings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: The tf.string data type
The standard TensorFlow tf.string dtype produces tensors of byte strings. Unicode strings are utf-8 encoded by default.
Step3: Because byte strings are treated as the atomic unit, tensors of dtype tf.string can hold byte strings of varying lengths. The string length is not included in the tensor dimensions.
Step4: Note
Step5: Converting between Unicode representations
TensorFlow provides operations to convert between these different Unicode representations:
tf.strings.unicode_decode: converts an encoded string scalar to a vector of code points.
tf.strings.unicode_encode: converts a vector of code points to an encoded string scalar.
tf.strings.unicode_transcode: re-encodes an encoded string scalar into a different encoding.
Step6: Batch dimensions
When decoding multiple strings, the number of characters in each string may not be equal. The result is a tf.RaggedTensor, where the length of the innermost dimension varies with the number of characters in each string.
Step7: You can use this tf.RaggedTensor directly, convert it to a padded dense tf.Tensor with the tf.RaggedTensor.to_tensor method, or convert it to a tf.SparseTensor with the tf.RaggedTensor.to_sparse method.
Step8: When encoding multiple strings of the same length, a tf.Tensor can be used as input.
Step9: When encoding multiple strings of varying length, a tf.RaggedTensor must be used as input.
Step10: If you have a tensor with multiple strings in padded or sparse format, convert it to a tf.RaggedTensor before calling unicode_encode.
Step11: Unicode operations
Character length
tf.strings.length accepts a unit parameter that indicates how lengths should be computed. unit defaults to "BYTE", but it can be set to other values, such as "UTF8_CHAR" or "UTF16_CHAR", to determine the number of Unicode code points in each encoded string.
Step12: Character substrings
Similarly, tf.strings.substr accepts the "unit" parameter, and uses it to determine what kind of offsets the "pos" and "len" parameters contain.
Step13: Splitting Unicode strings
tf.strings.unicode_split splits Unicode strings into substrings of individual characters.
Step14: Byte offsets for characters
To align the character tensor generated by tf.strings.unicode_decode with the original string, it is useful to know the offset at which each character begins. The method tf.strings.unicode_decode_with_offsets is similar to unicode_decode, except that it returns a second tensor containing the start offset of each character.
Step15: Unicode scripts
Each Unicode code point belongs to a single collection of code points known as a script. A character's script is helpful in determining which language the character might belong to. For example, knowing that 'Б' is a Cyrillic character indicates that text containing that character is likely from a Slavic language such as Russian or Ukrainian.
TensorFlow provides tf.strings.unicode_script, which returns the script of a given code point. The script codes are int32 values corresponding to International Components for Unicode (ICU) UScriptCode values.
Step16: tf.strings.unicode_script can also be applied to multidimensional tf.Tensors or tf.RaggedTensors of code points.
Step17: Example: Simple segmentation
Segmentation is the task of splitting text into word-like units. This is often easy when space characters are used to separate words, but some languages (like Chinese and Japanese) do not use spaces, and some languages (like German) contain long compound words that must be split in order to analyze their meaning. In web text, different languages and scripts are frequently mixed together, as in "NY株価" (New York stock prices).
We can perform a very rough segmentation (without implementing any ML models) by using changes in script to approximate word boundaries. This works for strings like the "NY株価" example above. It also works for most languages that use spaces, since the space characters of various languages are all classified as USCRIPT_COMMON, a special script code that differs from that of any actual text.
Step18: First, decode the sentences into character code points, and then find the script code for each character.
Step19: Next, use those script codes to determine where word boundaries should be added. Add a word boundary at the beginning of each character whose script code differs from that of the previous character.
Step20: Then, use those start offsets to build a RaggedTensor containing the list of characters for each word.
Step21: Finally, segment the code-point RaggedTensor back into sentences.
Step22: To make the final result easier to read, encode it back into UTF-8 strings.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
Explanation: Unicode strings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/unicode"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: These documents were translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest state of the official English documentation. If you have suggestions for improving this translation, please send a pull request to the tensorflow/docs GitHub repository. To volunteer to write or review community translations, contact the docs-ja@tensorflow.org mailing list.
Introduction
Models that process natural language often handle different languages with different character sets. Unicode is a standard encoding system that is used to represent characters from almost all languages. Each character is encoded using a unique integer code point between 0 and 0x10FFFF. A Unicode string is a sequence of zero or more code points.
This tutorial shows how to represent Unicode strings in TensorFlow and how to manipulate them using Unicode equivalents of standard string operations. It also separates Unicode strings into tokens based on script detection.
End of explanation
tf.constant(u"Thanks 😊")
Explanation: The tf.string data type
The standard TensorFlow tf.string dtype produces tensors of byte strings. Unicode strings are utf-8 encoded by default.
End of explanation
tf.constant([u"You're", u"welcome!"]).shape
Explanation: Because byte strings are treated as the atomic unit, tensors of dtype tf.string can hold byte strings of varying lengths. The string length is not included in the tensor dimensions.
End of explanation
# A Unicode string, represented as a UTF-8 encoded string scalar
text_utf8 = tf.constant(u"语言处理")
text_utf8
# A Unicode string, represented as a UTF-16-BE encoded string scalar
text_utf16be = tf.constant(u"语言处理".encode("UTF-16-BE"))
text_utf16be
# A Unicode string, represented as a vector of Unicode code points
text_chars = tf.constant([ord(char) for char in u"语言处理"])
text_chars
Explanation: Note: When constructing strings in Python, Unicode handling differs between the 2.x and 3.x series. In 2.x, Unicode strings are marked explicitly with the "u" prefix, as above. In 3.x, strings are encoded as Unicode by default.
Unicode representations
There are two standard ways to represent a Unicode string in TensorFlow:
string scalar — the sequence of code points is encoded using a known character encoding
int32 vector — each position contains a single code point
For example, the following three values all represent the Unicode string "语言处理" (which means "language processing" in Chinese).
End of explanation
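Outside TensorFlow, the same three representations can be produced with plain Python, which is a handy way to check what the tensors above should contain (this snippet is for illustration only):

```python
s = u"语言处理"

utf8_bytes = s.encode("utf-8")          # encoded string scalar (UTF-8)
utf16be_bytes = s.encode("utf-16-be")   # encoded string scalar (UTF-16-BE)
codepoints = [ord(ch) for ch in s]      # vector of Unicode code points

print(utf8_bytes)
print(utf16be_bytes)
print(codepoints)  # [35821, 35328, 22788, 29702]
```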
tf.strings.unicode_decode(text_utf8,
input_encoding='UTF-8')
tf.strings.unicode_encode(text_chars,
output_encoding='UTF-8')
tf.strings.unicode_transcode(text_utf8,
input_encoding='UTF8',
output_encoding='UTF-16-BE')
Explanation: Converting between Unicode representations
TensorFlow provides operations to convert between these different Unicode representations:
tf.strings.unicode_decode: converts an encoded string scalar to a vector of code points.
tf.strings.unicode_encode: converts a vector of code points to an encoded string scalar.
tf.strings.unicode_transcode: re-encodes an encoded string scalar into a different encoding.
End of explanation
# A batch of Unicode strings, each represented as a UTF-8 encoded string
batch_utf8 = [s.encode('UTF-8') for s in
[u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']]
batch_chars_ragged = tf.strings.unicode_decode(batch_utf8,
input_encoding='UTF-8')
for sentence_chars in batch_chars_ragged.to_list():
    print(sentence_chars)
Explanation: Batch dimensions
When decoding multiple strings, the number of characters in each string may not be equal. The result is a tf.RaggedTensor, where the length of the innermost dimension varies with the number of characters in each string.
End of explanation
batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)
print(batch_chars_padded.numpy())
batch_chars_sparse = batch_chars_ragged.to_sparse()
Explanation: You can use this tf.RaggedTensor directly, convert it to a padded dense tf.Tensor with the tf.RaggedTensor.to_tensor method, or convert it to a tf.SparseTensor with the tf.RaggedTensor.to_sparse method.
End of explanation
tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],
output_encoding='UTF-8')
Explanation: When encoding multiple strings of the same length, a tf.Tensor can be used as input.
End of explanation
tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')
Explanation: When encoding multiple strings of varying length, a tf.RaggedTensor must be used as input.
End of explanation
tf.strings.unicode_encode(
tf.RaggedTensor.from_sparse(batch_chars_sparse),
output_encoding='UTF-8')
tf.strings.unicode_encode(
tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),
output_encoding='UTF-8')
Explanation: If you have a tensor with multiple strings in padded or sparse format, convert it to a tf.RaggedTensor before calling unicode_encode.
End of explanation
# Note that the final emoji takes up 4 bytes in UTF-8
thanks = u'Thanks 😊'.encode('UTF-8')
num_bytes = tf.strings.length(thanks).numpy()
num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()
print('{} bytes; {} UTF-8 characters'.format(num_bytes, num_chars))
Explanation: Unicode operations
Character length
tf.strings.length accepts a unit parameter that indicates how lengths should be computed. unit defaults to "BYTE", but it can be set to other values, such as "UTF8_CHAR" or "UTF16_CHAR", to determine the number of Unicode code points in each encoded string.
End of explanation
# default: unit='BYTE'. With len=1, a single byte is returned
tf.strings.substr(thanks, pos=7, len=1).numpy()
# Specifying unit='UTF8_CHAR' returns a single character, which in this case is 4 bytes
print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())
Explanation: Character substrings
Similarly, tf.strings.substr accepts the "unit" parameter, and uses it to determine what kind of offsets the "pos" and "len" parameters contain.
End of explanation
tf.strings.unicode_split(thanks, 'UTF-8').numpy()
Explanation: Splitting Unicode strings
tf.strings.unicode_split splits Unicode strings into substrings of individual characters.
End of explanation
codepoints, offsets = tf.strings.unicode_decode_with_offsets(u"🎈🎉🎊", 'UTF-8')
for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):
    print("At byte offset {}: codepoint {}".format(offset, codepoint))
Explanation: Byte offsets for characters
To align the character tensor generated by tf.strings.unicode_decode with the original string, it is useful to know the offset at which each character begins. The method tf.strings.unicode_decode_with_offsets is similar to unicode_decode, except that it returns a second tensor containing the start offset of each character.
End of explanation
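The same offsets can be reproduced in plain Python, independently of TensorFlow: walk the string character by character and accumulate each character's UTF-8 byte length. This standalone sketch (the helper name is our own) is for illustration only.

```python
def codepoints_with_offsets(s, encoding="utf-8"):
    # returns (codepoint, byte offset of the character's first byte) pairs
    pairs, offset = [], 0
    for ch in s:
        pairs.append((ord(ch), offset))
        offset += len(ch.encode(encoding))
    return pairs

for codepoint, offset in codepoints_with_offsets(u"🎈🎉🎊"):
    print("At byte offset {}: codepoint {}".format(offset, codepoint))
# each emoji occupies 4 bytes in UTF-8, so the offsets are 0, 4 and 8
```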
uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б']
print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]
Explanation: Unicode scripts
Each Unicode code point belongs to a single collection of code points known as a script. A character's script is helpful in determining which language the character might belong to. For example, knowing that 'Б' is a Cyrillic character indicates that text containing that character is likely from a Slavic language such as Russian or Ukrainian.
TensorFlow provides tf.strings.unicode_script, which returns the script of a given code point. The script codes are int32 values corresponding to International Components for Unicode (ICU) UScriptCode values.
End of explanation
print(tf.strings.unicode_script(batch_chars_ragged))
Explanation: tf.strings.unicode_script can also be applied to multidimensional tf.Tensors or tf.RaggedTensors of code points.
End of explanation
# dtype: string; shape: [num_sentences]
#
# The sentences to process. Edit this line to try out different inputs!
sentence_texts = [u'Hello, world.', u'世界こんにちは']
Explanation: Example: Simple segmentation
Segmentation is the task of splitting text into word-like units. This is often easy when space characters are used to separate words, but some languages (like Chinese and Japanese) do not use spaces, and some languages (like German) contain long compound words that must be split in order to analyze their meaning. In web text, different languages and scripts are frequently mixed together, as in "NY株価" (New York stock prices).
We can perform a very rough segmentation (without implementing any ML models) by using changes in script to approximate word boundaries. This works for strings like the "NY株価" example above. It also works for most languages that use spaces, since the space characters of various languages are all classified as USCRIPT_COMMON, a special script code that differs from that of any actual text.
End of explanation
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_codepoint[i, j] is the codepoint for the j'th character in the i'th sentence
sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')
print(sentence_char_codepoint)
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_script[i, j] is the script code for the j'th character in the i'th sentence
sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)
print(sentence_char_script)
Explanation: First, decode the sentences into character code points, and then find the script code for each character.
End of explanation
# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_starts_word[i, j] is True if the j'th character in the i'th sentence is the start of a word
sentence_char_starts_word = tf.concat(
[tf.fill([sentence_char_script.nrows(), 1], True),
tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],
axis=1)
# dtype: int64; shape: [num_words]
#
# word_starts[i] is the index of the character that starts the i'th word
# (in the flattened list of characters from all sentences)
word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)
print(word_starts)
Explanation: Next, use those script codes to determine where word boundaries should be added. Add a word boundary at the beginning of each character whose script code differs from that of the previous character.
End of explanation
# dtype: int32; shape: [num_words, (num_chars_per_word)]
#
# word_char_codepoint[i, j] is the codepoint for the j'th character in the i'th word
word_char_codepoint = tf.RaggedTensor.from_row_starts(
values=sentence_char_codepoint.values,
row_starts=word_starts)
print(word_char_codepoint)
Explanation: Then, use those start offsets to build a RaggedTensor containing the list of characters for each word.
End of explanation
# dtype: int64; shape: [num_sentences]
#
# sentence_num_words[i] is the number of words in the i'th sentence
sentence_num_words = tf.reduce_sum(
tf.cast(sentence_char_starts_word, tf.int64),
axis=1)
# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]
#
# sentence_word_char_codepoint[i, j, k] is the codepoint for the k'th character of the j'th word in the i'th sentence
sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(
values=word_char_codepoint,
row_lengths=sentence_num_words)
print(sentence_word_char_codepoint)
Explanation: Finally, segment the code-point RaggedTensor back into sentences.
End of explanation
tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()
Explanation: To make the final result easier to read, encode it back into UTF-8 strings.
End of explanation |
8,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional variational autoencoder with PyMC3 and Keras
In this document, I will show how autoencoding variational Bayes (AEVB) works in PyMC3's automatic differentiation variational inference (ADVI). The example here is borrowed from a Keras example, where a convolutional variational autoencoder is applied to the MNIST dataset. The network architectures of the encoder and decoder are exactly the same. However, PyMC3 allows us to define the probabilistic model, which combines the encoder and decoder, in the same way as other general probabilistic models (e.g., generalized linear models), rather than directly implementing the Monte Carlo sampling and the loss function as done in the Keras example. Thus I think the framework of AEVB in PyMC3 can be extended to more complex models such as latent Dirichlet allocation.
Notebook Written by Taku Yoshioka (c) 2016
For using Keras with PyMC3, we need to choose Theano as the backend of Keras.
Install required packages, including pymc3, if it is not already available
Step1: Load images
MNIST dataset can be obtained by scikit-learn API or from Keras datasets. The dataset contains images of digits.
Step3: Use Keras
We define a utility function to get parameters from Keras models. Since we have set the backend to Theano, parameter objects are obtained as shared variables of Theano.
In the code, 'updates' are expected to include the update objects (a dictionary of pairs of shared variables and update equations) for the scaling parameters of batch normalization. While batch normalization is not used in this example, if we wanted to use it we would need to pass these update objects as an argument of theano.function() inside the PyMC3 ADVI function. The current version of PyMC3 does not support this, but it is easy to modify (I intend to send a PR in the future).
The learning phase below is used to tell Keras which phase we are in, training or test. This information is also important for batch normalization.
Step5: Encoder and decoder
First, we define the convolutional neural network for encoder using Keras API. This function returns a CNN model given the shared variable representing observations (images of digits), the dimension of latent space, and the parameters of the model architecture.
Step8: Then we define a utility class for encoders. This class does not depend on the architecture of the encoder except for input shape (tensor4 for images), so we can use this class for various encoding networks.
Step12: In a similar way, we define the decoding network and a utility class for decoders.
Step13: Generative model
We can construct the generative model with PyMC3 API and the functions and classes defined above. We set the size of mini-batches to 100 and the dimension of the latent space to 2 for visualization.
Step14: A placeholder of images is required to which mini-batches of images will be placed in the ADVI inference. It is also the input to the encoder. In the below, enc.model is a Keras model of the encoder network, thus we can check the model architecture using the method summary().
Step15: The probabilistic model involves only two random variables; latent variable $\mathbf{z}$ and observation $\mathbf{x}$. We put a Normal prior on $\mathbf{z}$, decode the variational parameters of $q(\mathbf{z}|\mathbf{x})$ and define the likelihood of the observation $\mathbf{x}$.
Step16: In the above definition of the generative model, we do not know how the decoded variational parameters are passed to $q(\mathbf{z}|\mathbf{x})$. To do this, we will set the argument local_RVs in the ADVI function of PyMC3.
Step17: This argument is a OrderedDict whose keys are random variables to which the decoded variational parameters are set, zs in this model. Each value of the dictionary contains two theano expressions representing variational mean (enc.means) and log of standard deviations (enc.lstds). In addition, a scaling constant (len(data) / float(minibatch_size)) is required to compensate for the size of mini-batches of the corresponding log probability terms in the evidence lower bound (ELBO), the objective of the variational inference.
The scaling constant for the observed random variables is set in the same way.
Step18: We can also check the architecture of the decoding network as for the encoding network.
Step19: Inference
To perform inference, we need to create generators of mini-batches and define the optimizer used for ADVI. The optimizer is a function that returns Theano parameter update object (dictionary).
Step20: Let us execute ADVI function of PyMC3.
Step21: Results
v_params, the returned value of the ADVI function, has the trace of ELBO during inference (optimization). We can see the convergence of the inference.
Step22: Finally, we see the distribution of the images in the latent space. To do this, we make 2-dimensional points in a grid and feed them into the decoding network. The mean of $p(\mathbf{x}|\mathbf{z})$ is the image corresponding to the samples on the grid. | Python Code:
#!pip install --upgrade git+https://github.com/Theano/Theano.git#egg=Theano
#!pip install --upgrade keras
#!pip install --upgrade pymc3
#!conda install -y mkl-service
%autosave 0
%matplotlib inline
import sys, os
os.environ['KERAS_BACKEND'] = 'theano'
from theano import config
config.floatX = 'float32'
config.optimizer = 'fast_run'
from collections import OrderedDict
from keras.layers import InputLayer, BatchNormalization, Dense, Conv2D, Deconv2D, Activation, Flatten, Reshape
import numpy as np
import pymc3 as pm
from pymc3.variational import advi_minibatch
from theano import shared, config, function, clone, pp
import theano.tensor as tt
import keras
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
from keras import backend as K
K.set_image_dim_ordering('th')
import pymc3, theano
print(pymc3.__version__)
print(theano.__version__)
print(keras.__version__)
Explanation: Convolutional variational autoencoder with PyMC3 and Keras
In this document, I will show how autoencoding variational Bayes (AEVB) works in PyMC3's automatic differentiation variational inference (ADVI). The example here is borrowed from Keras example, where convolutional variational autoencoder is applied to the MNIST dataset. The network architecture of the encoder and decoder are completely same. However, PyMC3 allows us to define the probabilistic model, which combines the encoder and decoder, in the way by which other general probabilistic models (e.g., generalized linear models), rather than directly implementing of Monte Carlo sampling and the loss function as done in the Keras example. Thus I think the framework of AEVB in PyMC3 can be extended to more complex models such as latent dirichlet allocation.
Notebook Written by Taku Yoshioka (c) 2016
For using Keras with PyMC3, we need to choose Theano as the backend of Keras.
Install required packages, including pymc3, if it is not already available:
End of explanation
# from sklearn.datasets import fetch_mldata
# mnist = fetch_mldata('MNIST original')
# print(mnist.keys())
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
data = x_train.reshape(-1, 1, 28, 28).astype('float32')
data /= np.max(data)
Explanation: Load images
MNIST dataset can be obtained by scikit-learn API or from Keras datasets. The dataset contains images of digits.
End of explanation
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
def get_params(model):
Get parameters and updates from Keras model
shared_in_updates = list()
params = list()
updates = dict()
for l in model.layers:
attrs = dir(l)
# Updates
if 'updates' in attrs:
updates.update(l.updates)
shared_in_updates += [e[0] for e in l.updates]
# Shared variables
for attr_str in attrs:
attr = getattr(l, attr_str)
if type(attr) is tt.sharedvar.TensorSharedVariable:
if attr is not model.get_input_at(0):
params.append(attr)
return list(set(params) - set(shared_in_updates)), updates
# This code is required when using BatchNormalization layer
keras.backend.theano_backend._LEARNING_PHASE = \
shared(np.uint8(1), name='keras_learning_phase')
Explanation: Use Keras
We define a utility function to get parameters from Keras models. Since we have set the backend to Theano, parameter objects are obtained as shared variables of Theano.
In the code, 'updates' are expected to include the update objects (a dictionary mapping shared variables to update equations) for the scaling parameters of batch normalization. Although batch normalization is not used in this example, if we wanted to use it we would need to pass these update objects as an argument to theano.function() inside the PyMC3 ADVI function. The current version of PyMC3 does not support this, though it is easy to modify (I intend to send a PR in the future).
The learning-phase flag below lets Keras know whether it is in the training or test phase. This information is also important for batch normalization.
End of explanation
def cnn_enc(xs, latent_dim, nb_filters=64, nb_conv=3, intermediate_dim=128):
Returns a CNN model of Keras.
Parameters
----------
xs : theano.tensor.sharedvar.TensorSharedVariable
Input tensor.
latent_dim : int
Dimension of latent vector.
input_layer = InputLayer(input_tensor=xs,
batch_input_shape=xs.get_value().shape)
model = Sequential()
model.add(input_layer)
cp1 = {'padding': 'same', 'activation': 'relu'}
cp2 = {'padding': 'same', 'activation': 'relu', 'strides': (2, 2)}
cp3 = {'padding': 'same', 'activation': 'relu', 'strides': (1, 1)}
cp4 = cp3
model.add(Conv2D(1, (2, 2), **cp1))
model.add(Conv2D(nb_filters, (2, 2), **cp2))
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), **cp3))
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), **cp4))
model.add(Flatten())
model.add(Dense(intermediate_dim, activation='relu'))
model.add(Dense(2 * latent_dim))
return model
Explanation: Encoder and decoder
First, we define the convolutional neural network for encoder using Keras API. This function returns a CNN model given the shared variable representing observations (images of digits), the dimension of latent space, and the parameters of the model architecture.
End of explanation
class Encoder:
Encode observed images to variational parameters (mean/std of Gaussian).
Parameters
----------
xs : theano.tensor.sharedvar.TensorSharedVariable
Placeholder of input images.
dim_hidden : int
The number of hidden variables.
net : Function
Returns
def __init__(self, xs, dim_hidden, net):
model = net(xs, dim_hidden)
self.model = model
self.xs = xs
self.out = model.get_output_at(-1)
self.means = self.out[:, :dim_hidden]
self.lstds = self.out[:, dim_hidden:]
self.params, self.updates = get_params(model)
self.enc_func = None
self.dim_hidden = dim_hidden
def _get_enc_func(self):
if self.enc_func is None:
xs = tt.tensor4()
means = clone(self.means, {self.xs: xs})
lstds = clone(self.lstds, {self.xs: xs})
self.enc_func = function([xs], [means, lstds])
return self.enc_func
def encode(self, xs):
# Used in test phase
keras.backend.theano_backend._LEARNING_PHASE.set_value(np.uint8(0))
enc_func = self._get_enc_func()
means, _ = enc_func(xs)
return means
def draw_samples(self, xs, n_samples=1):
Draw samples of hidden variables based on variational parameters encoded.
Parameters
----------
xs : numpy.ndarray, shape=(n_images, 1, height, width)
Images.
# Used in test phase
keras.backend.theano_backend._LEARNING_PHASE.set_value(np.uint8(0))
enc_func = self._get_enc_func()
means, lstds = enc_func(xs)
means = np.repeat(means, n_samples, axis=0)
lstds = np.repeat(lstds, n_samples, axis=0)
ns = np.random.randn(len(xs) * n_samples, self.dim_hidden)
zs = means + np.exp(lstds) * ns
return zs
Explanation: Then we define a utility class for encoders. This class does not depend on the architecture of the encoder except for input shape (tensor4 for images), so we can use this class for various encoding networks.
End of explanation
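The draw_samples method above uses the reparameterization trick: samples of the latent variable are written as a deterministic transform of the variational parameters plus standard-normal noise. A minimal NumPy sketch of just that step (mu and log_sigma stand in for enc.means and enc.lstds; their values here are assumptions for illustration only):

```python
import numpy as np

# Reparameterization: z = mu + exp(log_sigma) * eps, with eps ~ N(0, 1).
# mu and log_sigma are assumed scalar values, not outputs of the model.
rng = np.random.RandomState(0)
mu, log_sigma = 0.5, np.log(2.0)
eps = rng.randn(5)
z = mu + np.exp(log_sigma) * eps
print(z.shape)  # (5,)
```

Because the noise is external to the parameters, gradients of the ELBO can flow through mu and log_sigma, which is what makes AEVB trainable.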
def cnn_dec(zs, nb_filters=64, nb_conv=3, output_shape=(1, 28, 28)):
Returns a CNN model of Keras.
Parameters
----------
zs : theano.tensor.var.TensorVariable
Input tensor.
minibatch_size, dim_hidden = zs.tag.test_value.shape
input_layer = InputLayer(input_tensor=zs,
batch_input_shape=zs.tag.test_value.shape)
model = Sequential()
model.add(input_layer)
model.add(Dense(dim_hidden, activation='relu'))
model.add(Dense(nb_filters * 14 * 14, activation='relu'))
cp1 = {'padding': 'same', 'activation': 'relu', 'strides': (1, 1)}
cp2 = cp1
cp3 = {'padding': 'valid', 'activation': 'relu', 'strides': (2, 2)}
cp4 = {'padding': 'same', 'activation': 'sigmoid'}
output_shape_ = (minibatch_size, nb_filters, 14, 14)
model.add(Reshape(output_shape_[1:]))
model.add(Deconv2D(nb_filters, (nb_conv, nb_conv), data_format='channels_first', **cp1))
model.add(Deconv2D(nb_filters, (nb_conv, nb_conv), data_format='channels_first', **cp2))
output_shape_ = (minibatch_size, nb_filters, 29, 29)
model.add(Deconv2D(nb_filters, (2, 2), data_format='channels_first', **cp3))
model.add(Conv2D(1, (2, 2), **cp4))
return model
class Decoder:
Decode hidden variables to images.
Parameters
----------
zs : Theano tensor
Hidden variables.
def __init__(self, zs, net):
model = net(zs)
self.model = model
self.zs = zs
self.out = model.get_output_at(-1)
self.params, self.updates = get_params(model)
self.dec_func = None
def _get_dec_func(self):
if self.dec_func is None:
zs = tt.matrix()
xs = clone(self.out, {self.zs: zs})
self.dec_func = function([zs], xs)
return self.dec_func
def decode(self, zs):
Decode hidden variables to images.
An image consists of the mean parameters of the observation noise.
Parameters
----------
zs : numpy.ndarray, shape=(n_samples, dim_hidden)
Hidden variables.
# Used in test phase
keras.backend.theano_backend._LEARNING_PHASE.set_value(np.uint8(0))
return self._get_dec_func()(zs)
Explanation: In a similar way, we define the decoding network and a utility class for decoders.
End of explanation
# Constants
minibatch_size = 100
dim_hidden = 2
Explanation: Generative model
We can construct the generative model with PyMC3 API and the functions and classes defined above. We set the size of mini-batches to 100 and the dimension of the latent space to 2 for visualization.
End of explanation
# Placeholder of images
xs_t = shared(np.zeros((minibatch_size, 1, 28, 28)).astype('float32'), name='xs_t')
# Encoder
enc = Encoder(xs_t, dim_hidden, net=cnn_enc)
enc.model.summary()
Explanation: A placeholder of images is required to which mini-batches of images will be placed in the ADVI inference. It is also the input to the encoder. In the below, enc.model is a Keras model of the encoder network, thus we can check the model architecture using the method summary().
End of explanation
with pm.Model() as model:
# Hidden variables
zs = pm.Normal('zs', mu=0, sd=1, shape=(minibatch_size, dim_hidden), dtype='float32')
# Decoder and its parameters
dec = Decoder(zs, net=cnn_dec)
# Observation model
xs_ = pm.Normal('xs_', mu=dec.out.ravel(), sd=0.1, observed=xs_t.ravel(), dtype='float32')
Explanation: The probabilistic model involves only two random variables; latent variable $\mathbf{z}$ and observation $\mathbf{x}$. We put a Normal prior on $\mathbf{z}$, decode the variational parameters of $q(\mathbf{z}|\mathbf{x})$ and define the likelihood of the observation $\mathbf{x}$.
End of explanation
local_RVs = OrderedDict({zs: ((enc.means, enc.lstds), len(data) / float(minibatch_size))})
Explanation: In the above definition of the generative model, we do not know how the decoded variational parameters are passed to $q(\mathbf{z}|\mathbf{x})$. To do this, we will set the argument local_RVs in the ADVI function of PyMC3.
End of explanation
observed_RVs = OrderedDict({xs_: len(data) / float(minibatch_size)})
Explanation: This argument is an OrderedDict whose keys are the random variables to which the decoded variational parameters are attached, zs in this model. Each value of the dictionary contains two theano expressions representing the variational mean (enc.means) and the log of the standard deviations (enc.lstds). In addition, a scaling constant (len(data) / float(minibatch_size)) is required to compensate for the mini-batch size in the corresponding log-probability terms of the evidence lower bound (ELBO), the objective of variational inference.
The scaling constant for the observed random variables is set in the same way.
End of explanation
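As a concrete illustration of the scaling constant (the numbers are assumptions: a full dataset of 60000 images, as in MNIST, and mini-batches of 100):

```python
# Each minibatch log-probability term is multiplied by N / M so that the
# stochastic ELBO estimate matches the full-data objective in expectation.
n_data = 60000        # assumed full-dataset size
minibatch_size = 100  # assumed mini-batch size
scale = n_data / float(minibatch_size)
print(scale)  # 600.0
```

This is the same `len(data) / float(minibatch_size)` factor passed in both local_RVs and observed_RVs above.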
dec.model.summary()
Explanation: We can also check the architecture of the decoding network, just as we did for the encoding network.
End of explanation
# Mini-batches
def create_minibatch(data, minibatch_size):
rng = np.random.RandomState(0)
while True:
# Yield a random sample of minibatch_size data points on each iteration
ixs = rng.randint(data.shape[0], size=minibatch_size)
yield data[ixs]
minibatches = zip(create_minibatch(data, minibatch_size))
def rmsprop(loss, param):
opt = keras.optimizers.RMSprop()
return opt.get_updates(param, [], loss)
Explanation: Inference
To perform inference, we need to create generators of mini-batches and define the optimizer used for ADVI. The optimizer is a function that returns a Theano parameter-update object (a dictionary).
End of explanation
with model:
v_params = pm.variational.advi_minibatch(
n=1000, minibatch_tensors=[xs_t], minibatches=minibatches,
local_RVs=local_RVs, observed_RVs=observed_RVs,
encoder_params=(enc.params + dec.params),
optimizer=rmsprop
)
Explanation: Let us execute the ADVI function of PyMC3.
End of explanation
plt.plot(v_params.elbo_vals);
Explanation: Results
v_params, the returned value of the ADVI function, has the trace of ELBO during inference (optimization). We can see the convergence of the inference.
End of explanation
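The raw ELBO trace is typically noisy because every iteration uses a different mini-batch; a running mean makes the convergence trend easier to read. A standalone sketch using a synthetic, assumed ELBO-like trace (not the actual v_params.elbo_vals):

```python
import numpy as np

# Synthetic, assumed ELBO-like trace that rises toward a plateau
elbo = -1000.0 / np.arange(1, 101) - 5.0
window = 10
smoothed = np.convolve(elbo, np.ones(window) / window, mode='valid')
print(len(smoothed))  # 91
```

The same `np.convolve` call can be applied to v_params.elbo_vals before plotting.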
nn = 10
zs = np.array([(z1, z2)
for z1 in np.linspace(-2, 2, nn)
for z2 in np.linspace(-2, 2, nn)]).astype('float32')
xs = dec.decode(zs)[:, 0, :, :]
xs = np.bmat([[xs[i + j * nn] for i in range(nn)] for j in range(nn)])
matplotlib.rc('axes', **{'grid': False})
plt.figure(figsize=(10, 10))
plt.imshow(xs, interpolation='none', cmap='gray')
plt.show()
Explanation: Finally, we see the distribution of the images in the latent space. To do this, we make 2-dimensional points in a grid and feed them into the decoding network. The mean of $p(\mathbf{x}|\mathbf{z})$ is the image corresponding to the samples on the grid.
End of explanation |
8,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How does a drum vibrate when you strike it?
Analyzing the vibrating-membrane problem helps us understand how percussion instruments such as drums and timpani work, and even biological systems such as the eardrum.
<img style="float
Step1: For simplicity we will assume that $a = 1$ and determine the zeros, which means finding all the intersections of the curves above with the horizontal axis.
Example
Step2: Since the initial speed is zero, $a^{*}_{nk} = b^{*}_{nk} = 0$, and the solution for the displacement in time is simply,
\begin{equation}
u(r,\theta, t) = \sum_{n=0}^{\infty}\sum_{k = 1}^{\infty}J_{n}(\lambda_{nk} r)(a_{nk}\cos{n\theta} + b_{nk}\sin{n\theta})\cos{(v\lambda_{nk}t)}
\end{equation}
We therefore only need to find $a_{nk}$ and $b_{nk}$.
\begin{align}
a_{0k} &= \frac{1}{\pi a^2 J_{1}^{2}(\alpha_{0k})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{0}(\lambda_{0k}r)\, r \, dr \, d\theta\
a_{nk} &= \frac{2}{\pi a^2 J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{n}(\lambda_{nk}r)\cos(n\theta)\, r \, dr \, d\theta\
b_{nk} &= \frac{2}{\pi a^2 J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{n}(\lambda_{nk}r)\sin(n\theta)\, r \, dr \, d\theta
\end{align}
To solve these integrals we will use SymPy.
Let us start with $a_{nk}$.
Step3: So for any $n>0$ there is no contribution.
We then evaluate $a_{0k}$.
Step4: Now $b_{nk}$.
Step5: \begin{equation}
u(r,\theta, t) = \sum_{k = 1}^{\infty} a_{0k}J_{0}(\lambda_{0k} r)\cos{(v\lambda_{0k}t)}
\end{equation}
Step6: First we will program a single mode $k$.
Step7: And now, the full solution.
We cannot sum infinitely many terms, but we can sum as many as we want...
Step8: Note that the initial condition at $t = 0$ is satisfied by the solution we found.
Homework
Suppose that $a = 1$, $v = 1$ and that the initial conditions are
Step9: So, for example, if $n = 1$, $a = 1$, $v = 1$, $k = 1$ and $t = 0$, this is the vibration mode $(n,k)\rightarrow (1,1)$.
Step10: Now let us see what all the other vibration modes $(n,k)$ look like.
Step11: Now we might want to know how the membrane behaves when we sum over a set of modes $k$. That is,
$$u(r,\theta, t)_{n} = \sum_{k = 1} u(r,\theta, t)_{nk} = \sum_{k = 1} J_{n}(\lambda_{nk} r)\,\cos(n\theta)\,\cos(\lambda_{nk} v t) $$
The usual way to do this is to treat the sum as a Fourier series; that is, this sum is missing a coefficient $A_{nk}$, but for simplicity we will not include that term here.
A possible function to do this would be,
Step12: Finally, we are left with the case where we sum over all the modes $n$. That is, | Python Code:
# Import all the libraries we will use
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import special
import numpy as np
from ipywidgets import *
# Plot Bessel functions of order n = 0, 1, ..., 4
r = np.linspace(0, 10,100)
for n in range(5):
plt.plot(r, special.jn(n, r), label = '$J_{%s}(r)$'%n)
plt.xlabel('$r$', fontsize = 18)
plt.ylabel('$J_{n}(r)$', fontsize = 18)
plt.axhline(y = 0, color = 'k') # draw the horizontal axis
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5), prop={'size': 14})
plt.show()
Explanation: How does a drum vibrate when you strike it?
Analyzing the vibrating-membrane problem helps us understand how percussion instruments such as drums and timpani work, and even biological systems such as the eardrum.
<img style="float: left; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/1/15/Drum_vibration_mode01.gif" width="300px" height="100px" />
References:
- https://en.wikipedia.org/wiki/Bessel_function
- https://es.wikipedia.org/wiki/Vibraciones_de_una_membrana_circular
- https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/special.html
Model and general solution
Consider a drum (membrane) of radius $a$; the wave equation in $\mathbb{R}^2$ for this system can be written as,
$$ \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} $$
where $u\equiv u(x,y,t)$ is the transverse displacement (elevation) and $v$ is the propagation speed of the wave.
The usual way to solve this equation is to first change coordinates from Cartesian to polar, $(x,y,t)\to(r,\theta,t)$. In these coordinates the equation becomes:
$$ \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2},$$
for $0\leq r<a$, $0\leq\theta<2\pi$, with $u(a,\theta,t)=0$.
Next we apply the method of separation of variables. That is, we look for solutions of the form
$$ u(r, \theta, t) = R(r) \Theta(\theta) T(t).$$
This substitution yields three differential equations, one for each separated variable. Solving them and substituting back into the function above gives the so-called normal modes.
$$u_{nk}(r,\theta, t) = J_{n}(\lambda_{nk} r)(a_{nk}\cos{n\theta} + b_{nk}\sin{n\theta})\cos{(v\lambda_{nk}t)}$$
$$u^{*}_{nk}(r,\theta, t) = J_{n}(\lambda_{nk}r)(a^{*}_{nk}\cos{n\theta} + b^{*}_{nk}\sin{n\theta})\sin{(v\lambda_{nk}t)},$$
for $n = 0,1,2,\dots$, $k = 1,2,3,\dots$, where $J_{n}$ is the Bessel function of the first kind of order $n$. Moreover,
$$\lambda_{nk} = \frac{\alpha_{nk}}{a}$$
where $\alpha_{nk}$ is the $k$-th zero of $J_{n}(\lambda a)=0$. This is a consequence of $u$ being zero at the boundary of the membrane, $r = a$.
The coefficients $a_{nk}, b_{nk}, a^{*}_{nk}$ and $b^{*}_{nk}$ are determined so that the initial conditions are satisfied:
$$u(r,\theta, 0) = f(r,\theta)$$
$$u_{t}(r,\theta, 0) = g(r,\theta)$$
where the first is the initial shape and the second the initial speed. One can show that the expressions for these coefficients are:
\begin{align}
a_{0k} &= \frac{1}{\pi a^2 J_{1}^{2}(\alpha_{0k})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{0}(\lambda_{0k}r)\, r \, dr \, d\theta\
a_{nk} &= \frac{2}{\pi a^2 J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{n}(\lambda_{nk}r)\cos(n\theta)\, r \, dr \, d\theta\
b_{nk} &= \frac{2}{\pi a^2 J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{n}(\lambda_{nk}r)\sin(n\theta)\, r \, dr \, d\theta
\end{align}
And similarly,
\begin{align}
a^{*}_{0k} &= \frac{1}{\pi \,v\, \alpha_{0k}\,a J_{1}^{2}(\alpha_{0k})}\int_{0}^{2\pi}\int_{0}^{a}\; g(r,\theta)\, J_{0}(\lambda_{0k}r)\, r \, dr \, d\theta\
a^{*}_{nk} &= \frac{2}{\pi\, v\,\alpha_{nk}\, a J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; g(r,\theta)\, J_{n}(\lambda_{nk}r)\cos(n\theta)\, r \, dr \, d\theta\
b^{*}_{nk} &= \frac{2}{\pi\, v\,\alpha_{nk}\, a J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; g(r,\theta)\, J_{n}(\lambda_{nk}r)\sin(n\theta)\, r \, dr \, d\theta
\end{align}
We thus have infinitely many solutions (explain why). To take them all into account, we sum them:
\begin{align}
u(r,\theta, t) &= \sum_{n=0}^{\infty}\sum_{k = 1}^{\infty}J_{n}(\lambda_{nk} r)(a_{nk}\cos{n\theta} + b_{nk}\sin{n\theta})\cos{(v\lambda_{nk}t)}\
&+ \sum_{n=0}^{\infty}\sum_{k = 1}^{\infty}J_{n}(\lambda_{nk}r)(a^{*}_{nk}\cos{n\theta} + b^{*}_{nk}\sin{n\theta})\sin{(v\lambda_{nk}t)}.
\end{align}
We are familiar with the cosine function, but much less so with the Bessel function. So our first activity will be to get to know its behavior.
End of explanation
def f_shape(r):
return 1 - r**4
a = 1
r = np.linspace(0, 1, 100)
angle = np.linspace(0, 2*np.pi, 200)
r_shape = f_shape(r)
# Build 2-D arrays: u holds the height at each (r, theta); x, y are the Cartesian grids
u = np.array([np.full(len(angle), radi) for radi in r_shape])
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
# How do we plot in polar coordinates?
plt.figure(figsize = (6, 5))
plt.pcolor(x, y, u, cmap = 'viridis')
plt.axis('off')
plt.colorbar()
plt.show()
Explanation: For simplicity we will assume that $a = 1$ and determine the zeros, which means finding all the intersections of the curves above with the horizontal axis.
Example: the radially symmetric case
Suppose that $a = 1$, $v = 1$ and that the initial conditions are (independent of theta):
$$ f(r,\theta) = 1- r^4\quad\quad g(r,\theta) = 0$$
What does the drum look like in its initial state?
End of explanation
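The zeros $\alpha_{nk}$ do not have to be read off the plot by eye; scipy provides them directly through special.jn_zeros, which the code further below also relies on:

```python
from scipy import special

# First three positive zeros of J_0: alpha_{01}, alpha_{02}, alpha_{03}
zeros_j0 = special.jn_zeros(0, 3)
print(zeros_j0)  # approximately [2.4048, 5.5201, 8.6537]
```

With $a = 1$ these zeros are exactly the $\lambda_{0k}$ values of the radially symmetric modes.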
# Print with LaTeX formatting
from sympy import init_printing; init_printing(use_latex='mathjax')
import sympy as sym
sym.var('r theta', real = True)
#r, theta, k = sym.symbols('r theta k')
r, theta
sym.var('n k', positive = True, integer=True)
#n, k = sym.symbols('n k', positive = True, integer=True)
n, k
def lamb(n,k):
return sym.Symbol('lambda_%s%s'%(n,k), positive = True, real = True)
lamb(0,k)
f = 1 - r**4; f
integrand = f * sym.besselj(n, lamb(n,k) * r) * sym.cos(n *theta) * r
integrand
ank = sym.Integral(integrand, (r, 0, 1), (theta, 0, 2*sym.pi))
ank
solution_ank = ank.doit()
solution_ank
Explanation: Since the initial speed is zero, $a^{*}_{nk} = b^{*}_{nk} = 0$, and the solution for the displacement in time is simply,
\begin{equation}
u(r,\theta, t) = \sum_{n=0}^{\infty}\sum_{k = 1}^{\infty}J_{n}(\lambda_{nk} r)(a_{nk}\cos{n\theta} + b_{nk}\sin{n\theta})\cos{(v\lambda_{nk}t)}
\end{equation}
We therefore only need to find $a_{nk}$ and $b_{nk}$.
\begin{align}
a_{0k} &= \frac{1}{\pi a^2 J_{1}^{2}(\alpha_{0k})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{0}(\lambda_{0k}r)\, r \, dr \, d\theta\
a_{nk} &= \frac{2}{\pi a^2 J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{n}(\lambda_{nk}r)\cos(n\theta)\, r \, dr \, d\theta\
b_{nk} &= \frac{2}{\pi a^2 J_{n+1}^{2}(\alpha_{nk})}\int_{0}^{2\pi}\int_{0}^{a}\; f(r,\theta)\, J_{n}(\lambda_{nk}r)\sin(n\theta)\, r \, dr \, d\theta
\end{align}
To solve these integrals we will use SymPy.
Let us start with $a_{nk}$.
End of explanation
integ = lambda n: f * sym.besselj(n, lamb(n,k) * r) * sym.cos(n*theta) * r
integ(0)
a0k = sym.Integral(integ(0), (r, 0, 1), (theta, 0, 2*sym.pi))
a0k
solution_a0k = a0k.doit()
solution_a0k
a0k_solution = solution_a0k/(sym.pi*sym.besselj(1, lamb(0,k))**2)
a0k_solution
sym.simplify(a0k_solution)
Explanation: So for any $n>0$ there is no contribution.
We then evaluate $a_{0k}$.
End of explanation
integrand_b = f * sym.besselj(n, lamb(n,k) * r) * sym.sin(n *theta) * r
integrand_b
bnk = sym.Integral(integrand_b, (r, 0, 1), (theta, 0, 2*sym.pi))
bnk
solution_bnk = bnk.doit()
solution_bnk
Explanation: Now $b_{nk}$.
End of explanation
a0k_solution
Explanation: \begin{equation}
u(r,\theta, t) = \sum_{k = 1}^{\infty} a_{0k}J_{0}(\lambda_{0k} r)\cos{(v\lambda_{0k}t)}
\end{equation}
End of explanation
def a0k_sym(lambd):
solucion = 2*(-4*special.jn(0, lambd)/lambd**2
+16*special.jn(1, lambd)/lambd**3 +
32*special.jn(0, lambd)/lambd**4 -
64*special.jn(1, lambd)/lambd**5)/special.jn(1, lambd)**2
return solucion
def tambor(v, kth_zero, nt, t):
r = np.r_[0:1:100j]
angle = np.r_[0:2*np.pi:200j]
ceros = special.jn_zeros(0, nt)
lambd = ceros[kth_zero]
u_r = a0k_sym(lambd)*special.jn(0, lambd * r) * np.cos(lambd * v * t)
u = np.array([np.full(len(angle), u_rs) for u_rs in u_r])
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u
x1, y1, u1 = tambor(1, 0, 15, 7)
plt.figure(figsize = (6, 5))
plt.pcolor(x1 , y1 , u1, cmap = 'viridis')
plt.axis('off')
plt.colorbar()
plt.show()
def tambor_nk(t = 0, kth=0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor(1, kth, 50, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_nk, t = (0, 15,.01), kth = (0, 10, 1));
Explanation: First we will program a single mode $k$.
End of explanation
def tambor_n_allk(v, nk_zeros, t):
r = np.linspace(0, 1, 100)
angle = np.linspace(0, 2*np.pi, 200)
ceros = special.jn_zeros(0, nk_zeros)
lambd = ceros[0]
u_r = a0k_sym(lambd)*special.jn(0, lambd * r) * np.cos(lambd * v * t)
u0 = np.array([np.full(len(angle), u_rs) for u_rs in u_r])
for cero in range(1, nk_zeros):
lambd = ceros[cero]
u_r = a0k_sym(lambd)*special.jn(0, lambd * r) * np.cos(lambd * v * t)
u = np.array([np.full(len(angle), u_rs) for u_rs in u_r])
u0 += u
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u0
def tambor_0(t = 0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor_n_allk(1, 15, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_0, t = (0, 15,.01));
Explanation: And now, the full solution.
We cannot sum infinitely many terms, but we can sum as many as we want...
End of explanation
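A quick numerical sanity check of the truncated series: at $r = 0$, $t = 0$ the partial sums should approach the initial shape $f(0) = 1 - 0^4 = 1$. This standalone sketch reuses the same closed-form coefficient as a0k_sym above:

```python
import numpy as np
from scipy import special

def a0k(lam):
    # Same closed-form coefficient as a0k_sym defined earlier
    return 2 * (-4 * special.jn(0, lam) / lam**2
                + 16 * special.jn(1, lam) / lam**3
                + 32 * special.jn(0, lam) / lam**4
                - 64 * special.jn(1, lam) / lam**5) / special.jn(1, lam)**2

zeros = special.jn_zeros(0, 30)
# Series evaluated at r = 0, t = 0; J_0(0) = 1 and cos(0) = 1
u_center = sum(a0k(lam) * special.jn(0, lam * 0.0) for lam in zeros)
print(u_center)  # close to 1.0 = f(0)
```

The alternating coefficients decay roughly like $\lambda^{-5/2}$, so 30 terms already reproduce $f(0)$ to well under a percent.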
def tambor(n, r_max, v, kth_zero, nt, t):
r = np.r_[0:r_max:100j]
angle = np.r_[0:2*np.pi:200j]
ceros = special.jn_zeros(0, nt)
lamb = ceros[kth_zero]
u = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u
Explanation: Note that the initial condition at $t = 0$ is satisfied by the solution we found.
Homework
Suppose that $a = 1$, $v = 1$ and that the initial conditions are:
$$ f(r,\theta) = (1- r^4)\cos(\theta)\quad\quad g(r,\theta) = 0$$
End of Module 1
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Lázaro Alonso. Modified by Esteban Jiménez Rodríguez.
</footer>
Old version (the only part that may still be useful here are the programs)
They may be of some use for your homework.
So, first let us look at some normal modes of the system. For example (greatly simplified),
$$u(r,\theta, t)_{nk} = J_{n}(\lambda_{nk} r)\,\cos(n\theta)\,\cos(\lambda_{nk} v t)$$
The following function implements this simplified case.
End of explanation
x, y, u = tambor(1, 1, 1, 0, 15, 0)
plt.figure(figsize = (6, 5))
plt.pcolor(x, y, u, cmap = 'viridis')
plt.axis('off')
plt.colorbar()
plt.show()
Explanation: So, for example, if $n = 1$, $a = 1$, $v = 1$, $k = 1$ and $t = 0$, this is the vibration mode $(n,k)\rightarrow (1,1)$.
End of explanation
def tambor_nk(t = 0, n = 0, kth=0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor(n, 1, 1, kth, 15, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_nk, t = (0, 15,.01), n = (0, 10, 1), kth = (0, 10, 1));
Explanation: Now let us see what all the other vibration modes $(n,k)$ look like.
End of explanation
def tambor_n_allk(n, r_max, v, nk_zeros, t):
r = np.r_[0:r_max:100j]
angle = np.r_[0:2*np.pi:200j]
ceros = special.jn_zeros(0, nk_zeros)
lamb = ceros[0]
u0 = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
for cero in range(1, nk_zeros):
lamb = ceros[cero]
u = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
u0 += u
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u0
def tambor_n(t = 0, n = 0):
fig = plt.figure(figsize = (6,5))
ax = fig.add_subplot(1, 1, 1)
x, y, u = tambor_n_allk(n, 1, 1, 15, t)
im = ax.pcolor(x, y, u, cmap = 'viridis')
ax.set_xlim(xmin = -1.1, xmax = 1.1)
ax.set_ylim(ymin = -1.1, ymax = 1.1)
plt.colorbar(im)
plt.axis('off')
fig.canvas.draw()
interact_manual(tambor_n, t = (0, 15,.01), n = (0, 10, 1));
Explanation: Now we might want to know how the membrane behaves when we sum over a set of modes $k$. That is,
$$u(r,\theta, t)_{n} = \sum_{k = 1} u(r,\theta, t)_{nk} = \sum_{k = 1} J_{n}(\lambda_{nk} r)\,\cos(n\theta)\,\cos(\lambda_{nk} v t) $$
The usual way to do this is to treat the sum as a Fourier series; that is, this sum is missing a coefficient $A_{nk}$, but for simplicity we will not include that term here.
A possible function to do this would be,
End of explanation
def order_n(n, ceros, nk_zeros, angle, v, r, t):
lamb = ceros[0]
u0 = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
for cero in range(1, nk_zeros):
lamb = ceros[cero]
u = np.array([special.jn(n, lamb* var_r) * np.cos(n * angle)
* np.cos(lamb * v * t) for var_r in r])
u0 += u
return u0
def tambor(orden_n, r_max, v, nk_zeros, t):
r = np.r_[0:r_max:100j]
angle = np.r_[0:2*np.pi:100j]
ceros = special.jn_zeros(0, nk_zeros)
u0 = order_n(0, ceros, nk_zeros, angle, v, r, t)
for n in range(1, orden_n):
u = order_n(n, ceros, nk_zeros, angle, v, r, t)
u0 += u
x = np.array([var_r * np.cos(angle) for var_r in r])
y = np.array([var_r * np.sin(angle) for var_r in r])
return x, y, u0
x, y, u = tambor(10, 1, 1, 5, 5)
plt.figure(figsize = (5, 5))
plt.pcolor(x, y, u, cmap = 'inferno')
plt.axis('on')
plt.show()
Explanation: Finally, we are left with the case where we sum over all the modes $n$. That is,
End of explanation |
8,681 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Given a pandas DataFrame, how does one convert several binary columns (where 0 denotes the value exists, 1 denotes it doesn't) into a single categorical column? | Problem:
import pandas as pd
df = pd.DataFrame({'A': [0, 1, 1, 1, 0, 1],
'B': [1, 0, 1, 1, 1, 0],
'C': [1, 1, 0, 1, 1, 1],
'D': [1, 1, 1, 0, 1, 1]})
df["category"] = df.idxmin(axis=1) |
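A quick check of the idxmin approach on the same frame: since 0 marks the value that exists, each row's category is the column holding its 0 (row 0 has its only 0 in column 'A', and so on):

```python
import pandas as pd

df = pd.DataFrame({'A': [0, 1, 1, 1, 0, 1],
                   'B': [1, 0, 1, 1, 1, 0],
                   'C': [1, 1, 0, 1, 1, 1],
                   'D': [1, 1, 1, 0, 1, 1]})
df["category"] = df.idxmin(axis=1)
print(df["category"].tolist())  # ['A', 'B', 'C', 'D', 'A', 'B']
```

Note that if a row contained more than one 0, idxmin would return only the first such column.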
8,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Python and MySQL tutorial </center>
<center> Author
Step1: Calculator
Step2: Strings
Step3: show ' and " in a string
Step4: span multiple lines
Step5: slice and index
Step6: Index in the Python way
Step7: List
Step8: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Mutable
Step9: Nest lists
Step10: tuple
similar to list, but immutable (element cannot be changed)
Step11: dict
Step12: Quiz
Step13: while
Fibonacci series
Step14: for
Step15: Crawl the reviews for UT Dallas at Yelp.com
The University of Texas at Dallas is reviewed on Yelp.com. It shows on this page that it attracted 38 reviews so far from various reviewers. You learn from the webpage that Yelp displays at most 20 recommended reviews per page and we need to go to page 2 to see reviews 21 through 38. You notice that the URL in the address box of your browser changes when you click on the Next page. Previously, on page 1, the URL is
Step17: Define function
Step18: Data I/O
Create some data in Python and populate the database with the created data. We want to create a table with 3 columns
Step19: MySQL
Install MySQL 5.7 Workbench first following this link. You might also need to install the prerequisites listed here before you can install the Workbench. The Workbench is an interface to interact with MySQL database. The actual MySQL database server requires a second step
Step20: Quiz
Step21: Remember that we use Python to save 50 kids' infomation into a csv file named data.csv first and then use the load command in MySQL to import the data? We don't actually need to save the data.csv file to hard disk. And we can "load" the same data into database without leaving Python.
Step22: To get better understanding of the table we just created. We will use MySQL command line again.
Step23: Again, you can actually do everything in Python without going to the MySQL workbench.
Step24: Now we want to add one new column of mother_name to record the mother's name for each child in the child care.
Step25: Check if you've updated the data successfully in MySQL database from Python
Step26: Regular expression in Python
Before you run this part, you need to download the digits.txt and spaces.txt files to the same folder as this notebook
What's in the digits.txt file?
Step27: How can I find all the numbers in a file like digits.txt?
Step28: How can I find all the equations?
Step29: The equations seem to be incorrect, how can I correct them without affecting other text information?
Step30: Preprocessing a text file with various types of spaces.
Step31: More about index
Step32: How about selecting every other character?
Step33: Negative index
Step34: More about list
Step35: Versatile features of a list
Step36: How to get the third power of integers between 0 and 10.
Step37: Target | Python Code:
width = 20
height = 5*9
width * height
Explanation: <center> Python and MySQL tutorial </center>
<center> Author: Cheng Nie </center>
<center> Check chengnie.com for the most recent version </center>
<center> Current Version: Feb 18, 2016</center>
Python Setup
Since most students in this class use Windows 7, I will use Windows 7 for illustration of the setup. Setting up the environmnet in Mac OS and Linux should be similar. Please note that the code should produce the same results whichever operating system (even on your smart phone) you are using because Python is platform independent.
Download the Python 3.5 version of Anaconda that matches your operating system from this link. You can accept the default options during installation. To see if your Windows is 32 bit or 64 bit, check here
You can save and run this document using the Jupyter notebook (previously known as IPython notebook). Another tool that I recommend would be PyCharm, which has a free community edition.
This is a tutorial based on the official Python Tutorial for Python 3.5.1. If you need a little more motivation to learn this programming language, consider reading this article.
Numbers
End of explanation
tax = 8.25 / 100
price = 100.50
price * tax
price + _
round(_, 2)
Explanation: Calculator
End of explanation
print('spam email')
Explanation: Strings
End of explanation
# This would cause error
print('doesn't')
# One way of doing it correctly
print('doesn\'t')
# Another way of doing it correctly
print("doesn't")
Explanation: show ' and " in a string
End of explanation
print('''
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
''')
print('''Cheng highly recommends Python programming language''')
Explanation: span multiple lines
End of explanation
word = 'HELP' + 'A'
word
Explanation: slice and index
End of explanation
word[0]
word[4]
# endding index not included
word[0:2]
word[2:4]
# length of a string
len(word)
Explanation: Index in the Python way
End of explanation
a = ['spam', 'eggs', 100, 1234]
a
a[0]
a[3]
a[2:4]
sum(a[2:4])
Explanation: List
End of explanation
a
a[2] = a[2] + 23
a
Explanation: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Mutable
End of explanation
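Besides `sum` and `len`, a few more built-ins work directly on lists; a small illustration (the sample numbers are made up):

```python
nums = [3, 1, 4, 1, 5, 9, 2, 6]

length = len(nums)      # number of elements
total = sum(nums)       # 3+1+4+1+5+9+2+6
smallest = min(nums)
largest = max(nums)
ordered = sorted(nums)  # returns a NEW sorted list; nums is unchanged

print(length, total, smallest, largest)
print(ordered)
```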
q = [2, 3]
p = [1, q, 4]
p
len(p)
p[1]
p[1][0]
Explanation: Nest lists
End of explanation
x=(1,2,3,4)
x[0]
x[0]=7 # it will raise error since tuple is immutable
Explanation: tuple
similar to list, but immutable (element cannot be changed)
End of explanation
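Although tuples are immutable, packing and unpacking them is very common — it is what makes the one-line swap (and the multiple assignment used in the Fibonacci examples) work:

```python
point = 12, 7        # tuple packing (parentheses are optional)
x, y = point         # tuple unpacking
print(x, y)          # 12 7

a, b = 1, 2
a, b = b, a          # swap two variables in one line
print(a, b)          # 2 1
```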
tel = {'jack': 4098, 'sam': 4139}
tel['dan'] = 4127
tel
tel['jack']
del tel['sam']
tel
tel['mike'] = 4127
tel
# Is dan in the dict?
'dan' in tel
for key in tel:
print('key:', key, '; value:', tel[key])
import collections
od = collections.OrderedDict(sorted(tel.items()))
od
Explanation: dict
End of explanation
x = int(input("Please enter an integer for x: "))
if x < 0:
x = 0
print('Negative; changed to zero')
elif x == 0:
print('Zero')
elif x == 1:
print('Single')
else:
print('More')
Explanation: Quiz: how to print the tel dict sorted by the key?
Control of flow
if
Ask a user to input a number; if it's negative, set x to 0 and report the change; if it's 0 or 1, print 'Zero' or 'Single' respectively; otherwise print 'More'.
End of explanation
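One possible answer to the quiz above — iterating over `sorted(tel)` prints the keys in order without needing `OrderedDict`:

```python
tel = {'jack': 4098, 'dan': 4127, 'mike': 4127}

for key in sorted(tel):          # sorted() returns the keys in sorted order
    print(key, ':', tel[key])

# or build a sorted list of (key, value) pairs:
pairs = sorted(tel.items())
print(pairs)
```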
# multiple assignment to assign two variables at the same time
a, b = 0, 1
while a < 10:
print(a)
a, b = b, a+b
Explanation: while
Fibonacci series: the sum of two elements defines the next with the first two elements to be 0 and 1.
End of explanation
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for i in words:
print(i, len(i))
Explanation: for
End of explanation
# crawl_UTD_reviews
# Author: Cheng Nie
# Email: me@chengnie.com
# Date: Feb 8, 2016
# Updated: Feb 12, 2016
from urllib.request import urlopen
num_pages = 2
reviews_per_page = 20
# the file we will save the rating and date
out_file = open('UTD_reviews.csv', 'w')
# the url that we need to locate the page for UTD reviews
url = 'http://www.yelp.com/biz/university-of-texas-at-dallas-\
richardson?start={start_number}'
# the three string patterns we just explained
review_start_pattern = '<div class="review-wrapper">'
rating_pattern = '<i class="star-img stars_'
date_pattern = '"datePublished" content="'
reviews_count = 0
for page in range(num_pages):
print('processing page', page)
# open the url and save the source code string to page_content
html = urlopen(url.format(start_number = page * reviews_per_page))
page_content = html.read().decode('utf-8')
# locate the beginning of an individual review
review_start = page_content.find(review_start_pattern)
while review_start != -1:
# it means there at least one more review to be crawled
reviews_count += 1
# get the rating
cut_front = page_content.find(rating_pattern, review_start) \
+ len(rating_pattern)
cut_end = page_content.find('" title="', cut_front)
rating = page_content[cut_front:cut_end]
# get the date
cut_front = page_content.find(date_pattern, cut_end) \
+ len(date_pattern)
cut_end = page_content.find('">', cut_front)
date = page_content[cut_front:cut_end]
# save the data into out_file
out_file.write(','.join([rating, date]) + '\n')
review_start = page_content.find(review_start_pattern, cut_end)
print('crawled', reviews_count, 'reviews so far')
out_file.close()
Explanation: Crawl the reviews for UT Dallas at Yelp.com
The University of Texas at Dallas is reviewed on Yelp.com. It shows on this page that it attracted 38 reviews so far from various reviewers. You learn from the webpage that Yelp displays at most 20 recommended reviews per page and we need to go to page 2 to see reviews 21 through 38. You notice that the URL in the address box of your browser changes when you click on the Next page. Previously, on page 1, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson
On page 2, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson?start=20
You learn that Yelp probably uses this ?start=20 to skip (or offset, in MySQL language) the first 20 records and show you the next 18 reviews. You can use this pattern of going to the next page to enumerate all the pages of a business on Yelp.com.
In this exmaple, we are going get the rating (number of stars) and the date for each of these 38 reviews.
The general procedure to crawl any web page is the following:
Look for the string patterns proceeding and succeeding the information you are looking for in the source code of the page (the html file).
Write a program to enumerate (for or while loop) all the pages.
For this example, I did a screenshot with my annotation to illustrate the critical patterns in the Yelp page for UTD reviews.
review_start_pattern is a variable that stores the string '<div class="review-wrapper">' to locate the beginning of an individual review.
rating_pattern is a variable that stores the string '<i class="star-img stars_' to locate the rating.
date_pattern is a variable that stores the string '"datePublished" content="' to locate the date of the rating.
It takes some trial and error to figure out which string patterns are good for locating the information you need in an html. For example, I found that '<div class="review-wrapper">' appeared exactly 20 times in the webpage, which is a good indication that it corresponds to the 20 individual reviews on the page (the review-wrapper tag seems to imply that too).
End of explanation
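The cut_front/cut_end slicing technique used in the crawler can be tried on a tiny snippet (the HTML string below is invented for illustration, mimicking the pattern described above):

```python
snippet = '<i class="star-img stars_4" title="4.0 star rating">'

rating_pattern = '<i class="star-img stars_'
cut_front = snippet.find(rating_pattern) + len(rating_pattern)
cut_end = snippet.find('" title="', cut_front)
rating = snippet[cut_front:cut_end]
print(rating)  # 4
```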
def fib(n): # write Fibonacci series up to n
    """Print a Fibonacci series up to n."""
a, b = 0, 1
while a < n:
print(a)
a, b = b, a+b
fib(200)
fib(2000000000000000) # do not need to worry about the type of a,b
Explanation: Define function
End of explanation
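A small variant of the function above that returns the series as a list instead of printing it (so the result can be reused):

```python
def fib_list(n):
    """Return a list with the Fibonacci series up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a + b
    return result

print(fib_list(100))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```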
# output for eyeballing the data
import string
import random
# fix the pseudo-random sequences for easy replication
# It will generate the same random sequences
# of nubmers/letters with the same seed.
random.seed(123)
for i in range(50):
# Data values separated by comma(csv file)
print(i+1,random.choice(string.ascii_uppercase),
random.choice(range(6)), sep=',')
# write the data to a file called data.csv
random.seed(123)
out_file=open('data.csv','w')
columns=['id','name','age']
out_file.write(','.join(columns)+'\n')
for i in range(50):
row=[str(i+1),random.choice(string.ascii_uppercase),
str(random.choice(range(6)))]
out_file.write(','.join(row)+'\n')
else:
out_file.close()
# load data back into Python
for line in open('data.csv', 'r'):
print(line)
# To disable to the new line added for each print
# use the end parameter in print function
for line in open('data.csv', 'r'):
print(line, end = '')
Explanation: Data I/O
Create some data in Python and populate the database with the created data. We want to create a table with 3 columns: id, name, and age to store information about 50 kids in a day care.
The various modules that extend the basic Python funtions are indexed here.
End of explanation
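The effect of `random.seed` can be checked directly — reseeding with the same value replays the identical pseudo-random sequence, which is why the csv above is reproducible:

```python
import random

random.seed(123)
first_run = [random.randint(0, 100) for _ in range(5)]

random.seed(123)
second_run = [random.randint(0, 100) for _ in range(5)]

print(first_run)
print(second_run)
print(first_run == second_run)  # True
```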
# These commands are executed in the MySQL query tab, not in Python.
# In MySQL, you need to end all commands with ;
#
# ----------------------- In MySQL ------------------
# display the existing databases
show databases;
# create a database named test
create database test;
# choose a database for future commands
use test;
# display the tables in test database
show tables;
# create a new table named example
create table example(
id int not null,
name varchar(30),
age tinyint,
primary key(id));
# now we should have the example table
show tables;
# how was the table example defined again?
desc example;
# is there anything in the example table?
select * from example;
# import csv file into MySQL database
load data local infile "C:\\Users\\cxn123430\\Downloads\\data.csv" into table test.example FIELDS TERMINATED BY ',' lines terminated by '\r\n' ignore 1 lines;
# is there anything now?
select * from example;
# drop the table
drop table example;
# does the example table still exist?
show tables;
Explanation: MySQL
Install MySQL 5.7 Workbench first following this link. You might also need to install the prerequisites listed here before you can install the Workbench. The Workbench is an interface to interact with MySQL database. The actual MySQL database server requires a second step: run the MySQL Installer, then add and intall the MySQL servers using the Installer. You can accept the default options during installation. Later, you will connect to MySQL using the password you set during the installation and configuration. I set the password to be pythonClass.
The documentation for MySQL is here.
To get comfortable with it, you might find this tutorial of Structured Query Language(SQL) to be helpful.
End of explanation
#
# ----------------------- In Windows command line(cmd) ------------------
conda install mysql-connector-python
Explanation: Quiz: import the crawled Yelp review file UTD_reviews.csv into a table in your database.
Use Python to access MySQL database
Since the official MySQL 5.7 provides support for Python up to version 3.4 as of writing this tutorial, we need to install a package named mysql-connector-python to provide support for the cutting-edge Python 3.5. Execute the following line in the Windows command line to install it.
This is relatively easy since you have Anaconda installed. We can use the conda command to install that package in the Windows command line.
End of explanation
#
# ----------------------- In Python ------------------
# access table from Python
# connect to MySQL in Python
import mysql.connector
cnx = mysql.connector.connect(user='root',
password='pythonClass',
database='test')
# All DDL (Data Definition Language) statements are
# executed using a handle structure known as a cursor
cursor = cnx.cursor()
# create a table named example
cursor.execute('''create table example(
id int not null,
name varchar(30),
age tinyint,
primary key(id));''')
cnx.commit()
# write the same data to the example table without saving a csv file
query0_template = '''insert into example (id, name, age) \
values ({id_num},"{c_name}",{c_age});'''
random.seed(123)
for i in range(50):
query0 = query0_template.format(id_num = i+1,
c_name = random.choice(string.ascii_uppercase),
c_age = random.choice(range(6)))
print(query0)
cursor.execute(query0)
cnx.commit()
Explanation: Remember that we use Python to save 50 kids' information into a csv file named data.csv first and then use the load command in MySQL to import the data? We don't actually need to save the data.csv file to the hard disk, and we can "load" the same data into the database without leaving Python.
End of explanation
#
# ----------------------- In MySQL ------------------
# To get the total number of records
select count(*) from example;
# To get the age histogram
select distinct age, count(*) from example group by age;
# create a copy of the example table for modifying.
create table e_copy select * from example;
select * from e_copy;
# note that the primary key is not copied to the e_copy table
desc e_copy;
# add the primary key to e_copy table using the alter command
alter table e_copy add primary key(id);
# is it done correctly?
desc e_copy;
# does MySQL take the primary key seriously?
insert into e_copy (id, name, age) values (null,'P',6);
insert into e_copy (id, name, age) values (3,'P',6);
# alright, let's insert something else
insert into e_copy (id, name, age) values (51,'P',6);
insert into e_copy (id, name, age) values (52,'Q',null);
insert into e_copy (id, name, age) values (54,'S',null),(55,'T',null);
insert into e_copy (id, name) values (53,'R');
# who is the child with id of 53?
select * from e_copy where id = 53;
# update the age for this child.
update e_copy set age=3 where id=53;
select * from e_copy where id = 53;
# what's inside the table now?
select * from e_copy;
Explanation: To get a better understanding of the table we just created, we will use the MySQL command line again.
End of explanation
#
# ----------------------- In Python ------------------
# query all the content in the e_copy table
cursor.execute('select * from e_copy;')
for i in cursor:
print(i)
Explanation: Again, you can actually do everything in Python without going to the MySQL workbench.
End of explanation
#
# ----------------------- In Python ------------------
# # example for adding new info for existing record
cursor.execute('alter table e_copy add mother_name varchar(1) default null')
cnx.commit()
query1_template='update e_copy set mother_name="{m_name}" where id={id_num};'
random.seed(333)
for i in range(55):
query1=query1_template.format(m_name = random.choice(string.ascii_uppercase),id_num = i+1)
print(query1)
cursor.execute(query1)
cnx.commit()
#
# ----------------------- In Python ------------------
# example for insert new records
query2_template='insert into e_copy (id, name,age,mother_name) \
values ({id_num},"{c_name}",{c_age},"{m_name}")'
for i in range(10):
query2=query2_template.format(id_num = i+60,
c_name = random.choice(string.ascii_uppercase),
c_age = random.randint(0,6),
m_name = random.choice(string.ascii_uppercase))
print(query2)
cursor.execute(query2)
cnx.commit()
Explanation: Now we want to add one new column of mother_name to record the mother's name for each child in the child care.
End of explanation
#
# ----------------------- In Python ------------------
# query all the content in the e_copy table
cursor.execute('select * from e_copy;')
for i in cursor:
print(i)
#
# ----------------------- In MySQL ------------------
# Use the GUI to export the database into a self-contained file (the extension name would be sql)
Explanation: Check if you've updated the data successfully in MySQL database from Python
End of explanation
import re
infile=open('digits.txt','r')
content=infile.read()
print(content)
Explanation: Regular expression in Python
Before you run this part, you need to download the digits.txt and spaces.txt files to the same folder as this notebook
What's in the digits.txt file?
End of explanation
# Find all the numbers in the file
numbers=re.findall('\d+',content)
for n in numbers:
print(n)
Explanation: How can I find all the numbers in a file like digits.txt?
End of explanation
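If you don't have digits.txt handy, the same `re.findall` call works on any string — a self-contained sketch on made-up sample text:

```python
import re

sample = 'Order 66 shipped 3 items on day 12; total cost 250.'
numbers = re.findall(r'\d+', sample)   # one or more consecutive digits
print(numbers)  # ['66', '3', '12', '250']
```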
# find equations
equations=re.findall('(\d+)=\d+',content)
for e in equations:
print(e)
Explanation: How can I find all the equations?
End of explanation
# substitute equations to correct them
# use the left hand side number
# (raw strings avoid \1 being read as an octal escape character)
print(re.sub(r'(\d+)=\d+', r'\1=\1', content))
# another way to substitute equations to correct them
# use the right hand side number
print(re.sub('\d+=(\d+)','\\1=\\1',content))
# Save to file
print(re.sub('(\d+)=\d+','\\1=\\1',content), file = open('digits_corrected.txt', 'w'))
Explanation: The equations seem to be incorrect, how can I correct them without affecting other text information?
End of explanation
infile=open('spaces.txt','r')
content=infile.read()
print(content)
print(re.sub('[\t ]+','\t',content))
print(re.sub('[\t ]+','\t',content), file = open('spaces_corrected.txt', 'w'))
Explanation: Preprocessing a text file with various types of spaces.
End of explanation
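A file isn't required to try this substitution either — the same pattern normalizes any string with mixed runs of tabs and spaces (the sample line is invented):

```python
import re

messy = 'name\t \tage  \t city'
clean = re.sub('[\t ]+', '\t', messy)   # collapse any run of tabs/spaces into one tab
print(repr(clean))  # 'name\tage\tcity'
```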
word = 'HELP' + 'A'
word
# first index default to 0 and second index default to the size
word[:2]
# It's equivalent to
word[0:2]
# Everything except the first two characters
word[2:]
# It's equivalent to
word[2:len(word)]
Explanation: More about index
End of explanation
# start: end: step
word[0::2]
# It's equivalent to
word[0:len(word):2]
Explanation: How about selecting every other character?
End of explanation
word[-1] # The last character
word[-2] # The last-but-one character
word[-2:] # The last two characters
word[:-2] # Everything except the last two characters
Explanation: Negative index
End of explanation
a = ['spam', 'eggs', 100, 1234]
a
a[-2]
a[1:-1]
a[:2] + ['bacon', 2*2]
3*a[:3] + ['Boo!']
Explanation: More about list
End of explanation
# Replace some items:
a[0:2] = [1, 12]
a
# Remove some:
del a[0:2] # or a[0:2] = []
a
# create some copies for change
b = a.copy()
c = a.copy()
# Insert some:
b[1:1] = ['insert', 'some']
b
# inserting at one position is not the same as changing one element
c[1] = ['insert', 'some']
c
Explanation: Versatile features of a list
End of explanation
# loop way
cubes = []
for x in range(11):
cubes.append(x**3)
cubes
# map way
def cube(x):
return x*x*x
list(map(cube, range(11)))
# list comprehension way
[x**3 for x in range(11)]
Explanation: How to get the third power of integers between 0 and 10.
End of explanation
result = []
for i in range(11):
if i%2 == 0:
result.append(i)
else:
print(result)
# Use if in list comprehension
[i for i in range(11) if i%2==0]
l=[1,3,5,6,8,10]
[i for i in l if i%2==0]
Explanation: Target: find the even number below 10
End of explanation |
8,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 05
Step1: Define a class called SOM. The constructor builds a grid of nodes, and also defines some helper ops
Step2: Time to use our newfound powers. Let's test it out on some data | Python Code:
%matplotlib inline
import tensorflow as tf
import numpy as np
Explanation: Ch 05: Concept 03
Self-organizing map
Import TensorFlow and NumPy:
End of explanation
class SOM:
def __init__(self, width, height, dim):
self.num_iters = 100
self.width = width
self.height = height
self.dim = dim
self.node_locs = self.get_locs()
# Each node is a vector of dimension `dim`
# For a 2D grid, there are `width * height` nodes
nodes = tf.Variable(tf.random_normal([width*height, dim]))
self.nodes = nodes
# These two ops are inputs at each iteration
x = tf.placeholder(tf.float32, [dim])
iter = tf.placeholder(tf.float32)
self.x = x
self.iter = iter
# Find the node that matches closest to the input
bmu_loc = self.get_bmu_loc(x)
self.propagate_nodes = self.get_propagation(bmu_loc, x, iter)
def get_propagation(self, bmu_loc, x, iter):
'''
Define the weight propagation function that will update weights of the best matching unit (BMU).
The intensity of weight updates decreases over time, as dictated by the `iter` variable.
'''
num_nodes = self.width * self.height
rate = 1.0 - tf.div(iter, self.num_iters)
alpha = rate * 0.5
sigma = rate * tf.to_float(tf.maximum(self.width, self.height)) / 2.
expanded_bmu_loc = tf.expand_dims(tf.to_float(bmu_loc), 0)
sqr_dists_from_bmu = tf.reduce_sum(tf.square(tf.subtract(expanded_bmu_loc, self.node_locs)), 1)
neigh_factor = tf.exp(-tf.div(sqr_dists_from_bmu, 2 * tf.square(sigma)))
rate = tf.multiply(alpha, neigh_factor)
rate_factor = tf.stack([tf.tile(tf.slice(rate, [i], [1]), [self.dim]) for i in range(num_nodes)])
nodes_diff = tf.multiply(rate_factor, tf.subtract(tf.stack([x for i in range(num_nodes)]), self.nodes))
update_nodes = tf.add(self.nodes, nodes_diff)
return tf.assign(self.nodes, update_nodes)
def get_bmu_loc(self, x):
'''
Define a helper function to located the BMU:
'''
expanded_x = tf.expand_dims(x, 0)
sqr_diff = tf.square(tf.subtract(expanded_x, self.nodes))
dists = tf.reduce_sum(sqr_diff, 1)
bmu_idx = tf.argmin(dists, 0)
bmu_loc = tf.stack([tf.mod(bmu_idx, self.width), tf.div(bmu_idx, self.width)])
return bmu_loc
def get_locs(self):
'''
Build a grid of nodes:
'''
locs = [[x, y]
for y in range(self.height)
for x in range(self.width)]
return tf.to_float(locs)
def train(self, data):
'''
Define a function to training the SOM on a given dataset:
'''
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(self.num_iters):
for data_x in data:
sess.run(self.propagate_nodes, feed_dict={self.x: data_x, self.iter: i})
centroid_grid = [[] for i in range(self.width)]
self.nodes_val = list(sess.run(self.nodes))
self.locs_val = list(sess.run(self.node_locs))
for i, l in enumerate(self.locs_val):
centroid_grid[int(l[0])].append(self.nodes_val[i])
self.centroid_grid = centroid_grid
Explanation: Define a class called SOM. The constructor builds a grid of nodes, and also defines some helper ops:
End of explanation
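The best-matching-unit lookup at the heart of `get_bmu_loc` can be sketched in plain NumPy, without TensorFlow (the small 2x2 grid and input vector below are made up):

```python
import numpy as np

width = 2                              # 2x2 grid -> 4 nodes
nodes = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.],
                  [1., 1., 1.]])
x = np.array([0.9, 0.1, 0.0])          # input vector

dists = np.sum((nodes - x) ** 2, axis=1)   # squared distance to every node
bmu_idx = int(np.argmin(dists))            # index of the closest node
bmu_loc = (bmu_idx % width, bmu_idx // width)  # back to (col, row) on the grid
print(bmu_idx, bmu_loc)
```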
import matplotlib.pyplot as plt
colors = np.array(
[[0., 0., 1.],
[0., 0., 0.95],
[0., 0.05, 1.],
[0., 1., 0.],
[0., 0.95, 0.],
[0., 1, 0.05],
[1., 0., 0.],
[1., 0.05, 0.],
[1., 0., 0.05],
[1., 1., 0.]])
som = SOM(4, 4, 3)
som.train(colors)
plt.imshow(som.centroid_grid)
plt.show()
Explanation: Time to use our newfound powers. Let's test it out on some data:
End of explanation |
8,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: Example 1
Step2: Example 2
Step3: Example 3
Step4: Example 4
Step5: Example 5
Step6: Example 6 | Python Code:
# Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import healpy as hp
from NPTFit import create_mask as cm # Module for creating masks
Explanation: Example 2: Creating Masks
In this example we show how to create masks using create_mask.py.
Often it is convenient to consider only a reduced Region of Interest (ROI) when analyzing the data. In order to do this we need to create a mask. The masks are boolean arrays where pixels labelled as True are masked and those labelled False are unmasked. In this notebook we give examples of how to create various masks.
The masks are created by create_mask.py and can be passed to an instance of nptfit via the function load_mask for a run, or an instance of dnds_analysis via load_mask_analysis for an analysis. If no mask is specified the code defaults to the full sky as the ROI.
NB: Before you can call functions from NPTFit, you must have it installed. Instructions to do so can be found here:
http://nptfit.readthedocs.io/
End of explanation
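Since the masks are just boolean arrays (True = masked), combining several of them is ordinary element-wise logic — a tiny sketch with invented 5-pixel masks:

```python
import numpy as np

band_mask = np.array([True, True, False, False, False])
ring_mask = np.array([False, True, True, False, False])

total_mask = band_mask | ring_mask   # a pixel is masked if either mask flags it
print(total_mask)
print(int(np.sum(~total_mask)))      # number of unmasked pixels
```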
example1 = cm.make_mask_total()
hp.mollview(example1, title='', cbar=False, min=0,max=1)
Explanation: Example 1: Mask Nothing
If no options are specified, create mask returns an empty mask. In the plot here and for those below, purple represents unmasked, yellow masked.
End of explanation
example2 = cm.make_mask_total(band_mask = True, band_mask_range = 30)
hp.mollview(example2, title='', cbar = False, min=0, max=1)
Explanation: Example 2: Band Mask
Here we show an example of how to mask a region either side of the plane - specifically we mask 30 degrees either side
End of explanation
example3a = cm.make_mask_total(l_mask = False, l_deg_min = -30, l_deg_max = 30,
b_mask = True, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3a,title='',cbar=False,min=0,max=1)
example3b = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30,
b_mask = False, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3b,title='',cbar=False,min=0,max=1)
example3c = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30,
b_mask = True, b_deg_min = -30, b_deg_max = 30)
hp.mollview(example3c,title='',cbar=False,min=0,max=1)
Explanation: Example 3: Mask outside a band in b and l
This example shows several methods of masking outside specified regions in galactic longitude (l) and latitude (b). The third example shows how when two different masks are specified, the mask returned is the combination of both.
End of explanation
example4a = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 0, ring_l = 0)
hp.mollview(example4a,title='',cbar=False,min=0,max=1)
example4b = cm.make_mask_total(mask_ring = True, inner = 30, outer = 180, ring_b = 0, ring_l = 0)
hp.mollview(example4b,title='',cbar=False,min=0,max=1)
example4c = cm.make_mask_total(mask_ring = True, inner = 30, outer = 90, ring_b = 0, ring_l = 0)
hp.mollview(example4c,title='',cbar=False,min=0,max=1)
example4d = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 45, ring_l = 45)
hp.mollview(example4d,title='',cbar=False,min=0,max=1)
Explanation: Example 4: Ring and Annulus Mask
Next we show examples of masking outside a ring or annulus. The final example demonstrates that the ring need not be at the galactic center.
End of explanation
random_custom_mask = np.random.choice(np.array([True, False]), hp.nside2npix(128))
example5 = cm.make_mask_total(custom_mask = random_custom_mask)
hp.mollview(example5,title='',cbar=False,min=0,max=1)
Explanation: Example 5: Custom Mask
In addition to the options above, we can also add in custom masks. In this example we highlight this by adding a random mask.
End of explanation
pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool)
example6 = cm.make_mask_total(band_mask = True, band_mask_range = 2,
mask_ring = True, inner = 0, outer = 30,
custom_mask = pscmask)
hp.mollview(example6,title='',cbar=False,min=0,max=1)
Explanation: Example 6: Full Analysis Mask including Custom Point Source Catalog Mask
Finally we show an example of a full analysis mask that we will use for an analysis of the Galactic Center Excess in Example 3 and 8. Here we mask the plane with a band mask, mask outside a ring and also include a custom point source mask. The details of the point source mask are given in Example 1.
NB: before the point source mask can be loaded, the Fermi Data needs to be downloaded. See details in Example 1.
End of explanation |
8,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying Risky P2P Loans
Abstract
The prevalence of a global Peer-to-Peer (P2P) economy, coupled with the recent deregulation of financial markets, has led to the widespread adoption of Artificial Intelligence driven by FinTech firms to manage risk when speculating on unsecured P2P debt obligations. After meticulously identifying ‘debt belonging to high-risk individuals’ by leveraging an ensemble of Machine Learning algorithms, these firms are able to find ideal trading opportunities.
While researching AI-driven portfolio management that favors risk-minimization strategies by unmasking subtle interactions amongst high-dimensional features to identify prospective trades that exhibit modest, low-risk gains, I was impressed that the overall portfolio
Step1: Notebook Config
Step2: Data Preprocessing
Load Dataset
Data used for this project comes directly from Lending Club’s historical loan records (the full record contains more than 100 columns).
Step3: Exploration
Summary
<b>Target
Step4: Data Munging
Cleaning
all_util, inq_last_12m
Drop features (all observations contain null/missing values)
revol_util
Remove the percent sign (%) from string
Convert to a float
earliest_cr_line, issue_d
Convert to datetime data type.
emp_length
Strip leading and trailing whitespace
Replace '< 1' with '0.5'
Replace '10+' with '10.5'
Fill null values with '-1.5'
Convert to float
Step5: Feature Engineering
New Features
loan_amnt_to_inc
the ratio of loan amount to annual income
earliest_cr_line_age
age of first credit line from when the loan was issued
avg_cur_bal_to_inc
the ratio of avg current balance to annual income
avg_cur_bal_to_loan_amnt
the ratio of avg current balance to loan amount
acc_open_past_24mths_groups
level of accounts opened in the last 2 yrs
Step6: Drop Features
Step7: Load & Prepare Function
Step8: Exploratory Data Analysis (EDA)
Helper Functions
Step9: Overview
Missing Data
Step10: Factor Analysis
Target
Step11: Summary Statistics
Step12: Predictive Modeling
Step13: Initializing Train/Test Sets
Shuffle and Split Data
Let's split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split.
Step14: Classification Models
Naive Predictor (Baseline)
Step15: Decision Tree Classifier
Step16: Random Forest Classifier
Step17: Blagging Classifier
Base Estimator -> RF
Step18: Base Estimator -> ExtraTrees
Step19: Evaluating Model Performance
Feature Importance (via RandomForestClassifier)
Step20: Model Selection
Comparative Analysis
Step21: Optimal Model
Step22: Optimizing Hyperparameters
ToDo | Python Code:
from IPython.display import display
from IPython.core.display import HTML
import warnings
warnings.filterwarnings('ignore')
import os
if os.getcwd().split('/')[-1] == 'notebooks':
os.chdir('../')
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# written by Gilles Louppe and distributed under the BSD 3 clause
from src.vn_datasci.blagging import BlaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import fbeta_score
from sklearn.metrics import make_scorer
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.metrics import classification_report
# self-authored library to facilitate ML classification and evaluation
from src.vn_datasci.skhelper import LearningModel, eval_db
Explanation: Classifying Risky P2P Loans
Abstract
The prevalence of a global Peer-to-Peer (P2P) economy, coupled with the recent deregulation of financial markets, has led to the widespread adoption of Artificial Intelligence by FinTech firms to manage risk when speculating on unsecured P2P debt obligations. After meticulously identifying ‘debt belonging to high-risk individuals’ by leveraging an ensemble of Machine Learning algorithms, these firms are able to find ideal trading opportunities.
While researching AI-driven portfolio management that favors risk-minimization strategies by unmasking subtle interactions among high-dimensional features to identify prospective trades with modest, low-risk gains, I was impressed that the overall portfolio realized a modest return through a multitude of individual gains and achieved an impressive Sharpe ratio stemming from infrequent losses and minimal portfolio volatility.
Project Overview
Objective
Build a binary classification model that predicts the "Charged Off" or "Fully Paid" status of a loan. By analyzing the predominant characteristics that differentiate the two classes, we can engineer new features that better enable our Machine Learning algorithms to minimize portfolio risk while observing better-than-average returns. Ultimately, the aim is to deploy this model to assist in placing trades on loans immediately after they are issued by Lending Club.
About P2P Lending
Peer-to-Peer (P2P) lending offers borrowers with bad credit a way to get the necessary funds to meet emergency deadlines. It might seem careless to lend even more money to people who have demonstrated an inability to repay loans in the past. However, by implementing Machine Learning algorithms to classify poor trade prospects, one can effectively minimize portfolio risk.
There is a large social component to P2P lending, for sociological factors (the stigma of defaulting) often play a greater role than financial metrics in determining an applicant's creditworthiness. For example, the "online friendships of borrowers act as signals of credit quality." (Lin et al., 2012)
The social benefit of providing finance for another individual has wonderful implications, and, while it is nice to engage in philanthropic activities, the motivating factor for speculating in P2P lending markets is financial gain, especially since the underlying debt is unsecured and investors bear the losses from defaults.
Project Setup
Import Libraries & Modules
End of explanation
from IPython.display import display
from IPython.core.display import HTML
import warnings
warnings.filterwarnings('ignore')
import os
if os.getcwd().split('/')[-1] == 'notebooks':
os.chdir('../')
%matplotlib inline
#%config figure_format='retina'
plt.rcParams.update({'figure.figsize': (10, 7)})
sns.set_context("notebook", font_scale=1.75, rc={"lines.linewidth": 1.25})
sns.set_style("darkgrid")
sns.set_palette("deep")
pd.options.display.width = 80
pd.options.display.max_columns = 50
pd.options.display.max_rows = 50
Explanation: Notebook Config
End of explanation
def load_dataset(path='data/raw/lc_historical.csv'):
lc = pd.read_csv(path, index_col='id', memory_map=True, low_memory=False)
lc.loan_status = pd.Categorical(lc.loan_status, categories=['Fully Paid', 'Charged Off'])
return lc
dataset = load_dataset()
Explanation: Data Preprocessing
Load Dataset
Data used for this project comes directly from Lending Club’s historical loan records (the full record contains more than 100 columns).
End of explanation
def calc_incomplete_stats(dataset):
warnings.filterwarnings("ignore", 'This pattern has match groups')
missing_data = pd.DataFrame(index=dataset.columns)
missing_data['Null'] = dataset.isnull().sum()
missing_data['NA_or_Missing'] = (
dataset.apply(lambda col: (
col.str.contains('(^$|n/a|^na$|^%$)', case=False).sum()))
.fillna(0).astype(int))
missing_data['Incomplete'] = (
(missing_data.Null + missing_data.NA_or_Missing) / len(dataset))
incomplete_stats = ((missing_data[(missing_data > 0).any(axis=1)])
.sort_values('Incomplete', ascending=False))
return incomplete_stats
def display_incomplete_stats(incomplete_stats):
stats = incomplete_stats.copy()
df_incomplete = (
stats.style
.set_caption('Missing')
.background_gradient(cmap=sns.light_palette("orange", as_cmap=True),
low=0, high=1, subset=['Null', 'NA_or_Missing'])
.background_gradient(cmap=sns.light_palette("red", as_cmap=True),
low=0, high=.6, subset=['Incomplete'])
.format({'Null': '{:,}', 'NA_or_Missing': '{:,}', 'Incomplete': '{:.1%}'}))
display(df_incomplete)
def plot_incomplete_stats(incomplete_stats, ylim_range=(0, 100)):
stats = incomplete_stats.copy()
stats.Incomplete = stats.Incomplete * 100
_ = sns.barplot(x=stats.index.tolist(), y=stats.Incomplete.tolist())
for item in _.get_xticklabels():
item.set_rotation(45)
_.set(xlabel='Feature', ylabel='Incomplete (%)',
title='Features with Missing or Null Values',
ylim=ylim_range)
plt.show()
def incomplete_data_report(dataset, display_stats=True, plot=True):
incomplete_stats = calc_incomplete_stats(dataset)
if display_stats:
display_incomplete_stats(incomplete_stats)
if plot:
plot_incomplete_stats(incomplete_stats)
incomplete_stats = load_dataset().pipe(calc_incomplete_stats)
display(incomplete_stats)
plot_incomplete_stats(incomplete_stats)
Explanation: Exploration
Summary
<b>Target:</b> loan-status
<b>Number of features:</b> 18
<b>Number of observations:</b> 138196
<b>Feature datatypes:</b>
<i>float64</i>: dti, bc_util, fico_range_low, percent_bc_gt_75, acc_open_past_24mths, annual_inc, recoveries, avg_cur_bal, loan_amnt
<i>object</i>: revol_util, earliest_cr_line, purpose, emp_length, home_ownership, addr_state, issue_d, loan_status
<b>Features with ALL missing or null values:</b>
inq_last_12m
all_util
<b>Features with SOME missing or null values:</b>
avg_cur_bal (30%)
bc_util (21%)
percent_bc_gt_75 (21%)
acc_open_past_24mths (20%)
emp_length (0.18%)
revol_util (0.08%)
Missing Data
Helper Functions
End of explanation
def clean_data(lc):
lc = lc.copy().dropna(axis=1, thresh=1)
dt_features = ['earliest_cr_line', 'issue_d']
lc[dt_features] = lc[dt_features].apply(
lambda col: pd.to_datetime(col, format='%Y-%m-%d'), axis=0)
cat_features =['purpose', 'home_ownership', 'addr_state']
lc[cat_features] = lc[cat_features].apply(pd.Categorical, axis=0)
lc.revol_util = (lc.revol_util
.str.extract('(\d+\.?\d?)', expand=False)
.astype('float'))
lc.emp_length = (lc.emp_length
.str.extract('(< 1|10\+|\d+)', expand=False)
.replace('< 1', '0.5')
.replace('10+', '10.5')
.fillna('-1.5')
.astype('float'))
return lc
dataset = load_dataset().pipe(clean_data)
Explanation: Data Munging
Cleaning
all_util, inq_last_12m
Drop features (all observations contain null/missing values)
revol_util
Remove the percent sign (%) from string
Convert to a float
earliest_cr_line, issue_d
Convert to datetime data type.
emp_length
Strip leading and trailing whitespace
Replace '< 1' with '0.5'
Replace '10+' with '10.5'
Fill null values with '-1.5'
Convert to float
End of explanation
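As a quick illustrative check (not part of the original notebook), the `emp_length` cleaning chain described above maps raw strings like these:

```python
import pandas as pd

# Illustrative check of the emp_length cleaning steps described above.
raw = pd.Series(['< 1 year', '10+ years', '3 years', None])
cleaned = (raw.str.extract(r'(< 1|10\+|\d+)', expand=False)
              .replace('< 1', '0.5')
              .replace('10+', '10.5')
              .fillna('-1.5')
              .astype('float'))
print(cleaned.tolist())  # [0.5, 10.5, 3.0, -1.5]
```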
def add_features(lc):
# ratio of loan amount to annual income
group_labels = ['low', 'avg', 'high']
lc['loan_amnt_to_inc'] = (
pd.cut((lc.loan_amnt / lc.annual_inc), 3, labels=['low', 'avg', 'high'])
.cat.set_categories(['low', 'avg', 'high'], ordered=True))
# age of first credit line from when the loan was issued
lc['earliest_cr_line_age'] = (lc.issue_d - lc.earliest_cr_line).astype(int)
# the ratio of avg current balance to annual income
lc['avg_cur_bal_to_inc'] = lc.avg_cur_bal / lc.annual_inc
# the ratio of avg current balance to loan amount
lc['avg_cur_bal_to_loan_amnt'] = lc.avg_cur_bal / lc.loan_amnt
# grouping level of accounts opened in the last 2 yrs
lc['acc_open_past_24mths_groups'] = (
pd.qcut(lc.acc_open_past_24mths, 3, labels=['low', 'avg', 'high'])
.cat.add_categories(['unknown']).fillna('unknown')
.cat.set_categories(['low', 'avg', 'high', 'unknown'], ordered=True))
return lc
dataset = load_dataset().pipe(clean_data).pipe(add_features)
Explanation: Feature Engineering
New Features
loan_amnt_to_inc
the ratio of loan amount to annual income
earliest_cr_line_age
age of first credit line from when the loan was issued
avg_cur_bal_to_inc
the ratio of avg current balance to annual income
avg_cur_bal_to_loan_amnt
the ratio of avg current balance to loan amount
acc_open_past_24mths_groups
level of accounts opened in the last 2 yrs
End of explanation
def drop_features(lc):
target_leaks = ['recoveries', 'issue_d']
other_features = ['earliest_cr_line', 'acc_open_past_24mths', 'addr_state']
to_drop = target_leaks + other_features
return lc.drop(to_drop, axis=1)
dataset = load_dataset().pipe(clean_data).pipe(add_features).pipe(drop_features)
Explanation: Drop Features
End of explanation
def load_and_preprocess_data():
return (load_dataset()
.pipe(clean_data)
.pipe(add_features)
.pipe(drop_features))
Explanation: Load & Prepare Function
End of explanation
def plot_factor_pct(dataset, feature):
if feature not in dataset.columns:
return
y = dataset[feature]
factor_counts = y.value_counts()
x_vals = factor_counts.index.tolist()
y_vals = ((factor_counts.values/factor_counts.values.sum())*100).round(2)
sns.barplot(y=x_vals, x=y_vals);
def plot_pct_charged_off(lc, feature):
lc_counts = lc[feature].value_counts()
charged_off = lc[lc.loan_status=='Charged Off']
charged_off_counts = charged_off[feature].value_counts()
charged_off_ratio = ((charged_off_counts / lc_counts * 100)
.round(2).sort_values(ascending=False))
x_vals = charged_off_ratio.index.tolist()
y_vals = charged_off_ratio
sns.barplot(y=x_vals, x=y_vals);
Explanation: Exploratory Data Analysis (EDA)
Helper Functions
End of explanation
processed_dataset = load_and_preprocess_data()
incomplete_stats = calc_incomplete_stats(processed_dataset)
display(incomplete_stats)
plot_incomplete_stats(incomplete_stats)
Explanation: Overview
Missing Data
End of explanation
processed_dataset.pipe(plot_factor_pct, 'loan_status')
Explanation: Factor Analysis
Target: loan_status
End of explanation
HTML(processed_dataset.pipe(pandas_profiling.ProfileReport).html)
Explanation: Summary Statistics
End of explanation
def to_xy(dataset):
y = dataset.pop('loan_status').cat.codes
X = pd.get_dummies(dataset, drop_first=True)
return X, y
Explanation: Predictive Modeling
End of explanation
X, y = load_and_preprocess_data().pipe(to_xy)
split_data = train_test_split(X, y, test_size=0.20, stratify=y, random_state=11)
X_train, X_test, y_train, y_test = split_data
train_test_sets = dict(
zip(['X_train', 'X_test', 'y_train', 'y_test'], [*split_data]))
(pd.DataFrame(
data={'Observations (#)': [X_train.shape[0], X_test.shape[0]],
'Percent (%)': ['80%', '20%'],
'Features (#)': [X_train.shape[1], X_test.shape[1]]},
index=['Training', 'Test'])
[['Percent (%)', 'Features (#)', 'Observations (#)']])
Explanation: Initializing Train/Test Sets
Shuffle and Split Data
Let's split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split.
End of explanation
dummy_model = LearningModel(
'Naive Predictor - Baseline', Pipeline([
('imp', Imputer(strategy='median')),
('clf', DummyClassifier(strategy='constant', constant=0))]))
dummy_model.fit_and_predict(**train_test_sets)
model_evals = eval_db(dummy_model.eval_report)
Explanation: Classification Models
Naive Predictor (Baseline)
End of explanation
tree_model = LearningModel(
'Decision Tree Classifier', Pipeline([
('imp', Imputer(strategy='median')),
('clf', DecisionTreeClassifier(class_weight='balanced', random_state=11))]))
tree_model.fit_and_predict(**train_test_sets)
tree_model.display_evaluation()
model_evals = eval_db(model_evals, tree_model.eval_report)
Explanation: Decision Tree Classifier
End of explanation
rf_model = LearningModel(
'Random Forest Classifier', Pipeline([
('imp', Imputer(strategy='median')),
('clf', RandomForestClassifier(
class_weight='balanced_subsample', random_state=11))]))
rf_model.fit_and_predict(**train_test_sets)
rf_model.display_evaluation()
model_evals = eval_db(model_evals, rf_model.eval_report)
Explanation: Random Forest Classifier
End of explanation
blagging_pipeline = Pipeline([
('imp', Imputer(strategy='median')),
('clf', BlaggingClassifier(
random_state=11, n_jobs=-1,
base_estimator=RandomForestClassifier(
class_weight='balanced_subsample', random_state=11)))])
blagging_model = LearningModel('Blagging Classifier (RF)', blagging_pipeline)
blagging_model.fit_and_predict(**train_test_sets)
blagging_model.display_evaluation()
model_evals = eval_db(model_evals, blagging_model.eval_report)
Explanation: Blagging Classifier
Base Estimator -> RF
End of explanation
blagging_clf = BlaggingClassifier(
random_state=11, n_jobs=-1,
base_estimator=ExtraTreesClassifier(
criterion='entropy', class_weight='balanced_subsample',
max_features=None, n_estimators=60, random_state=11))
blagging_model = LearningModel(
'Blagging Classifier (Extra Trees)', Pipeline([
('imp', Imputer(strategy='median')),
('clf', blagging_clf)]))
blagging_model.fit_and_predict(**train_test_sets)
blagging_model.display_evaluation()
model_evals = eval_db(model_evals, blagging_model.eval_report)
Explanation: Base Estimator -> ExtraTrees
End of explanation
rf_top_features = LearningModel('Random Forest Classifier',
Pipeline([('imp', Imputer(strategy='median')),
('clf', RandomForestClassifier(max_features=None,
class_weight='balanced_subsample', random_state=11))]))
rf_top_features.fit_and_predict(**train_test_sets)
rf_top_features.display_top_features(top_n=15)
rf_top_features.plot_top_features(top_n=10)
Explanation: Evaluating Model Performance
Feature Importance (via RandomForestClassifier)
End of explanation
display(model_evals)
Explanation: Model Selection
Comparative Analysis
End of explanation
blagging_model = LearningModel('Blagging Classifier (Extra Trees)',
Pipeline([('imp', Imputer(strategy='median')),
('clf', BlaggingClassifier(
base_estimator=ExtraTreesClassifier(
criterion='entropy', class_weight='balanced_subsample',
max_features=None, n_estimators=60, random_state=11),
random_state=11, n_jobs=-1))]))
blagging_model.fit_and_predict(**train_test_sets)
Explanation: Optimal Model
End of explanation
(pd.DataFrame(data={'Benchmark Predictor': [0.7899, 0.1603, 0.5203],
'Unoptimized Model': [0.7499, 0.2602, 0.6463],
'Optimized Model': ['', '', '']},
index=['Accuracy Score', 'F1-score', 'AUC'])
[['Benchmark Predictor', 'Unoptimized Model', 'Optimized Model']])
Explanation: Optimizing Hyperparameters
ToDo: Perform GridSearch...
Results:
End of explanation |
8,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TODO
Step1: Coefficient to account for the helium contribution to the gas mass (see Notes)
Step2: For the major axis
Step3: For the case of an infinitely thin disk
Step4: Two other mechanisms from http
Step5: Hunter et al (1998), 'competition with shear' according to Leroy
Step6: Rotation curve of a thin disk
Step7: Function that prints photometry statistics, with information on the total disk mass $M_d = 2\pi h^2 \Sigma(0)$ added (just keep in mind that pc must be converted to arcsec there)
Step8: Prefix for bad photometric models, so they can be excluded at the end.
Step9: Function that returns the total density profile for a two-disk model, when that is needed
Step10: Cycle over line styles so that curves stay distinguishable where many of them share one figure and their colors blend.
Step11: Paint the background as a "zebra" with a certain period.
Step12: Compare with the estimate of Romeo & Falstad (2013) https
Step13: For a thin disk
Step14: Comparison-with-observations function
Step15: Function that corrects the central surface brightness for inclination (reducing the disk to a face-on view). Taken from http
Step16: Function for analyzing the influence of parameters. It takes the standard parameters, turns them into lists, runs the calculation for everything in the list, and then measures the mean and std. Several parameters can be varied at once. | Python Code:
%run ../../utils/load_notebook.py
from instabilities import *
import numpy as np
Explanation: TODO: make this notebook importable as a module
End of explanation
He_coeff = 1.34
def flat_end(argument):
    '''Decorator that continues a function beyond `argument` at its last value (flat extrapolation)'''
def real_decorator(function):
def wrapper(*args, **kwargs):
if args[0] < argument:
return function(*args, **kwargs)
else:
return function(argument, *args[1:], **kwargs)
return wrapper
return real_decorator
Explanation: Coefficient to account for the helium contribution to the gas mass (see Notes):
End of explanation
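The `flat_end` decorator above can be exercised like this (my own toy example; the quadratic profile is purely illustrative):

```python
def flat_end(argument):
    '''Same decorator as above, restated so this demo is self-contained.'''
    def real_decorator(function):
        def wrapper(*args, **kwargs):
            if args[0] < argument:
                return function(*args, **kwargs)
            return function(argument, *args[1:], **kwargs)
        return wrapper
    return real_decorator

@flat_end(5.0)
def toy_profile(r):
    # toy profile; held flat at its r = 5 value farther out
    return r**2

print(toy_profile(3.0), toy_profile(10.0))  # 9.0 25.0
```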
# sig_maj_lim=None
# spl_maj=None
# @flat_end(sig_maj_lim)
# def sig_R_maj_minmin(r, spl_maj=spl_maj):
# return spl_maj(r).item()
# @flat_end(sig_maj_lim)
# def sig_R_maj_min(r, spl_maj=spl_maj):
# return spl_maj(r).item()/sqrt(sin_i**2 + 0.49*cos_i**2)
# @flat_end(sig_maj_lim)
# def sig_R_maj_max(r, spl_maj=spl_maj):
# return spl_maj(r).item()/sqrt(0.5*sin_i**2 + 0.09*cos_i**2)
# @flat_end(sig_maj_lim)
# def sig_R_maj_maxmax(r, spl_maj=spl_maj):
# return spl_maj(r)*sqrt(2)/sin_i
# @flat_end(sig_maj_lim)
# def sig_R_maj_maxmaxtrue(r, spl_maj=spl_maj):
# return spl_maj(r)/sin_i/sqrt(sigPhi_to_sigR_real(r))
# sig_min_lim=None
# spl_min=None
# @flat_end(sig_min_lim)
# def sig_R_minor_minmin(r, spl_min=spl_min):
# return spl_min(r).item()
# @flat_end(sig_min_lim)
# def sig_R_minor_min(r, spl_min=spl_min):
# return spl_min(r).item()/sqrt(sin_i**2 + 0.49*cos_i**2)
# @flat_end(sig_min_lim)
# def sig_R_minor_max(r, spl_min=spl_min):
# return spl_min(r).item()/sqrt(sin_i**2 + 0.09*cos_i**2)
# @flat_end(sig_min_lim)
# def sig_R_minor_maxmax(r, spl_min=spl_min):
# return spl_min(r)/sin_i
# TODO: move to proper place
def plot_data_lim(ax, data_lim):
    '''Vertical line marking the end of the data'''
ax.axvline(x=data_lim, ls='-.', color='black', alpha=0.5)
def plot_disc_scale(scale, ax, text=None):
    '''Marks the disk scale length'''
ax.plot([scale, scale], [0., 0.05], '-', lw=6., color='black')
if text:
ax.annotate(text, xy=(scale, 0.025), xytext=(scale, 0.065), textcoords='data', arrowprops=dict(arrowstyle="->"))
def plot_Q_levels(ax, Qs, style='--', color='grey', alpha=0.4):
    '''Draws horizontal lines at several $Q^{-1}$ levels'''
for Q in Qs:
ax.axhline(y=1./Q, ls=style, color=color, alpha=alpha)
def plot_2f_vs_1f(ax=None, total_gas_data=None, epicycl=None, gas_approx=None, sound_vel=None, scale=None, sigma_max=None, sigma_min=None, star_density_max=None,
star_density_min=None, data_lim=None, color=None, alpha=0.3, disk_scales=[], label=None):
    '''Figure comparing the 2F and 1F criteria for different photometric models and sig_R values;
    the total gas is passed in, and the result is NOT corrected for axisymmetric perturbations.'''
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=total_gas_data,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_max,
star_density=star_density_min))
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=total_gas_data,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_min,
star_density=star_density_max))
# invQg = map(lambda l: l*1.6, invQg)
# invQeff_min = map(lambda l: l*1.6, invQeff_min)
# invQeff_max = map(lambda l: l*1.6, invQeff_max)
rr = zip(*total_gas_data)[0]
ax.fill_between(rr, invQeff_min, invQeff_max, color=color, alpha=alpha, label=label)
ax.plot(rr, invQeff_min, 'd-', color=color, alpha=0.6)
ax.plot(rr, invQeff_max, 'd-', color=color, alpha=0.6)
ax.plot(rr, invQg, 'v-', color='b')
ax.set_ylim(0., 1.5)
ax.set_xlim(0., data_lim+50.)
# plot_SF(ax)
plot_data_lim(ax, data_lim)
for h, annot in disk_scales:
plot_disc_scale(h, ax, annot)
plot_Q_levels(ax, [1., 1.5, 2., 3.])
ax.legend()
Explanation: For the major axis: $\sigma^2_{maj} = \sigma^2_{\varphi}\sin^2 i + \sigma^2_{z}\cos^2 i$, which gives the approximate bounds
$$\sigma_{maj} < \frac{\sigma_{maj}}{\sqrt{\sin^2 i + 0.49\cos^2 i}}< \sigma_R = \frac{\sigma_{maj}}{\sqrt{f\sin^2 i + \alpha^2\cos^2 i}} ~< \frac{\sigma_{maj}}{\sqrt{0.5\sin^2 i + 0.09\cos^2 i}} < \frac{\sqrt{2}\sigma_{maj}}{\sin i} \text{ (or } \frac{\sigma_{maj}}{\sqrt{f}\sin i}\text{)},$$
or a more accurate estimate can be given by constructing $f$ (currently $0.5 < f < 1$).
For the minor axis: $\sigma^2_{min} = \sigma^2_{R}\sin^2 i + \sigma^2_{z}\cos^2 i$, with the bounds
$$\sigma_{min} < \frac{\sigma_{min}}{\sqrt{\sin^2 i + 0.49\cos^2 i}} < \sigma_R = \frac{\sigma_{min}}{\sqrt{\sin^2 i + \alpha^2\cos^2 i}} ~< \frac{\sigma_{min}}{\sqrt{\sin^2 i + 0.09\cos^2 i}} < \frac{\sigma_{min}}{\sin i}$$
Accordingly, we have 5 estimates from maj and 4 estimates from min.
End of explanation
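As a small numeric sketch (my own illustration, with a hypothetical inclination and dispersion), the outer major-axis bounds above evaluate to:

```python
import math

sigma_maj = 50.0        # hypothetical line-of-sight dispersion along the major axis, km/s
i = math.radians(45.0)  # hypothetical inclination
sin_i, cos_i = math.sin(i), math.cos(i)

lower = sigma_maj / math.sqrt(sin_i**2 + 0.49 * cos_i**2)        # f = 1, alpha = 0.7
upper = sigma_maj / math.sqrt(0.5 * sin_i**2 + 0.09 * cos_i**2)  # f = 0.5, alpha = 0.3
print('%.1f < sigma_R < %.1f' % (lower, upper))  # 57.9 < sigma_R < 92.1
```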
def epicyclicFreq_real(poly_gas, R, resolution):
    '''Exact calculation of the epicyclic frequency at radius R for a spline or a polynomial rotation curve'''
try:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.deriv()(R) / poly_gas(R)) / (R * resolution )
except:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.derivative()(R) / poly_gas(R)) / (R * resolution )
Explanation: For the case of an infinitely thin disk: $$\kappa^2=\frac{3}{R}\frac{d\Phi}{dR}+\frac{d^2\Phi}{dR^2}$$
where $\Phi$ is the gravitational potential; it need not be known, however, since there is a simpler formula: $$\kappa=\sqrt{2}\frac{\vartheta_c}{R}\sqrt{1+\frac{R}{\vartheta_c}\frac{d\vartheta_c}{dR}}$$
End of explanation
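A quick sanity check of the second formula (my own sketch): for a flat rotation curve the derivative term vanishes and $\kappa = \sqrt{2}\,\vartheta_c/R$:

```python
import numpy as np

poly_gas = np.poly1d([200.0])  # flat rotation curve, v_c = 200 km/s
R, resolution = 10.0, 1.0      # radius and an arbitrary unit scale

# same expression as used in epicyclicFreq_real above
kappa = np.sqrt(2.0) * poly_gas(R) * np.sqrt(1 + R * poly_gas.deriv()(R) / poly_gas(R)) / (R * resolution)
print(round(kappa, 2))  # sqrt(2) * 200 / 10 = 28.28
```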
def Sigma_crit_S04(gas_dens, r_gas, star_surf_dens):
return 6.1 * gas_dens / (gas_dens + star_surf_dens(r_gas))
Explanation: Two other mechanisms from http://iopscience.iop.org/article/10.1088/0004-6256/148/4/69/pdf:
Schaye (2004), 'cold gas phase':
$$\Sigma_g > 6.1 f_g^{0.3} Z^{-0.3} I^{0.23}$$
or, at a constant metallicity of 0.1 $Z_{sun}$ and an interstellar flux of ionizing photons of $10^6\ \mathrm{cm^{-2}\ s^{-1}}$:
$$\Sigma_g > 6.1 \frac{\Sigma_g}{\Sigma_g + \Sigma_s}$$
End of explanation
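For instance (hypothetical densities, my own illustration), with $\Sigma_g = 10$ and $\Sigma_s = 40\ M_\odot/\mathrm{pc}^2$ the threshold works out to:

```python
def sigma_crit_s04(gas_dens, star_dens):
    # same expression as Sigma_crit_S04 above, with the stellar density passed as a plain number
    return 6.1 * gas_dens / (gas_dens + star_dens)

print(sigma_crit_s04(10.0, 40.0))  # 6.1 * 10 / 50 = 1.22
```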
def oort_a(r, gas_vel):
try:
return 0.5 * (gas_vel(r)/r - gas_vel.deriv()(r))
except:
return 0.5 * (gas_vel(r)/r - gas_vel.derivative()(r))
def Sigma_crit_A(r, gas_vel, alpha, sound_vel):
G = 4.32
return alpha * (sound_vel*oort_a(r, gas_vel)) / (np.pi*G)
Explanation: Hunter et al (1998), 'competition with shear' according to Leroy:
$$\Sigma_A = \alpha_A\frac{\sigma_g A}{\pi G}$$
End of explanation
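As a rough numeric sketch (my own numbers; $\alpha_A = 2.5$ is a typical value and an assumption here): for a flat rotation curve $dv/dr = 0$, so Oort's $A = v/2r$:

```python
import numpy as np

def oort_a_flat(v, r):
    # Oort A for a flat rotation curve: 0.5 * (v/r - dv/dr) with dv/dr = 0
    return 0.5 * v / r

G = 4.32           # gravitational constant in the units used above
alpha_A = 2.5      # assumed typical value
sound_vel = 11.0   # km/s
A = oort_a_flat(200.0, 10.0)
sigma_A = alpha_A * sound_vel * A / (np.pi * G)
print(A, round(sigma_A, 2))  # 10.0 20.26
```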
from scipy.special import i0, i1, k0, k1
def disc_vel(r, Sigma0, h, scale, Sigma0_2=None, h_2=None):
G = 4.3
bessels = i0(0.5*r/h)*k0(0.5*r/h) - i1(0.5*r/h)*k1(0.5*r/h)
if h_2 is None:
return np.sqrt(2*np.pi*G*Sigma0*r*scale * 0.5*r/h * bessels)
    else:  # two-disk model
bessels2 = i0(0.5*r/h_2)*k0(0.5*r/h_2) - i1(0.5*r/h_2)*k1(0.5*r/h_2)
return np.sqrt(2*np.pi*G*Sigma0*r*scale * 0.5*r/h * bessels + 2*np.pi*G*Sigma0_2*r*scale * 0.5*r/h_2 * bessels2)
Explanation: Rotation curve of a thin disk:
$$\frac{v^2}{r} = 2\pi G \Sigma_0 \frac{r}{2h} \left[I_0(\frac{r}{2h})K_0(\frac{r}{2h}) - I_1(\frac{r}{2h})K_1(\frac{r}{2h})\right]$$
End of explanation
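A minimal single-disk sketch (my own hypothetical disk parameters) evaluating this expression near the rotation-curve peak at $r \approx 2.2h$:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def disc_vel_single(r, Sigma0, h, scale=1.0):
    # same expression as the single-disk branch of disc_vel above
    G = 4.3
    y = 0.5 * r / h
    bessels = i0(y) * k0(y) - i1(y) * k1(y)
    return np.sqrt(2 * np.pi * G * Sigma0 * r * scale * y * bessels)

v_peak = disc_vel_single(2.2 * 3.0, Sigma0=500.0, h=3.0)  # hypothetical Sigma0 and h
print(round(v_peak, 1))
```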
from tabulate import tabulate
import pandas as pd
def show_all_photometry_table(all_photometry, scale):
'''scale in kpc/arcsec'''
copy = [list(l) for l in all_photometry]
    # all of this is needed because two-disk models occur and must be handled differently
for entry in copy:
if type(entry[5]) == tuple:
entry[5] = (round(entry[5][0], 2), round(entry[5][1], 2))
else:
entry[5] = round(entry[5], 2)
for entry in copy:
if type(entry[4]) == tuple:
entry[4] = (round(entry[4][0], 2), round(entry[4][1], 2))
else:
entry[4] = round(entry[4], 2)
for entry in copy:
if type(entry[5]) == tuple:
entry.append(2*math.pi*entry[5][0]**2 * entry[-1][0](0) * (scale * 1000.)**2 +
2*math.pi*entry[5][1]**2 * entry[-1][1](0) * (scale * 1000.)**2)
else:
entry.append(2*math.pi*entry[5]**2 * entry[-1](0) * (scale * 1000.)**2)
for entry in copy:
if type(entry[5]) == tuple:
entry.append(entry[7][0](0) + entry[7][1](0))
else:
entry.append(entry[7](0))
df = pd.DataFrame(data=copy, columns=['Name', 'r_eff', 'mu_eff', 'n', 'mu0_d', 'h_disc', 'M/L', 'surf', 'M_d/M_sun', 'Sigma_0'])
df['M/L'] = df['M/L'].apply(lambda l: '%2.2f'%l)
# df['Sigma_0'] = df['surf'].map(lambda l:l(0))
df['Sigma_0'] = df['Sigma_0'].apply(lambda l: '%2.0f' % l)
# df['M_d/M_sun'] = 2*math.pi*df['h_disc']**2 * df['surf'].map(lambda l:l(0)) * (scale * 1000.)**2
df['M_d/M_sun'] = df['M_d/M_sun'].apply(lambda l: '%.2E.' % l)
df.drop('surf', axis=1, inplace=True)
print tabulate(df, headers='keys', tablefmt='psql', floatfmt=".2f")
Explanation: Function that prints statistics for the photometric models, with information on the total disk mass $M_d = 2\pi h^2 \Sigma(0)$ added (just keep in mind that pc must be converted to arcsec there):
End of explanation
BAD_MODEL_PREFIX = 'b:'
Explanation: Prefix for bad photometric models, so they can be excluded at the end.
End of explanation
def tot_dens(dens):
if type(dens) == tuple:
star_density = lambda l: dens[0](l) + dens[1](l)
else:
star_density = lambda l: dens(l)
return star_density
Explanation: Function that returns the total density profile for a two-disk model, when that is needed:
End of explanation
from itertools import cycle
lines = ["-","--","-.",":"]
linecycler = cycle(lines)
Explanation: Cycle over line styles so that curves stay distinguishable where many of them share one figure and their colors blend.
End of explanation
def foreground_zebra(ax, step, alpha):
for i in range(int(ax.get_xlim()[1])+1):
if i%2 == 0:
ax.axvspan(i*step, (i+1)*step, color='grey', alpha=alpha)
Explanation: Paint the background as a "zebra" with a certain period.
End of explanation
from math import pi
def romeo_Qinv(r=None, epicycl=None, sound_vel=11., sigma_R=None, star_density=None,
HI_density=None, CO_density=None, alpha=None, scale=None, gas_approx=None, verbose=False, show=False):
G = 4.32
kappa = epicycl(gas_approx, r, scale)
Q_star = kappa*sigma_R(r)/(pi*G*star_density(r))
Q_CO = kappa*sound_vel/(pi*G*CO_density)
Q_HI = kappa*sound_vel/(pi*G*HI_density)
T_CO, T_HI = 1.5, 1.5
if alpha > 0 and alpha <= 0.5:
T_star = 1. + 0.6*alpha**2
else:
T_star = 0.8 + 0.7*alpha
    # TODO: keep only one of show / verbose
if show:
print 'r={:7.3f} Qg={:7.3f} Qs={:7.3f} Qg^-1={:7.3f} Qs^-1={:7.3f}'.format(r, Q_HI, Q_star, 1./Q_HI, 1./Q_star)
dispersions = [sigma_R(r), sound_vel, sound_vel]
QTs = [Q_star*T_star, Q_HI*T_HI, Q_CO*T_CO]
components = ['star', 'HI', 'H2']
index = QTs.index(min(QTs))
if verbose:
print 'QTs: {}'.format(QTs)
print 'min index: {}'.format(index)
print 'min component: {}'.format(components[index])
sig_m = dispersions[index]
def W_i(sig_m, sig_i):
return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)
return W_i(sig_m, dispersions[0])/QTs[0] + W_i(sig_m, dispersions[1])/QTs[1] + W_i(sig_m, dispersions[2])/QTs[2], components[index]
Explanation: Let us compare with the estimate of Romeo & Falstad (2013) https://ui.adsabs.harvard.edu/#abs/2013MNRAS.433.1389R/abstract:
$$Q_N^{-1} = \sum_{i=1}^{N}\frac{W_i}{Q_iT_i}$$ where
$$Q_i = \frac{\kappa\sigma_{R,i}}{\pi G\Sigma_i}$$
$$T_i= \begin{cases} 1 + 0.6\left(\frac{\sigma_z}{\sigma_R}\right)^2_i, & \mbox{if } 0.0 \le \frac{\sigma_z}{\sigma_R} \le 0.5, \\ 0.8 + 0.7\left(\frac{\sigma_z}{\sigma_R}\right)_i, & \mbox{if } 0.5 \le \frac{\sigma_z}{\sigma_R} \le 1.0 \end{cases}$$
$$W_i = \frac{2\sigma_{R,m}\sigma_{R,i}}{\sigma_{R,m}^2 + \sigma_{R,i}^2},$$
$$m:\ \text{index of}\ \min(T_iQ_i)$$
In the most developed model there are 3 components: HI, CO, and stars. Their model assumes $(\sigma_z/\sigma_R)_{CO} = (\sigma_z/\sigma_R)_{HI} = 1$, i.e. $T_{CO} = T_{HI} = 1.5$. I take the sound speed in both media to be 11 km/s. In my two limiting cases the stars have $\alpha$ equal to 0.3 and 0.7, which gives $T_{0.3} = 1.05;\ T_{0.7}=1.29$.
End of explanation
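An illustrative check of the $T_i$ and $W_i$ ingredients above (my own numbers, matching the two limiting $\alpha$ cases):

```python
def T(alpha):
    # thickness correction from Romeo & Falstad (2013)
    return 1 + 0.6 * alpha**2 if alpha <= 0.5 else 0.8 + 0.7 * alpha

def W(sig_m, sig_i):
    # weight between component i and the component m with the smallest T*Q
    return 2.0 * sig_m * sig_i / (sig_m**2 + sig_i**2)

print(round(T(0.3), 3), round(T(0.7), 2))  # 1.054 1.29
print(round(W(11.0, 30.0), 3))             # 0.646
```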
def romeo_Qinv_thin(r=None, epicycl=None, sound_vel=11., sigma_R=None, star_density=None,
HI_density=None, CO_density=None, alpha=None, scale=None, gas_approx=None, verbose=False, show=False):
G = 4.32
kappa = epicycl(gas_approx, r, scale)
Q_star = kappa*sigma_R(r)/(pi*G*star_density(r))
Q_CO = kappa*sound_vel/(pi*G*CO_density)
Q_HI = kappa*sound_vel/(pi*G*HI_density)
    # TODO: keep only one of show / verbose
if show:
print 'r={:7.3f} Qg={:7.3f} Qs={:7.3f} Qg^-1={:7.3f} Qs^-1={:7.3f}'.format(r, Q_HI, Q_star, 1./Q_HI, 1./Q_star)
dispersions = [sigma_R(r), sound_vel, sound_vel]
QTs = [Q_star, Q_HI, Q_CO]
components = ['star', 'HI', 'H2']
index = QTs.index(min(QTs))
if verbose:
print 'QTs: {}'.format(QTs)
print 'min index: {}'.format(index)
print 'min component: {}'.format(components[index])
sig_m = dispersions[index]
def W_i(sig_m, sig_i):
return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)
return W_i(sig_m, dispersions[0])/QTs[0] + W_i(sig_m, dispersions[1])/QTs[1] + W_i(sig_m, dispersions[2])/QTs[2], components[index]
Explanation: For a thin disk:
End of explanation
def plot_RF13_vs_2F(r_g_dens=None, HI_gas_dens=None, CO_gas_dens=None, epicycl=None, sound_vel=None, sigma_R_max=None, sigma_R_min=None,
star_density=None, alpha_max=None, alpha_min=None, scale=None, gas_approx=None, thin=True, show=False):
    '''Gas densities are passed in NOT corrected for helium.'''
if thin:
romeo_Q = romeo_Qinv_thin
else:
romeo_Q = romeo_Qinv
fig = plt.figure(figsize=[20, 5])
ax = plt.subplot(131)
totgas = zip(r_g_dens, [He_coeff*(l[0]+l[1]) for l in zip(HI_gas_dens, CO_gas_dens)])[1:]
if show:
print 'sig_R_max case:'
romeo_min = []
for r, g, co in zip(r_g_dens, HI_gas_dens, CO_gas_dens):
rom, _ = romeo_Q(r=r, epicycl=epicycl, sound_vel=sound_vel, sigma_R=sigma_R_max,
star_density=star_density, HI_density=He_coeff*g, CO_density=He_coeff*co,
alpha=alpha_min, scale=scale, gas_approx=gas_approx, show=show)
romeo_min.append(rom)
if _ == 'star':
color = 'g'
elif _ == 'HI':
color = 'b'
else:
color = 'm'
ax.scatter(r, rom, 10, marker='o', color=color)
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=totgas,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_R_max,
star_density=star_density))
if show:
print 'sig_R_min case:'
romeo_max = []
for r, g, co in zip(r_g_dens, HI_gas_dens, CO_gas_dens):
rom, _ = romeo_Q(r=r, epicycl=epicycl, sound_vel=sound_vel, sigma_R=sigma_R_min,
star_density=star_density, HI_density=He_coeff*g, CO_density=He_coeff*co,
alpha=alpha_max, scale=scale, gas_approx=gas_approx, show=show)
romeo_max.append(rom)
if _ == 'star':
color = 'g'
elif _ == 'HI':
color = 'b'
else:
color = 'm'
ax.scatter(r, rom, 10, marker = 's', color=color)
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=totgas,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_R_min,
star_density=star_density))
ax.plot(r_g_dens[1:], invQeff_min, '-', alpha=0.5, color='r')
ax.plot(r_g_dens[1:], invQeff_max, '-', alpha=0.5, color='r')
plot_Q_levels(ax, [1., 1.5, 2., 3.])
ax.set_xlim(0)
ax.set_ylim(0)
ax.legend([matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='g', mec='none', marker='o'),
matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='b', mec='none', marker='o'),
matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='m', mec='none', marker='o')],
['star', 'HI', 'H2'], numpoints=1, markerscale=1, loc='upper right') #add custom legend
ax.set_title('RF13: major component')
ax = plt.subplot(132)
ax.plot(romeo_min[1:], invQeff_min, 'o')
ax.plot(romeo_max[1:], invQeff_max, 'o', color='m', alpha=0.5)
ax.set_xlabel('Romeo')
ax.set_ylabel('2F')
ax.set_xlim(0., 1.)
ax.set_ylim(0., 1.)
ax.plot(ax.get_xlim(), ax.get_ylim(), '--')
ax = plt.subplot(133)
ax.plot(r_g_dens[1:], [l[1]/l[0] for l in zip(romeo_min[1:], invQeff_min)], 'o-')
ax.plot(r_g_dens[1:], [l[1]/l[0] for l in zip(romeo_max[1:], invQeff_max)], 'o-', color='m', alpha=0.5)
ax.set_xlabel('R')
ax.set_ylabel('[2F]/[Romeo]');
Explanation: Comparison function against the observations:
End of explanation
def mu_face_on(mu0d, cos_i):
return mu0d + 2.5*np.log10(1./cos_i)
Explanation: Function that corrects the central surface brightness for inclination (reducing the disk to its face-on view). Taken from http://www.astronet.ru/db/msg/1166765/node20.html, eq. (61).
End of explanation
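As a quick numeric sanity check of this correction (a self-contained sketch with hypothetical values: a disk with central surface brightness mu0d = 20.0 mag/arcsec^2 observed at inclination i = 60 deg, i.e. cos i = 0.5):

```python
import math

def mu_face_on(mu0d, cos_i):
    # Inclination correction: a tilted disk appears surface-brighter, so the
    # face-on value is fainter (larger) by 2.5*log10(1/cos i) magnitudes.
    return mu0d + 2.5 * math.log10(1.0 / cos_i)

print(mu_face_on(20.0, 0.5))  # 20.7525...: about 0.75 mag fainter when deprojected
```

For cos i = 1 (a disk already seen face-on) the correction vanishes.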
def plot_param_depend(ax=None, N=None, data_lim=None, color=None, alpha=0.3, disk_scales=[], label=None, **kwargs):
params = kwargs.copy()
for p in params.keys():
if p == 'total_gas_data':
depth = lambda L: isinstance(L, list) and max(map(depth, L))+1 #depth of nested lists
if depth(params[p]) == 1:
params[p] = [params[p]]*N
elif type(params[p]) is not list:
params[p] = [params[p]]*N
result = []
for i in range(N):
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=params['total_gas_data'][i],
epicycl=params['epicycl'][i],
gas_approx=params['gas_approx'][i],
sound_vel=params['sound_vel'][i],
scale=params['scale'][i],
sigma=params['sigma_max'][i],
star_density=params['star_density_min'][i]))
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=params['total_gas_data'][i],
epicycl=params['epicycl'][i],
gas_approx=params['gas_approx'][i],
sound_vel=params['sound_vel'][i],
scale=params['scale'][i],
sigma=params['sigma_min'][i],
star_density=params['star_density_max'][i]))
result.append((invQeff_min, invQeff_max))
rr = zip(*params['total_gas_data'][0])[0]
qmins = []
qmaxs = []
for ind, rrr in enumerate(rr):
qmin = [result[l][0][ind] for l in range(len(result))]
qmax = [result[l][1][ind] for l in range(len(result))]
qmins.append((np.mean(qmin), np.std(qmin)))
qmaxs.append((np.mean(qmax), np.std(qmax)))
ax.errorbar(rr, zip(*qmins)[0], fmt='o-', yerr=zip(*qmins)[1], elinewidth=6, alpha=0.3);
ax.errorbar(rr, zip(*qmaxs)[0], fmt='o-', yerr=zip(*qmaxs)[1])
ax.axhline(y=1., ls='-', color='grey')
ax.set_ylim(0.)
ax.set_xlim(0.)
plot_data_lim(ax, data_lim)
plot_Q_levels(ax, [1., 1.5, 2., 3.]);
Explanation: Function for analyzing the influence of the parameters. It takes the standard parameters, turns them into lists and runs the computation for every entry in the list, then measures the mean and std. Several parameters can be varied simultaneously.
End of explanation |
8,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference
Step1: imports for Python, Pandas
Step2: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source
Step3: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source | Python Code:
import pandas as pd
Explanation: JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader
data source: http://jsonstudio.com/resources/
End of explanation
import json
from pandas.io.json import json_normalize
Explanation: imports for Python, Pandas
End of explanation
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
Explanation: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization
End of explanation
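The same nested structure can also be flattened with prefixed column names (a sketch using the top-level pd.json_normalize available in pandas >= 1.0; the record_prefix argument and the default '.' separator are the only assumptions beyond the data above):

```python
import pandas as pd

data = [{'state': 'Florida',
         'info': {'governor': 'Rick Scott'},
         'counties': [{'name': 'Dade', 'population': 12345},
                      {'name': 'Broward', 'population': 40000}]},
        {'state': 'Ohio',
         'info': {'governor': 'John Kasich'},
         'counties': [{'name': 'Summit', 'population': 1234}]}]

# Prefix the nested record columns so they cannot clash with the meta columns
df = pd.json_normalize(data, record_path='counties',
                       meta=['state', ['info', 'governor']],
                       record_prefix='county.')
print(sorted(df.columns))
```

One row is produced per county, with the state-level fields repeated alongside.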
# load json as string
json.load(open('data/world_bank_projects_less.json'))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
Explanation: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/
End of explanation |
8,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Recursive Implementation of Minimax
This notebook implements the minimax algorithm in a pure form, i.e. it does not employ any memoization techniques.
In order to have some variation in our games, we use random numbers to choose between different optimal moves.
Step1: Given a player p, the function other(p) computes the opponent of p. This assumes that there are only two players and the set of all players is stored in the global variable Players.
Step2: The function value(State, player) takes two arguments
Step3: The function value_list takes three arguments
Step4: The function best_move takes two arguments
Step5: The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.
Step6: The function play_game plays a game on the given canvas. The game played is specified indirectly as follows
Step7: With the game tic-tac-toe represented as lists and without memoization, computing the value of the start state takes 7.11 seconds.
If we use a bitboard instead, it takes 3.85 seconds. However, the bitboard truly shines when we use memoization
Step8: The start state has the value 0 as neither player can force a win.
Step9: Let's draw the board.
Step10: Now it's time to play. In the input window that will pop up later, enter your move in the format "row,col" with no space between row and column. | Python Code:
import random
random.seed(1)
Explanation: A Recursive Implementation of Minimax
This notebook implements the minimax algorithm in a pure form, i.e. it does not employ any memoization techniques.
In order to have some variation in our games, we use random numbers to choose between different optimal moves.
End of explanation
other = lambda p: [o for o in Players if o != p][0]
Explanation: Given a player p, the function other(p) computes the opponent of p. This assumes that there are only two players and the set of all players is stored in the global variable Players.
End of explanation
def value(State, player):
if finished(State):
return utility(State, player)
Moves = next_states(State, player)
return value_list(Moves, player)
Explanation: The function value(State, player) takes two arguments:
- State is the current state of the game,
- player is a player.
The function value returns the value that the given State has for player if both players play their best game. This value is an element from the set ${-1, 0, 1}$.
* If player can force a win, then the return value is 1.
* If player can at best force a draw, then the return value is 0.
* If the opponent of player can force a win for herself, then the return value is -1.
The version shown here is deliberately kept pure; in a more efficient implementation this function would be memoized (see the timing comparison further below). Mathematically, the function value
is defined recursively:
- $\texttt{finished}(s) \rightarrow \texttt{value}(s, p) = \texttt{utility}(s, p)$
- $\neg \texttt{finished}(s) \rightarrow
\texttt{value}(s, p) = \max\bigl(\bigl{
-\texttt{value}(n, o) \bigm| n \in \texttt{nextStates}(s, p)
\bigr}\bigr)
$, where $o = \texttt{other}(p)$
End of explanation
def value_list(Moves, player, alpha=-1):
if Moves == []:
return alpha
o = other(player)
move_val = -value(Moves[0], o)
alpha = max(move_val, alpha)
return value_list(Moves[1:], player, alpha)
Explanation: The function value_list takes three arguments:
- Moves is a list of states. Each of these states results from a move
that player has made in a given state.
When value_list is called initially, this list is non-empty.
- player defines the player who has made the moves.
- alpha is a lower bound for the value of the state in which the moves have been
  made. Initially, alpha is $-1$, as we don't yet know how good or bad
  this initial state is.
End of explanation
def best_move(State, player):
NS = next_states(State, player)
bestVal = value(State, player)
BestMoves = [s for s in NS if -value(s, other(player)) == bestVal]
BestState = random.choice(BestMoves)
return bestVal, BestState
Explanation: The function best_move takes two arguments:
- State is the current state of the game,
- player is a player.
The function best_move returns a pair of the form $(v, s)$ where $s$ is a state and $v$ is the value of this state. The state $s$ is a state that is reached from State if player makes one of her optimal moves. In order to have some variation in the game, the function randomly chooses any of the optimal moves.
End of explanation
import IPython.display
Explanation: The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.
End of explanation
def play_game(canvas):
State = Start
while True:
firstPlayer = Players[0]
val, State = best_move(State, firstPlayer);
draw(State, canvas, f'For me, the game has the value {val}.')
if finished(State):
final_msg(State)
return
IPython.display.clear_output(wait=True)
State = get_move(State)
draw(State, canvas, '')
if finished(State):
IPython.display.clear_output(wait=True)
final_msg(State)
return
%run Tic-Tac-Toe-Bitboard.ipynb
Explanation: The function play_game plays a game on the given canvas. The game played is specified indirectly as follows:
- Start is a global variable defining the start state of the game.
- next_states is a function such that $\texttt{next_states}(s, p)$ computes the set of all possible states that can be reached from state $s$ if player $p$ is next to move.
- finished is a function such that $\texttt{finished}(s)$ is true for a state $s$ if the game is over in state $s$.
- utility is a function such that $\texttt{utility}(s, p)$ returns either -1, 0, or 1 in the terminal state $s$. We have that
- $\texttt{utility}(s, p)= -1$ iff the game is lost for player $p$ in state $s$,
- $\texttt{utility}(s, p)= 0$ iff the game is drawn, and
- $\texttt{utility}(s, p)= 1$ iff the game is won for player $p$ in state $s$.
End of explanation
import resource
%%time
val = value(Start, 0)
Explanation: With the game tic-tac-toe represented as lists and without memoization, computing the value of the start state takes 7.11 seconds.
If we use a bitboard instead, it takes 3.85 seconds. However, the bitboard truly shines when we use memoization:
* Representing states as bitboards and using memoization we need 808 kilobytes and the computation needs 49 milliseconds.
* Representing states as lists of lists and using memoization uses 7648 kilobytes and takes 296 milliseconds.
Observe that memoization accounts for a more than tenfold speedup.
End of explanation
val
Explanation: The start state has the value 0 as neither player can force a win.
End of explanation
canvas = create_canvas()
draw(Start, canvas, f'Current value of game for "X": {val}')
Explanation: Let's draw the board.
End of explanation
play_game(canvas)
Explanation: Now it's time to play. In the input window that will pop up later, enter your move in the format "row,col" with no space between row and column.
End of explanation |
8,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use case 1 - Validation, transformation and harvesting with the Ministry of Justice catalog
Case 1
Step1: Declaration of variables and paths
Step2: Validation of the xlsx file and transformation to json
Validation of the xlsx catalog
Step3: Transformation of the catalog from xlsx to json
Step4: Validation of the json catalog and harvesting
Validation of the json catalog
Instantiation of the DataJson class
Step5: True/False validation of the json catalog
Step6: Detailed validation of the json catalog
Step7: Harvesting
Generation of the dataset report file
Step8: Generation of the configuration file for the harvester
import arrow
import os, sys
sys.path.insert(0, os.path.abspath(".."))
from pydatajson import DataJson # lib and class
from pydatajson.readers import read_catalog # lib, module ... method. Reads the catalog (json or xlsx, local or URL, or a dict) and turns it into a Python dictionary
from pydatajson.writers import write_json_catalog
Explanation: Use case 1 - Validation, transformation and harvesting with the Ministry of Justice catalog
Case 1: a valid catalog
This test runs the full validation, transformation and harvesting process starting from an xlsx file that contains the metadata of the Ministry of Justice catalog.
Note: this is a known catalog, valid both in structure and in metadata. File used: catalogo-justicia.xlsx
Setup
Import of methods and classes
End of explanation
# fill in as appropriate
ORGANISMO = 'justicia'
catalogo_xlsx = os.path.join("archivos-tests", "excel-validos", "catalogo-justicia.xlsx")
# DO NOT MODIFY
# Create the required directory structure if it does not exist
if not os.path.isdir("archivos-generados"):
os.mkdir("archivos-generados")
for directorio in ["jsons", "reportes", "configuracion"]:
path = os.path.join("archivos-generados", directorio)
if not os.path.isdir(path):
os.mkdir(path)
# Declare some variables of interest
HOY = arrow.now().format('YYYY-MM-DD-HH_mm')
catalogo_a_json = os.path.join("archivos-generados","jsons","catalogo-{}-{}.json".format(ORGANISMO, HOY))
reporte_datasets = os.path.join("archivos-generados", "reportes", "reporte-catalogo-{}-{}.xlsx".format(ORGANISMO, HOY))
archivo_config_sin_reporte = os.path.join("archivos-generados", "configuracion", "archivo-config_-{}-{}-sinr.csv".format(ORGANISMO, HOY))
archivo_config_con_reporte = os.path.join("archivos-generados", "configuracion", "archivo-config-{}-{}-conr.csv".format(ORGANISMO, HOY))
Explanation: Declaration of variables and paths
End of explanation
catalogo = read_catalog(catalogo_xlsx)
# If you want to work with a remote file instead:
#catalogo = read_catalog("https://raw.githubusercontent.com/datosgobar/pydatajson/master/tests/samples/catalogo_justicia.json")
Explanation: Validation of the xlsx file and transformation to json
Validation of the xlsx catalog
End of explanation
write_json_catalog(catalogo, catalogo_a_json)
# write_json_catalog(catalog, target_file) writes a dict to a json file
Explanation: Transformation of the catalog from xlsx to json
End of explanation
dj = DataJson()
Explanation: Validation of the json catalog and harvesting
Validation of the json catalog
Instantiation of the DataJson class
End of explanation
dj.is_valid_catalog(catalogo)
Explanation: True/False validation of the json catalog
End of explanation
dj.validate_catalog(catalogo)
Explanation: Detailed validation of the json catalog
End of explanation
dj.generate_datasets_report(catalogo, harvest='valid',export_path=reporte_datasets)
# process the report: 0s and 1s
Explanation: Harvesting
Generation of the dataset report file
End of explanation
# using the report
dj.generate_harvester_config(harvest='report', report=reporte_datasets, export_path=archivo_config_con_reporte)
# without using the report
dj.generate_harvester_config(catalogs=catalogo, harvest='valid', export_path=archivo_config_sin_reporte)
#(catalogs=None, harvest=u'valid', report=None, export_path=None)
Explanation: Generation of the configuration file for the harvester
End of explanation |
8,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: モジュール、レイヤー、モデルの概要
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: TensorFlow におけるモデルとレイヤーの定義
ほとんどのモデルはレイヤーで構成されています。レイヤーは、再利用およびトレーニング可能な変数を持つ既知の数学的構造を持つ関数です。TensorFlow では、Keras や Sonnet といった、レイヤーとモデルの高位実装の多くは、同じ基本クラスの tf.Module に基づいて構築されています。
スカラーテンソルで動作する非常に単純な tf.Module の例を次に示します。
Step3: モジュールと(その延長としての)レイヤーは、「オブジェクト」のディープラーニング用語です。これらには、内部状態と、その状態を使用するメソッドがあります。
__ call__ は Python コーラブルのように動作する以外何も特別なことではないため、任意の関数を使用してモデルを呼び出すことができます。
ファインチューニング中のレイヤーと変数を凍結するなど、様々な理由で、変数をトレーニング対象とするかどうかを設定することができます。
注意
Step4: これは、モジュールで構成された 2 層線形レイヤーモデルの例です。
最初の高密度(線形)レイヤーは以下のとおりです。
Step5: 2 つのレイヤーインスタンスを作成して適用する完全なモデルは以下のとおりです。
Step6: tf.Module インスタンスは、それに割り当てられた tf.Variable または tf.Module インスタンスを再帰的に自動収集します。これにより、単一のモデルインスタンスで tf.Module のコレクションを管理し、モデル全体を保存して読み込むことができます。
Step7: 変数の作成を延期する
ここで、レイヤーへの入力サイズと出力サイズの両方を定義する必要があることに気付いたかもしれません。これは、w 変数が既知の形状を持ち、割り当てることができるようにするためです。
モジュールが特定の入力形状で最初に呼び出されるまで変数の作成を延期することにより、入力サイズを事前に指定する必要がありません。
Step8: この柔軟性のため、多くの場合、TensorFlow レイヤーは、出力の形状(tf.keras.layers.Dense)などを指定するだけで済みます。入出力サイズの両方を指定する必要はありません。
重みを保存する
tf.Module はチェックポイントと SavedModel の両方として保存できます。
チェックポイントは単なる重み(モジュールとそのサブモジュール内の変数のセットの値)です。
Step9: チェックポイントは、データ自体とメタデータのインデックスファイルの 2 種類のファイルで構成されます。インデックスファイルは、実際に保存されているものとチェックポイントの番号を追跡し、チェックポイントデータには変数値とその属性ルックアップパスが含まれています。
Step10: チェックポイントの内部を調べると、変数のコレクション全体が保存されており、変数を含む Python オブジェクト別に並べ替えられていることを確認できます。
Step11: 分散(マルチマシン)トレーニング中にシャーディングされる可能性があるため、番号が付けられています(「00000-of-00001」など)。ただし、この例の場合、シャードは 1 つしかありません。
モデルを再度読み込むと、Python オブジェクトの値が上書きされます。
Step12: 注意
Step13: 作成したモジュールは、前と全く同じように動作します。関数に渡される一意のシグネチャごとにグラフが作成されます。詳細については、グラフと関数の基礎ガイドをご覧ください。
Step14: TensorBoard のサマリー内でグラフをトレースすると、グラフを視覚化できます。
Step15: TensorBoard を起動して、トレースの結果を確認します。
Step16: SavedModel の作成
トレーニングが完了したモデルを共有するには、SavedModel の使用が推奨されます。SavedModel には関数のコレクションと重みのコレクションの両方が含まれています。
次のようにして、トレーニングしたモデルを保存することができます。
Step17: saved_model.pb ファイルは、関数型の tf.Graph を記述するプロトコルバッファです。
モデルとレイヤーは、それを作成したクラスのインスタンスを実際に作成しなくても、この表現から読み込めます。これは、大規模なサービスやエッジデバイスでのサービスなど、Python インタープリタがない(または使用しない)場合や、元の Python コードが利用できないか実用的でない場合に有用です。
モデルを新しいオブジェクトとして読み込みます。
Step18: 保存したモデルを読み込んで作成された new_model は、クラスを認識しない内部の TensorFlow ユーザーオブジェクトです。SequentialModule ではありません。
Step19: この新しいモデルは、すでに定義されている入力シグネチャで機能します。このように復元されたモデルにシグネチャを追加することはできません。
Step20: したがって、SavedModel を使用すると、tf.Module を使用して TensorFlow の重みとグラフを保存し、それらを再度読み込むことができます。
Keras モデルとレイヤー
ここまでは、Keras に触れずに説明してきましたが、tf.Module の上に独自の高位 API を構築することは可能です。
このセクションでは、Keras が tf.Module をどのように使用するかを説明します。Keras モデルの完全なユーザーガイドは、Keras ガイドをご覧ください。
Keras レイヤー
tf.keras.layers.Layer はすべての Keras レイヤーの基本クラスであり、tf.Module から継承します。
親を交換してから、__call__ を call に変更するだけで、モジュールを Keras レイヤーに変換できます。
Step21: Keras レイヤーには独自の __call__ があり、次のセクションで説明する手順を実行してから、call() を呼び出します。動作には違いはありません。
Step22: build ステップ
前述のように、多くの場合都合よく、入力形状が確定するまで変数の作成を延期できます。
Keras レイヤーには追加のライフサイクルステップがあり、レイヤーをより柔軟に定義することができます。このステップは、build() 関数で定義されます。
build は 1 回だけ呼び出され、入力形状で呼び出されます。通常、変数(重み)を作成するために使用されます。
上記の MyDense レイヤーを、入力のサイズに柔軟に合わせられるように書き換えることができます。
Step23: この時点では、モデルは構築されていないため、変数も存在しません。
Step24: 関数を呼び出すと、適切なサイズの変数が割り当てられます。
Step25: buildは 1 回しか呼び出されないため、入力形状がレイヤーの変数と互換性がない場合、入力は拒否されます。
Step26: Keras レイヤーには、次のような多くの追加機能があります。
オプションの損失
メトリクスのサポート
トレーニングと推論の使用を区別する、オプションの training 引数の組み込みサポート
Python でモデルのクローンを作成するための構成を正確に保存する get_config と <code>from_config</code> メソッド
詳細は、カスタムレイヤーとモデルに関する完全ガイドをご覧ください。
Keras モデル
モデルはネストされた Keras レイヤーとして定義できます。
ただし、Keras は tf.keras.Model と呼ばれるフル機能のモデルクラスも提供します。Keras モデルは tf.keras.layers.Layer を継承しているため、 Keras レイヤーと同じ方法で使用、ネスト、保存することができます。Keras モデルには、トレーニング、評価、読み込み、保存、および複数のマシンでのトレーニングを容易にする追加機能があります。
上記の SequentialModule をほぼ同じコードで定義できます。先ほどと同じように、__call__ をcall() に変換して、親を変更します。
Step27: 追跡変数やサブモジュールなど、すべて同じ機能を利用できます。
注意
Step28: 非常に Python 的なアプローチとして、tf.keras.Model をオーバーライドして TensorFlow モデルを構築することができます。ほかのフレームワークからモデルを移行する場合、これは非常に簡単な方法です。
モデルが既存のレイヤーと入力の単純な集合として構築されている場合は、モデルの再構築とアーキテクチャに関する追加機能を備えた Functional API を使用すると手間とスペースを節約できます。
以下は、Functional API を使用した同じモデルです。
Step29: ここでの主な違いは、入力形状が関数構築プロセスの一部として事前に指定されることです。この場合、input_shape 引数を完全に指定する必要がないため、一部の次元を None のままにしておくことができます。
注意:サブクラス化されたモデルでは、input_shape や InputLayer を指定する必要はありません。これらの引数とレイヤーは無視されます。
Keras モデルの保存
Keras モデルでは tf.Moduleと同じようにチェックポイントを設定できます。
Keras モデルはモジュールであるため、tf.saved_models.save() を使用して保存することもできます。ただし、Keras モデルには便利なメソッドやその他の機能があります。
Step30: このように簡単に、読み込み直すことができます。
Step31: また、Keras SavedModel は、メトリクス、損失、およびオプティマイザの状態も保存します。
再構築されたこのモデルを使用すると、同じデータで呼び出されたときと同じ結果が得られます。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from datetime import datetime
%load_ext tensorboard
Explanation: Introduction to modules, layers, and models
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/guide/intro_to_modules" class=""><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/intro_to_modules.ipynb" class=""><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/intro_to_modules.ipynb" class=""><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/intro_to_modules.ipynb" class=""><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
To do machine learning in TensorFlow, you need to define, save, and restore a model.
A model is, abstractly:
A function that computes something on tensors (a forward pass)
Some variables that can be updated in response to training
In this guide, you will look below the surface of Keras to see how TensorFlow models are defined, how TensorFlow collects variables and models, and how they are saved and restored.
Note: If you want to get started with Keras right away, see the collection of Keras guides.
Setup
End of explanation
class SimpleModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.a_variable = tf.Variable(5.0, name="train_me")
self.non_trainable_variable = tf.Variable(5.0, trainable=False, name="do_not_train_me")
def __call__(self, x):
return self.a_variable * x + self.non_trainable_variable
simple_module = SimpleModule(name="simple")
simple_module(tf.constant(5.0))
Explanation: Defining models and layers in TensorFlow
Most models are made of layers. Layers are functions with a known mathematical structure that can be reused and that have trainable variables. In TensorFlow, most high-level implementations of layers and models, such as Keras or Sonnet, are built on the same foundational class: tf.Module.
Here is an example of a very simple tf.Module that operates on a scalar tensor.
End of explanation
# All trainable variables
print("trainable variables:", simple_module.trainable_variables)
# Every variable
print("all variables:", simple_module.variables)
Explanation: Modules and, by extension, layers are deep-learning terminology for "objects": they have internal state, and methods that use that state.
There is nothing special about __call__ except that it acts like a Python callable; you can invoke your models with whatever functions you wish.
You can turn the trainability of variables on and off for any reason, including freezing layers and variables during fine-tuning.
Note: tf.Module is the base class for both tf.keras.layers.Layer and tf.keras.Model, so everything described here also applies to Keras. For historical compatibility reasons, Keras layers do not collect variables from modules, so your models should use only modules or only Keras layers. However, the methods shown below for inspecting variables are the same in either case.
By subclassing tf.Module, any tf.Variable or tf.Module instances assigned to this object's properties are automatically collected. This allows you to save and load variables, and also to create collections of tf.Modules.
End of explanation
class Dense(tf.Module):
def __init__(self, in_features, out_features, name=None):
super().__init__(name=name)
self.w = tf.Variable(
tf.random.normal([in_features, out_features]), name='w')
self.b = tf.Variable(tf.zeros([out_features]), name='b')
def __call__(self, x):
y = tf.matmul(x, self.w) + self.b
return tf.nn.relu(y)
Explanation: This is an example of a two-layer linear layer model made out of modules.
First, a dense (linear) layer:
End of explanation
class SequentialModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.dense_1 = Dense(in_features=3, out_features=3)
self.dense_2 = Dense(in_features=3, out_features=2)
def __call__(self, x):
x = self.dense_1(x)
return self.dense_2(x)
# You have made a model!
my_model = SequentialModule(name="the_model")
# Call it, with random results
print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]])))
Explanation: And here is the complete model, which makes two layer instances and applies them.
End of explanation
print("Submodules:", my_model.submodules)
for var in my_model.variables:
print(var, "\n")
Explanation: tf.Module instances automatically and recursively collect any tf.Variable or tf.Module instances assigned to them. This lets you manage collections of tf.Modules with a single model instance, and save and load whole models.
End of explanation
class FlexibleDenseModule(tf.Module):
# Note: No need for `in_features`
def __init__(self, out_features, name=None):
super().__init__(name=name)
self.is_built = False
self.out_features = out_features
def __call__(self, x):
# Create variables on first call.
if not self.is_built:
self.w = tf.Variable(
tf.random.normal([x.shape[-1], self.out_features]), name='w')
self.b = tf.Variable(tf.zeros([self.out_features]), name='b')
self.is_built = True
y = tf.matmul(x, self.w) + self.b
return tf.nn.relu(y)
# Used in a module
class MySequentialModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.dense_1 = FlexibleDenseModule(out_features=3)
self.dense_2 = FlexibleDenseModule(out_features=2)
def __call__(self, x):
x = self.dense_1(x)
return self.dense_2(x)
my_model = MySequentialModule(name="the_model")
print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]])))
Explanation: Waiting to create variables
You may have noticed that you have to define both input and output sizes to the layer. This is so the w variable has a known shape and can be allocated.
By deferring variable creation to the first time the module is called with a specific input shape, you do not need to specify the input size up front.
End of explanation
chkp_path = "my_checkpoint"
checkpoint = tf.train.Checkpoint(model=my_model)
checkpoint.write(chkp_path)
Explanation: This flexibility is why TensorFlow layers often only need to specify the shape of their outputs, such as in tf.keras.layers.Dense, rather than both the input and output size.
Saving weights
A tf.Module can be saved both as a checkpoint and as a SavedModel.
Checkpoints are just the weights (the values of the set of variables inside the module and its submodules).
End of explanation
!ls my_checkpoint*
Explanation: Checkpoints consist of two kinds of files: the data itself and an index file for metadata. The index file keeps track of what is actually saved and the numbering of checkpoints, while the checkpoint data contains the variable values and their attribute lookup paths.
End of explanation
tf.train.list_variables(chkp_path)
Explanation: You can look inside a checkpoint to confirm that the whole collection of variables is saved, sorted by the Python object that contains them.
End of explanation
new_model = MySequentialModule()
new_checkpoint = tf.train.Checkpoint(model=new_model)
new_checkpoint.restore("my_checkpoint")
# Should be the same result as above
new_model(tf.constant([[2.0, 2.0, 2.0]]))
Explanation: During distributed (multi-machine) training they can be sharded, which is why they are numbered (e.g., "00000-of-00001"). In this case, though, there is only one shard.
When you load the model back in, you overwrite the values in your Python object.
End of explanation
class MySequentialModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.dense_1 = Dense(in_features=3, out_features=3)
self.dense_2 = Dense(in_features=3, out_features=2)
@tf.function
def __call__(self, x):
x = self.dense_1(x)
return self.dense_2(x)
# You have made a model with a graph!
my_model = MySequentialModule(name="the_model")
Explanation: Note: Because checkpoints are at the heart of long training workflows, tf.checkpoint.CheckpointManager is a helper class that makes checkpoint management much easier. See the Training checkpoints guide for details.
Saving functions
TensorFlow can run models without the original Python objects, as demonstrated by TensorFlow Serving and TensorFlow Lite, and even when you download a trained model from TensorFlow Hub.
TensorFlow needs to know how to carry out the computations described in Python, but without the original code. To do this, you can make a graph, as described in the introduction to graphs and functions guide.
This graph contains the operations that implement the function.
You can define a graph in the model above by adding the @tf.function decorator to indicate that this code should run as a graph.
End of explanation
print(my_model([[2.0, 2.0, 2.0]]))
print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]))
Explanation: The module you have made works exactly the same as before. Each unique signature passed into the function creates a separate graph. See the introduction to graphs and functions guide for details.
End of explanation
# Set up logging.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = "logs/func/%s" % stamp
writer = tf.summary.create_file_writer(logdir)
# Create a new model to get a fresh trace
# Otherwise the summary will not see the graph.
new_model = MySequentialModule()
# Bracket the function call with
# tf.summary.trace_on() and tf.summary.trace_export().
tf.summary.trace_on(graph=True)
tf.profiler.experimental.start(logdir)
# Call only one tf.function when tracing.
z = print(new_model(tf.constant([[2.0, 2.0, 2.0]])))
with writer.as_default():
tf.summary.trace_export(
name="my_func_trace",
step=0,
profiler_outdir=logdir)
Explanation: You can visualize the graph by tracing it within a TensorBoard summary.
End of explanation
#docs_infra: no_execute
%tensorboard --logdir logs/func
Explanation: Launch TensorBoard to view the resulting trace.
End of explanation
tf.saved_model.save(my_model, "the_saved_model")
# Inspect the SavedModel in the directory
!ls -l the_saved_model
# The variables/ directory contains a checkpoint of the variables
!ls -l the_saved_model/variables
Explanation: Creating a SavedModel
The recommended way of sharing fully trained models is to use SavedModel. A SavedModel contains both a collection of functions and a collection of weights.
You can save the model you have just trained as follows.
End of explanation
new_model = tf.saved_model.load("the_saved_model")
Explanation: The saved_model.pb file is a protocol buffer describing the functional tf.Graph.
Models and layers can be loaded from this representation without actually making an instance of the class that created it. This is useful where you do not have (or want) a Python interpreter, such as serving at scale or on an edge device, or where the original Python code is not available or practical to use.
Load the model as a new object.
End of explanation
isinstance(new_model, SequentialModule)
Explanation: new_model, created by loading the saved model, is an internal TensorFlow user object without any of the class knowledge. It is not of type SequentialModule.
End of explanation
print(my_model([[2.0, 2.0, 2.0]]))
print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]))
Explanation: This new model works on the already-defined input signatures. You cannot add more signatures to a model restored like this.
End of explanation
class MyDense(tf.keras.layers.Layer):
# Adding **kwargs to support base Keras layer arguments
def __init__(self, in_features, out_features, **kwargs):
super().__init__(**kwargs)
# This will soon move to the build step; see below
self.w = tf.Variable(
tf.random.normal([in_features, out_features]), name='w')
self.b = tf.Variable(tf.zeros([out_features]), name='b')
def call(self, x):
y = tf.matmul(x, self.w) + self.b
return tf.nn.relu(y)
simple_layer = MyDense(name="simple", in_features=3, out_features=3)
Explanation: Thus, using SavedModel, you can save TensorFlow weights and graphs with tf.Module, and load them back again.
Keras models and layers
Up to this point there has been no mention of Keras; you can build your own high-level API on top of tf.Module.
This section examines how Keras uses tf.Module. A complete user guide to Keras models can be found in the Keras guide.
Keras layers
tf.keras.layers.Layer is the base class of all Keras layers, and it inherits from tf.Module.
You can convert a module into a Keras layer just by swapping out the parent and then changing __call__ to call.
End of explanation
simple_layer([[2.0, 2.0, 2.0]])
Explanation: Keras layers have their own __call__ that does some bookkeeping described in the next section and then calls call(). You should notice no change in functionality.
End of explanation
class FlexibleDense(tf.keras.layers.Layer):
# Note the added `**kwargs`, as Keras supports many arguments
def __init__(self, out_features, **kwargs):
super().__init__(**kwargs)
self.out_features = out_features
def build(self, input_shape): # Create the state of the layer (weights)
self.w = tf.Variable(
tf.random.normal([input_shape[-1], self.out_features]), name='w')
self.b = tf.Variable(tf.zeros([self.out_features]), name='b')
def call(self, inputs): # Defines the computation from inputs to outputs
return tf.matmul(inputs, self.w) + self.b
# Create the instance of the layer
flexible_dense = FlexibleDense(out_features=3)
Explanation: The build step
As noted, it's often convenient to wait to create variables until you are sure of the input shape.
Keras layers come with an extra lifecycle step that gives you more flexibility in how you define your layers. This step is defined in the build() function.
build is called exactly once, and it is called with the shape of the input. It's usually used to create variables (weights).
You can rewrite the MyDense layer above to be flexible to the size of its inputs:
End of explanation
flexible_dense.variables
Explanation: At this point, the model hasn't been built, so there are no variables.
End of explanation
# Call it, with predictably random results
print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])))
flexible_dense.variables
Explanation: Calling the function allocates appropriately-sized variables.
End of explanation
try:
print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0, 2.0]])))
except tf.errors.InvalidArgumentError as e:
print("Failed:", e)
Explanation: Since build is only called once, inputs will be rejected if the input shape is not compatible with the layer's variables.
End of explanation
class MySequentialModel(tf.keras.Model):
def __init__(self, name=None, **kwargs):
super().__init__(**kwargs)
self.dense_1 = FlexibleDense(out_features=3)
self.dense_2 = FlexibleDense(out_features=2)
def call(self, x):
x = self.dense_1(x)
return self.dense_2(x)
# You have made a Keras model!
my_sequential_model = MySequentialModel(name="the_model")
# Call it on a tensor, with random results
print("Model results:", my_sequential_model(tf.constant([[2.0, 2.0, 2.0]])))
Explanation: Keras layers have a lot more extra features, including:
Optional losses
Support for metrics
Built-in support for an optional training argument to differentiate between training and inference use
get_config and from_config methods that let you accurately store configurations, allowing model cloning in Python
Read the full guide to custom layers and models for details.
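The get_config / from_config contract mentioned above can be sketched in plain Python. This mirrors the pattern Keras layers follow, but it is not the actual Keras implementation, and ConfigurableLayer is an illustrative class, not part of Keras:

```python
# Plain-Python sketch of the get_config / from_config pattern;
# ConfigurableLayer is a hypothetical class used only for illustration.
class ConfigurableLayer:
    def __init__(self, units, name="layer"):
        self.units = units
        self.name = name

    def get_config(self):
        # Return everything needed to rebuild an equivalent object.
        return {"units": self.units, "name": self.name}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

layer = ConfigurableLayer(units=5, name="demo")
clone = ConfigurableLayer.from_config(layer.get_config())
```

In real Keras code you would typically call super().get_config() and update the returned dict with your own constructor arguments.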
Keras models
You can define your model as nested Keras layers.
However, Keras also provides a full-featured model class called tf.keras.Model. It inherits from tf.keras.layers.Layer, so a Keras model can be used, nested, and saved in the same way as Keras layers. Keras models come with extra functionality that makes them easy to train, evaluate, load, save, and even train on multiple machines.
You can define the SequentialModule from above with nearly identical code, again converting __call__ to call() and changing the parent.
End of explanation
my_sequential_model.variables
my_sequential_model.submodules
Explanation: All the same features are available, including tracking variables and submodules.
Note: To emphasize the note above, a raw tf.Module nested inside a Keras layer or model will not get its variables collected for training or saving. Instead, nest Keras layers inside of Keras layers.
End of explanation
inputs = tf.keras.Input(shape=[3,])
x = FlexibleDense(3)(inputs)
x = FlexibleDense(2)(x)
my_functional_model = tf.keras.Model(inputs=inputs, outputs=x)
my_functional_model.summary()
my_functional_model(tf.constant([[2.0, 2.0, 2.0]]))
Explanation: Overriding tf.keras.Model to build a TensorFlow model is a very Pythonic approach. If you are migrating models from other frameworks, this can be very straightforward.
If you are constructing models that are simple assemblages of existing layers and inputs, you can save time and space by using the Functional API, which comes with additional features around model reconstruction and architecture.
Here is the same model with the Functional API:
End of explanation
my_sequential_model.save("exname_of_file")
Explanation: The major difference here is that the input shape is specified up front as part of the functional construction process. The input_shape argument in this case does not have to be completely specified; you can leave some dimensions as None.
Note: You do not need to specify input_shape or an InputLayer in a subclassed model; these arguments and layers will be ignored.
Saving Keras models
Keras models can be checkpointed, and that will look the same as for tf.Module.
Keras models can also be saved with tf.saved_model.save(), as they are modules. However, Keras models have convenience methods and other functionality.
End of explanation
reconstructed_model = tf.keras.models.load_model("exname_of_file")
Explanation: Just as easily, they can be loaded back in.
End of explanation
reconstructed_model(tf.constant([[2.0, 2.0, 2.0]]))
Explanation: Keras SavedModels also save metric, loss, and optimizer states.
This reconstructed model can be used and will produce the same result when called on the same data.
End of explanation |
8,691 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
What is an efficient way of splitting a column into multiple rows using dask dataframe? For example, let's say I have a csv file which I read using dask to produce the following dask dataframe: | Problem:
import pandas as pd
df = pd.DataFrame([["A", "Z-Y"], ["B", "X"], ["C", "W-U-V"]], index=[1,2,3], columns=['var1', 'var2'])
def g(df):
    return df.join(pd.DataFrame(df.var2.str.split('-', expand=True).stack().reset_index(level=1, drop=True), columns=['var2 '])).\
        drop('var2', axis=1).rename(columns=str.strip).reset_index(drop=True)
result = g(df.copy()) |
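The core of the transformation — one output row per '-'-separated token, repeating the var1 value — can be seen in plain Python. With pandas >= 0.25 the same effect is available via Series.explode, and for an actual dask dataframe the pandas function can be applied per partition with map_partitions; both are mentioned here as hedged alternatives, not as part of the solution above:

```python
# Illustrative plain-Python version of the split-into-rows step.
rows = [("A", "Z-Y"), ("B", "X"), ("C", "W-U-V")]
# One (var1, token) pair per '-'-separated token in var2
split_rows = [(var1, token) for var1, var2 in rows for token in var2.split("-")]
```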
8,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Authentication
Step2: Install the h5py to prepare sample dataset, and the grpcio-tools for querying against the index.
Step3: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API, and Service Networking API.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step5: Otherwise, set your project ID here.
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Import the Vertex AI (unified) client library into your Python environment.
Step10: Prepare the Data
The GloVe dataset consists of a set of pre-trained embeddings. The embeddings are split into a "train" split, and a "test" split.
We will create a vector search index from the "train" split, and use the embedding vectors in the "test" split as query vectors to test the vector search index.
NOTE
Step11: Read the data into memory.
Step12: Save the train split in JSONL format.
Step13: Upload the training data to GCS.
Step14: Create Indexes
Create ANN Index (for Production Usage)
Step15: Create the ANN index configuration
Step16: Create Brute Force Index (for Ground Truth)
The brute force index uses a naive brute force method to find the nearest neighbors. This method is not fast or efficient. Hence brute force indices are not recommended for production usage. They are to be used to find the "ground truth" set of neighbors, so that the "ground truth" set can be used to measure recall of the indices being tuned for production usage. To ensure an apples-to-apples comparison, the distanceMeasureType, featureNormType, and dimensions of the brute force index should match those of the production indices being tuned.
Create the brute force index configuration
Step17: Update Indexes
Create incremental data file.
Step18: Copy the incremental data file to a new subdirectory.
Step19: Create update index request
Step20: Create an IndexEndpoint with VPC Network
Step21: Deploy Indexes
Deploy ANN Index
Step22: Deploy Brute Force Index
Step23: Create Online Queries
After you have built your indexes, you may query against the deployed index through the online querying gRPC API (Match service) within virtual machine instances from the same region (for example, 'us-central1' in this tutorial).
The way a client uses this gRPC API is by following these steps
Step24: Compile the protocol buffer, and then match_service_pb2.py and match_service_pb2_grpc.py are generated.
Step25: Obtain the Private Endpoint
Step26: Test your query
Step27: Batch Query
You can run multiple queries in a single RPC call using the BatchMatch API
Step28: Compute Recall
Use the deployed brute force index as the ground truth to calculate the recall of the ANN index
Step29: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also manually delete resources that you created by running the following code. | Python Code:
PROJECT_ID = "<your_project_id>" # @param {type:"string"}
NETWORK_NAME = "ucaip-haystack-vpc-network" # @param {type:"string"}
PEERING_RANGE_NAME = "ucaip-haystack-range"
# Create a VPC network
! gcloud compute networks create {NETWORK_NAME} --bgp-routing-mode=regional --subnet-mode=auto --project={PROJECT_ID}
# Add necessary firewall rules
! gcloud compute firewall-rules create {NETWORK_NAME}-allow-icmp --network {NETWORK_NAME} --priority 65534 --project {PROJECT_ID} --allow icmp
! gcloud compute firewall-rules create {NETWORK_NAME}-allow-internal --network {NETWORK_NAME} --priority 65534 --project {PROJECT_ID} --allow all --source-ranges 10.128.0.0/9
! gcloud compute firewall-rules create {NETWORK_NAME}-allow-rdp --network {NETWORK_NAME} --priority 65534 --project {PROJECT_ID} --allow tcp:3389
! gcloud compute firewall-rules create {NETWORK_NAME}-allow-ssh --network {NETWORK_NAME} --priority 65534 --project {PROJECT_ID} --allow tcp:22
# Reserve IP range
! gcloud compute addresses create {PEERING_RANGE_NAME} --global --prefix-length=16 --network={NETWORK_NAME} --purpose=VPC_PEERING --project={PROJECT_ID} --description="peering range for uCAIP Haystack."
# Set up peering with service networking
! gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --network={NETWORK_NAME} --ranges={PEERING_RANGE_NAME} --project={PROJECT_ID}
Explanation: <table align="left">
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/matching_engine/matching_engine_for_indexing.ipynb">
Run in Google Cloud Notebooks
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/matching_engine/matching_engine_for_indexing.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This example demonstrates how to use the GCP ANN Service. It is a high scale, low latency solution, to find similar vectors (or more specifically "embeddings") for a large corpus. Moreover, it is a fully managed offering, further reducing operational overhead. It is built upon Approximate Nearest Neighbor (ANN) technology developed by Google Research.
Dataset
The dataset used for this tutorial is the GloVe dataset.
Objective
In this notebook, you will learn how to create Approximate Nearest Neighbor (ANN) Index, query against indexes, and validate the performance of the index.
The steps performed include:
Create ANN Index and Brute Force Index
Create an IndexEndpoint with VPC Network
Deploy ANN Index and Brute Force Index
Perform online query
Compute recall
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
Prepare a VPC network. To reduce any network overhead that might lead to an unnecessary increase in latency, it is best to call the ANN endpoints from your VPC via a direct VPC Peering connection. The following section describes how to set up a VPC Peering connection if you don't have one. This is a one-time initial setup task. You can also reuse an existing VPC network and skip this section.
WARNING: The match service gRPC API (to create online queries against your deployed index) has to be executed in a Google Cloud Notebook instance that is created with the following requirements:
In the same region as where your ANN service is deployed (for example, if you set REGION = "us-central1" as same as the tutorial, the notebook instance has to be in us-central1).
Make sure you select the VPC network you created for ANN service (instead of using the "default" one). That is, you will have to create the VPC network below and then create a new notebook instance that uses that VPC.
If you run it in the colab or a Google Cloud Notebook instance in a different VPC network or region, the gRPC API will fail to peer the network (InactiveRPCError).
End of explanation
! pip install -U git+https://github.com/googleapis/python-aiplatform.git@main-test --user
Explanation: Authentication: run $ gcloud auth login in the Google Cloud Notebook terminal when you are logged out and need the credential again.
Installation
Download and install the latest (preview) version of the Vertex SDK for Python.
End of explanation
! pip install -U grpcio-tools --user
! pip install -U h5py --user
Explanation: Install the h5py to prepare sample dataset, and the grpcio-tools for querying against the index.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API, and Service Networking API.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "<your_project_id>" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
import grpc
import h5py
from google.cloud import aiplatform_v1beta1
from google.protobuf import struct_pb2
REGION = "us-central1"
ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
NETWORK_NAME = "ucaip-haystack-vpc-network" # @param {type:"string"}
AUTH_TOKEN = !gcloud auth print-access-token
PROJECT_NUMBER = !gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = PROJECT_NUMBER[0]
PARENT = "projects/{}/locations/{}".format(PROJECT_ID, REGION)
print("ENDPOINT: {}".format(ENDPOINT))
print("PROJECT_ID: {}".format(PROJECT_ID))
print("REGION: {}".format(REGION))
!gcloud config set project {PROJECT_ID}
!gcloud config set ai_platform/region {REGION}
Explanation: Import libraries and define constants
Import the Vertex AI (unified) client library into your Python environment.
End of explanation
! gsutil cp gs://cloud-samples-data/vertex-ai/matching_engine/glove-100-angular.hdf5 .
Explanation: Prepare the Data
The GloVe dataset consists of a set of pre-trained embeddings. The embeddings are split into a "train" split, and a "test" split.
We will create a vector search index from the "train" split, and use the embedding vectors in the "test" split as query vectors to test the vector search index.
NOTE: While the data split uses the term "train", these are pre-trained embeddings and thus are ready to be indexed for search. The terms "train" and "test" split are used just to be consistent with usual machine learning terminology.
Download the GloVe dataset.
End of explanation
# The number of nearest neighbors to be retrieved from database for each query.
k = 10
h5 = h5py.File("glove-100-angular.hdf5", "r")
train = h5["train"]
test = h5["test"]
train[0]
Explanation: Read the data into memory.
End of explanation
with open("glove100.json", "w") as f:
for i in range(len(train)):
f.write('{"id":"' + str(i) + '",')
f.write('"embedding":[' + ",".join(str(x) for x in train[i]) + "]}")
f.write("\n")
Explanation: Save the train split in JSONL format.
End of explanation
# NOTE: Everything in this GCS DIR will be DELETED before uploading the data.
! gsutil rm -rf {BUCKET_NAME}/*
! gsutil cp glove100.json {BUCKET_NAME}/glove100.json
! gsutil ls {BUCKET_NAME}
Explanation: Upload the training data to GCS.
End of explanation
index_client = aiplatform_v1beta1.IndexServiceClient(
client_options=dict(api_endpoint=ENDPOINT)
)
DIMENSIONS = 100
DISPLAY_NAME = "glove_100_1"
DISPLAY_NAME_BRUTE_FORCE = DISPLAY_NAME + "_brute_force"
Explanation: Create Indexes
Create ANN Index (for Production Usage)
End of explanation
treeAhConfig = struct_pb2.Struct(
fields={
"leafNodeEmbeddingCount": struct_pb2.Value(number_value=500),
"leafNodesToSearchPercent": struct_pb2.Value(number_value=7),
}
)
algorithmConfig = struct_pb2.Struct(
fields={"treeAhConfig": struct_pb2.Value(struct_value=treeAhConfig)}
)
config = struct_pb2.Struct(
fields={
"dimensions": struct_pb2.Value(number_value=DIMENSIONS),
"approximateNeighborsCount": struct_pb2.Value(number_value=150),
"distanceMeasureType": struct_pb2.Value(string_value="DOT_PRODUCT_DISTANCE"),
"algorithmConfig": struct_pb2.Value(struct_value=algorithmConfig),
}
)
metadata = struct_pb2.Struct(
fields={
"config": struct_pb2.Value(struct_value=config),
"contentsDeltaUri": struct_pb2.Value(string_value=BUCKET_NAME),
}
)
ann_index = {
"display_name": DISPLAY_NAME,
"description": "Glove 100 ANN index",
"metadata": struct_pb2.Value(struct_value=metadata),
}
ann_index = index_client.create_index(parent=PARENT, index=ann_index)
# Poll the operation until it's done successfully.
# This will take ~45 min.
while True:
if ann_index.done():
break
print("Poll the operation to create index...")
time.sleep(60)
INDEX_RESOURCE_NAME = ann_index.result().name
INDEX_RESOURCE_NAME
Explanation: Create the ANN index configuration:
Please read the documentation to understand the various configuration parameters that can be used to tune the index.
End of explanation
from google.protobuf import *
algorithmConfig = struct_pb2.Struct(
fields={"bruteForceConfig": struct_pb2.Value(struct_value=struct_pb2.Struct())}
)
config = struct_pb2.Struct(
fields={
"dimensions": struct_pb2.Value(number_value=DIMENSIONS),
"approximateNeighborsCount": struct_pb2.Value(number_value=150),
"distanceMeasureType": struct_pb2.Value(string_value="DOT_PRODUCT_DISTANCE"),
"algorithmConfig": struct_pb2.Value(struct_value=algorithmConfig),
}
)
metadata = struct_pb2.Struct(
fields={
"config": struct_pb2.Value(struct_value=config),
"contentsDeltaUri": struct_pb2.Value(string_value=BUCKET_NAME),
}
)
brute_force_index = {
"display_name": DISPLAY_NAME_BRUTE_FORCE,
"description": "Glove 100 index (brute force)",
"metadata": struct_pb2.Value(struct_value=metadata),
}
brute_force_index = index_client.create_index(parent=PARENT, index=brute_force_index)
# Poll the operation until it's done successfully.
# This will take ~45 min.
while True:
if brute_force_index.done():
break
print("Poll the operation to create index...")
time.sleep(60)
INDEX_BRUTE_FORCE_RESOURCE_NAME = brute_force_index.result().name
INDEX_BRUTE_FORCE_RESOURCE_NAME
Explanation: Create Brute Force Index (for Ground Truth)
The brute force index uses a naive brute force method to find the nearest neighbors. This method is not fast or efficient. Hence brute force indices are not recommended for production usage. They are to be used to find the "ground truth" set of neighbors, so that the "ground truth" set can be used to measure recall of the indices being tuned for production usage. To ensure an apples-to-apples comparison, the distanceMeasureType, featureNormType, and dimensions of the brute force index should match those of the production indices being tuned.
Create the brute force index configuration:
End of explanation
with open("glove100_incremental.json", "w") as f:
f.write(
'{"id":"0","embedding":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]}\n'
)
Explanation: Update Indexes
Create incremental data file.
End of explanation
! gsutil cp glove100_incremental.json {BUCKET_NAME}/incremental/glove100.json
Explanation: Copy the incremental data file to a new subdirectory.
End of explanation
metadata = struct_pb2.Struct(
fields={
"contentsDeltaUri": struct_pb2.Value(string_value=BUCKET_NAME + "/incremental"),
}
)
ann_index = {
"name": INDEX_RESOURCE_NAME,
"display_name": DISPLAY_NAME,
"description": "Glove 100 ANN index",
"metadata": struct_pb2.Value(struct_value=metadata),
}
ann_index = index_client.update_index(index=ann_index)
# Poll the operation until it's done successfully.
# This will take ~45 min.
while True:
if ann_index.done():
break
print("Poll the operation to update index...")
time.sleep(60)
INDEX_RESOURCE_NAME = ann_index.result().name
INDEX_RESOURCE_NAME
Explanation: Create update index request
End of explanation
index_endpoint_client = aiplatform_v1beta1.IndexEndpointServiceClient(
client_options=dict(api_endpoint=ENDPOINT)
)
VPC_NETWORK_NAME = "projects/{}/global/networks/{}".format(PROJECT_NUMBER, NETWORK_NAME)
VPC_NETWORK_NAME
index_endpoint = {
"display_name": "index_endpoint_for_demo",
"network": VPC_NETWORK_NAME,
}
r = index_endpoint_client.create_index_endpoint(
parent=PARENT, index_endpoint=index_endpoint
)
r.result()
INDEX_ENDPOINT_NAME = r.result().name
INDEX_ENDPOINT_NAME
Explanation: Create an IndexEndpoint with VPC Network
End of explanation
DEPLOYED_INDEX_ID = "ann_glove_deployed"
deploy_ann_index = {
"id": DEPLOYED_INDEX_ID,
"display_name": DEPLOYED_INDEX_ID,
"index": INDEX_RESOURCE_NAME,
}
r = index_endpoint_client.deploy_index(
index_endpoint=INDEX_ENDPOINT_NAME, deployed_index=deploy_ann_index
)
# Poll the operation until it's done successfully.
while True:
if r.done():
break
print("Poll the operation to deploy index...")
time.sleep(60)
r.result()
Explanation: Deploy Indexes
Deploy ANN Index
End of explanation
DEPLOYED_BRUTE_FORCE_INDEX_ID = "glove_brute_force_deployed"
deploy_brute_force_index = {
"id": DEPLOYED_BRUTE_FORCE_INDEX_ID,
"display_name": DEPLOYED_BRUTE_FORCE_INDEX_ID,
"index": INDEX_BRUTE_FORCE_RESOURCE_NAME,
}
r = index_endpoint_client.deploy_index(
index_endpoint=INDEX_ENDPOINT_NAME, deployed_index=deploy_brute_force_index
)
# Poll the operation until it's done successfully.
while True:
if r.done():
break
print("Poll the operation to deploy index...")
time.sleep(60)
r.result()
Explanation: Deploy Brute Force Index
End of explanation
%%writefile match_service.proto
syntax = "proto3";
package google.cloud.aiplatform.container.v1beta1;
import "google/rpc/status.proto";
// MatchService is a Google managed service for efficient vector similarity
// search at scale.
service MatchService {
// Returns the nearest neighbors for the query. If it is a sharded
// deployment, calls the other shards and aggregates the responses.
rpc Match(MatchRequest) returns (MatchResponse) {}
// Returns the nearest neighbors for batch queries. If it is a sharded
// deployment, calls the other shards and aggregates the responses.
rpc BatchMatch(BatchMatchRequest) returns (BatchMatchResponse) {}
}
// Parameters for a match query.
message MatchRequest {
// The ID of the DeployedIndex that will serve the request.
// This MatchRequest is sent to a specific IndexEndpoint of the Control API,
// as per the IndexEndpoint.network. That IndexEndpoint also has
// IndexEndpoint.deployed_indexes, and each such index has a
// DeployedIndex.id field.
// The value of the field below must equal one of the DeployedIndex.id
// fields of the IndexEndpoint that is being called for this request.
string deployed_index_id = 1;
// The embedding values.
repeated float float_val = 2;
// The number of nearest neighbors to be retrieved from database for
// each query. If not set, will use the default from
// the service configuration.
int32 num_neighbors = 3;
// The list of restricts.
repeated Namespace restricts = 4;
// Crowding is a constraint on a neighbor list produced by nearest neighbor
// search requiring that no more than some value k' of the k neighbors
// returned have the same value of crowding_attribute.
// It's used for improving result diversity.
// This field is the maximum number of matches with the same crowding tag.
int32 per_crowding_attribute_num_neighbors = 5;
// The number of neighbors to find via approximate search before
// exact reordering is performed. If not set, the default value from scam
// config is used; if set, this value must be > 0.
int32 approx_num_neighbors = 6;
// The fraction of the number of leaves to search, which can be set at
// query time to tune search performance. Increasing this value increases
// both search accuracy and latency. The value should be between 0.0 and
// 1.0. If not set or set to 0.0, the query uses the default value
// specified in
// NearestNeighborSearchConfig.TreeAHConfig.leaf_nodes_to_search_percent.
int32 leaf_nodes_to_search_percent_override = 7;
}
// Response of a match query.
message MatchResponse {
message Neighbor {
// The ids of the matches.
string id = 1;
// The distances of the matches.
double distance = 2;
}
// All its neighbors.
repeated Neighbor neighbor = 1;
}
// Parameters for a batch match query.
message BatchMatchRequest {
// Batched requests against one index.
message BatchMatchRequestPerIndex {
// The ID of the DeployedIndex that will serve the request.
string deployed_index_id = 1;
// The requests against the index identified by the above deployed_index_id.
repeated MatchRequest requests = 2;
// Selects the optimal batch size to use for low-level batching. Queries
// within each low level batch are executed sequentially while low level
// batches are executed in parallel.
// This field is optional, defaults to 0 if not set. A non-positive number
// disables low level batching, i.e. all queries are executed sequentially.
int32 low_level_batch_size = 3;
}
// The batch requests grouped by indexes.
repeated BatchMatchRequestPerIndex requests = 1;
}
// Response of a batch match query.
message BatchMatchResponse {
// Batched responses for one index.
message BatchMatchResponsePerIndex {
// The ID of the DeployedIndex that produced the responses.
string deployed_index_id = 1;
// The match responses produced by the index identified by the above
// deployed_index_id. This field is set only when the query against that
// index succeed.
repeated MatchResponse responses = 2;
// The status of response for the batch query identified by the above
// deployed_index_id.
google.rpc.Status status = 3;
}
// The batched responses grouped by indexes.
repeated BatchMatchResponsePerIndex responses = 1;
}
// Namespace specifies the rules for determining the datapoints that are
// eligible for each matching query, overall query is an AND across namespaces.
message Namespace {
// The string name of the namespace that this proto is specifying,
// such as "color", "shape", "geo", or "tags".
string name = 1;
// The allowed tokens in the namespace.
repeated string allow_tokens = 2;
// The denied tokens in the namespace.
// The denied tokens have exactly the same format as the token fields, but
// represents a negation. When a token is denied, then matches will be
// excluded whenever the other datapoint has that token.
//
// For example, if a query specifies {color: red, blue, !purple}, then that
// query will match datapoints that are red or blue, but if those points are
// also purple, then they will be excluded even if they are red/blue.
repeated string deny_tokens = 3;
}
Explanation: Create Online Queries
After you built your indexes, you may query against the deployed index through the online querying gRPC API (Match service) within the virtual machine instances from the same region (for example 'us-central1' in this tutorial).
The way a client uses this gRPC API is by folowing steps:
Write match_service.proto locally
Clone the repository that contains the dependencies of match_service.proto in the Terminal:
$ mkdir third_party && cd third_party
$ git clone https://github.com/googleapis/googleapis.git
Compile the protocol buffer (see below)
Obtain the index endpoint
Use a code-generated stub to make the call, passing the parameter values
End of explanation
! python -m grpc_tools.protoc -I=. --proto_path=third_party/googleapis --python_out=. --grpc_python_out=. match_service.proto
Explanation: Compile the protocol buffer, and then match_service_pb2.py and match_service_pb2_grpc.py are generated.
End of explanation
DEPLOYED_INDEX_SERVER_IP = (
list(index_endpoint_client.list_index_endpoints(parent=PARENT))[0]
.deployed_indexes[0]
.private_endpoints.match_grpc_address
)
DEPLOYED_INDEX_SERVER_IP
Explanation: Obtain the Private Endpoint:
End of explanation
import match_service_pb2
import match_service_pb2_grpc
channel = grpc.insecure_channel("{}:10000".format(DEPLOYED_INDEX_SERVER_IP))
stub = match_service_pb2_grpc.MatchServiceStub(channel)
# Test query
query = [
-0.11333,
0.48402,
0.090771,
-0.22439,
0.034206,
-0.55831,
0.041849,
-0.53573,
0.18809,
-0.58722,
0.015313,
-0.014555,
0.80842,
-0.038519,
0.75348,
0.70502,
-0.17863,
0.3222,
0.67575,
0.67198,
0.26044,
0.4187,
-0.34122,
0.2286,
-0.53529,
1.2582,
-0.091543,
0.19716,
-0.037454,
-0.3336,
0.31399,
0.36488,
0.71263,
0.1307,
-0.24654,
-0.52445,
-0.036091,
0.55068,
0.10017,
0.48095,
0.71104,
-0.053462,
0.22325,
0.30917,
-0.39926,
0.036634,
-0.35431,
-0.42795,
0.46444,
0.25586,
0.68257,
-0.20821,
0.38433,
0.055773,
-0.2539,
-0.20804,
0.52522,
-0.11399,
-0.3253,
-0.44104,
0.17528,
0.62255,
0.50237,
-0.7607,
-0.071786,
0.0080131,
-0.13286,
0.50097,
0.18824,
-0.54722,
-0.42664,
0.4292,
0.14877,
-0.0072514,
-0.16484,
-0.059798,
0.9895,
-0.61738,
0.054169,
0.48424,
-0.35084,
-0.27053,
0.37829,
0.11503,
-0.39613,
0.24266,
0.39147,
-0.075256,
0.65093,
-0.20822,
-0.17456,
0.53571,
-0.16537,
0.13582,
-0.56016,
0.016964,
0.1277,
0.94071,
-0.22608,
-0.021106,
]
request = match_service_pb2.MatchRequest()
request.deployed_index_id = DEPLOYED_INDEX_ID
for val in query:
request.float_val.append(val)
response = stub.Match(request)
response
Explanation: Test your query:
End of explanation
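Looking ahead to the recall computation in the final step: once both indexes are deployed, recall boils down to comparing neighbor-id sets, with the brute-force results treated as ground truth. A minimal sketch — the function name and list-of-lists inputs are illustrative, not taken from the tutorial:

```python
def recall(ann_neighbors, exact_neighbors):
    # ann_neighbors / exact_neighbors: one list of neighbor ids per query.
    # Recall = fraction of ground-truth neighbors the ANN index also returned.
    hits = sum(len(set(a) & set(e))
               for a, e in zip(ann_neighbors, exact_neighbors))
    total = sum(len(e) for e in exact_neighbors)
    return hits / total

# Two of three ground-truth neighbors recovered for the single query.
r = recall([["1", "2", "3"]], [["1", "2", "4"]])
```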
def get_request(embedding, deployed_index_id):
request = match_service_pb2.MatchRequest(num_neighbors=k)
request.deployed_index_id = deployed_index_id
for val in embedding:
request.float_val.append(val)
return request
# Test query
queries = [
[
-0.11333,
0.48402,
0.090771,
-0.22439,
0.034206,
-0.55831,
0.041849,
-0.53573,
0.18809,
-0.58722,
0.015313,
-0.014555,
0.80842,
-0.038519,
0.75348,
0.70502,
-0.17863,
0.3222,
0.67575,
0.67198,
0.26044,
0.4187,
-0.34122,
0.2286,
-0.53529,
1.2582,
-0.091543,
0.19716,
-0.037454,
-0.3336,
0.31399,
0.36488,
0.71263,
0.1307,
-0.24654,
-0.52445,
-0.036091,
0.55068,
0.10017,
0.48095,
0.71104,
-0.053462,
0.22325,
0.30917,
-0.39926,
0.036634,
-0.35431,
-0.42795,
0.46444,
0.25586,
0.68257,
-0.20821,
0.38433,
0.055773,
-0.2539,
-0.20804,
0.52522,
-0.11399,
-0.3253,
-0.44104,
0.17528,
0.62255,
0.50237,
-0.7607,
-0.071786,
0.0080131,
-0.13286,
0.50097,
0.18824,
-0.54722,
-0.42664,
0.4292,
0.14877,
-0.0072514,
-0.16484,
-0.059798,
0.9895,
-0.61738,
0.054169,
0.48424,
-0.35084,
-0.27053,
0.37829,
0.11503,
-0.39613,
0.24266,
0.39147,
-0.075256,
0.65093,
-0.20822,
-0.17456,
0.53571,
-0.16537,
0.13582,
-0.56016,
0.016964,
0.1277,
0.94071,
-0.22608,
-0.021106,
],
[
-0.99544,
-2.3651,
-0.24332,
-1.0321,
0.42052,
-1.1817,
-0.16451,
-1.683,
0.49673,
-0.27258,
-0.025397,
0.34188,
1.5523,
1.3532,
0.33297,
-0.0056677,
-0.76525,
0.49587,
1.2211,
0.83394,
-0.20031,
-0.59657,
0.38485,
-0.23487,
-1.0725,
0.95856,
0.16161,
-1.2496,
1.6751,
0.73899,
0.051347,
-0.42702,
0.16257,
-0.16772,
0.40146,
0.29837,
0.96204,
-0.36232,
-0.47848,
0.78278,
0.14834,
1.3407,
0.47834,
-0.39083,
-1.037,
-0.24643,
-0.75841,
0.7669,
-0.37363,
0.52741,
0.018563,
-0.51301,
0.97674,
0.55232,
1.1584,
0.73715,
1.3055,
-0.44743,
-0.15961,
0.85006,
-0.34092,
-0.67667,
0.2317,
1.5582,
1.2308,
-0.62213,
-0.032801,
0.1206,
-0.25899,
-0.02756,
-0.52814,
-0.93523,
0.58434,
-0.24799,
0.37692,
0.86527,
0.069626,
1.3096,
0.29975,
-1.3651,
-0.32048,
-0.13741,
0.33329,
-1.9113,
-0.60222,
-0.23921,
0.12664,
-0.47961,
-0.89531,
0.62054,
0.40869,
-0.08503,
0.6413,
-0.84044,
-0.74325,
-0.19426,
0.098722,
0.32648,
-0.67621,
-0.62692,
],
]
batch_request = match_service_pb2.BatchMatchRequest()
batch_request_ann = match_service_pb2.BatchMatchRequest.BatchMatchRequestPerIndex()
batch_request_brute_force = (
match_service_pb2.BatchMatchRequest.BatchMatchRequestPerIndex()
)
batch_request_ann.deployed_index_id = DEPLOYED_INDEX_ID
batch_request_brute_force.deployed_index_id = DEPLOYED_BRUTE_FORCE_INDEX_ID
for query in queries:
batch_request_ann.requests.append(get_request(query, DEPLOYED_INDEX_ID))
batch_request_brute_force.requests.append(
get_request(query, DEPLOYED_BRUTE_FORCE_INDEX_ID)
)
batch_request.requests.append(batch_request_ann)
batch_request.requests.append(batch_request_brute_force)
response = stub.BatchMatch(batch_request)
response
Explanation: Batch Query
You can run multiple queries in a single RPC call using the BatchMatch API:
End of explanation
def get_neighbors(embedding, deployed_index_id):
request = match_service_pb2.MatchRequest(num_neighbors=k)
request.deployed_index_id = deployed_index_id
for val in embedding:
request.float_val.append(val)
response = stub.Match(request)
return [int(n.id) for n in response.neighbor]
# This will take 5-10 min
recall = sum(
[
len(
set(get_neighbors(test[i], DEPLOYED_BRUTE_FORCE_INDEX_ID)).intersection(
set(get_neighbors(test[i], DEPLOYED_INDEX_ID))
)
)
for i in range(len(test))
]
) / (1.0 * len(test) * k)
print("Recall: {}".format(recall))
Explanation: Compute Recall
Use the deployed brute-force index as the ground truth to calculate the recall of the ANN index:
End of explanation
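The recall computation above reduces to a per-query set overlap between the brute-force neighbors and the ANN neighbors. As a plain-Python sketch (a hypothetical helper, not part of the Matching Engine client), once the neighbor ids have been fetched:

```python
def recall_at_k(ground_truth, retrieved):
    """Average fraction of ground-truth neighbors recovered across queries.

    Both arguments are lists with one entry per query; each entry is the
    list of neighbor ids returned for that query.
    """
    found = sum(len(set(t) & set(r)) for t, r in zip(ground_truth, retrieved))
    possible = sum(len(t) for t in ground_truth)
    return found / possible
```

With the brute-force results as `ground_truth` and the ANN results as `retrieved`, this matches the ratio printed above.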
index_client.delete_index(name=INDEX_RESOURCE_NAME)
index_client.delete_index(name=INDEX_BRUTE_FORCE_RESOURCE_NAME)
index_endpoint_client.delete_index_endpoint(name=INDEX_ENDPOINT_NAME)
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also manually delete resources that you created by running the following code.
End of explanation |
8,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 02
Step1: The moving average is defined as follows
Step2: Here's what we care to visualize
Step3: Time to compute the moving averages. We'll also run the merged op to track how the values change
Step4: Check out the visualization by running TensorBoard from the terminal | Python Code:
import tensorflow as tf
import numpy as np
raw_data = np.random.normal(10, 1, 100)
Explanation: Ch 02: Concept 08
Using TensorBoard
TensorBoard is a great way to visualize what's happening behind the code.
In this example, we'll loop through some numbers to improve our guess of the average value. Then we can visualize the results on TensorBoard.
Let's just set ourselves up with some data to work with:
End of explanation
alpha = tf.constant(0.05)
curr_value = tf.placeholder(tf.float32)
prev_avg = tf.Variable(0.)
update_avg = alpha * curr_value + (1 - alpha) * prev_avg
Explanation: The moving average is defined as follows:
End of explanation
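Outside TensorFlow, the same exponential moving average can be computed directly in plain Python, which makes the update rule easy to verify by hand (a sketch, not part of the original notebook):

```python
def exp_moving_average(values, alpha=0.05):
    # Applies avg <- alpha * value + (1 - alpha) * avg, starting from 0,
    # mirroring the update_avg op above.
    avg = 0.0
    out = []
    for v in values:
        avg = alpha * v + (1 - alpha) * avg
        out.append(avg)
    return out
```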
avg_hist = tf.summary.scalar("running_average", update_avg)
value_hist = tf.summary.scalar("incoming_values", curr_value)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs")
Explanation: Here's what we care to visualize:
End of explanation
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(len(raw_data)):
summary_str, curr_avg = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
sess.run(tf.assign(prev_avg, curr_avg))
print(raw_data[i], curr_avg)
writer.add_summary(summary_str, i)
Explanation: Time to compute the moving averages. We'll also run the merged op to track how the values change:
End of explanation
# Close the writer so the logs are flushed to disk
writer.close()
Explanation: Check out the visualization by running TensorBoard from the terminal:
$ tensorboard --logdir=path/to/logs
End of explanation |
8,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grove Gesture Example
This example shows how to use the
Grove gesture sensor on the board.
The gesture sensor can detect 10 gestures as follows
Step1: 1. Instantiate the sensor object
Step2: 2. Set speed
There are currently 2 modes available for users to use
Step3: 3. Read gestures
The following code will read 10 gestures within 30 seconds.
Try to change your gesture in front of the sensor and check the results. | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: Grove Gesture Example
This example shows how to use the
Grove gesture sensor on the board.
The gesture sensor can detect 10 gestures as follows:
| Raw value read by sensor | Gesture |
|--------------------------|--------------------|
| 0 | No detection |
| 1 | forward |
| 2 | backward |
| 3 | right |
| 4 | left |
| 5 | up |
| 6 | down |
| 7 | clockwise |
| 8 | counter-clockwise |
| 9 | wave |
For this notebook, a PYNQ Arduino shield is also required.
The grove gesture sensor is attached to the I2C interface on the shield.
This grove sensor should also work with PMOD interfaces on the board.
End of explanation
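The table above translates directly into a lookup that makes raw readings self-describing (the dict below is built from the table here; it is not part of the pynq API):

```python
GESTURE_NAMES = {
    0: "No detection",
    1: "forward",
    2: "backward",
    3: "right",
    4: "left",
    5: "up",
    6: "down",
    7: "clockwise",
    8: "counter-clockwise",
    9: "wave",
}

def gesture_name(raw_value):
    # Fall back to "unknown" for any value outside the documented range
    return GESTURE_NAMES.get(raw_value, "unknown")
```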
from pynq.lib.arduino import Grove_Gesture
from pynq.lib.arduino import ARDUINO_GROVE_I2C
sensor = Grove_Gesture(base.ARDUINO, ARDUINO_GROVE_I2C)
Explanation: 1. Instantiate the sensor object
End of explanation
sensor.set_speed(240)
Explanation: 2. Set speed
There are currently 2 modes available for users to use: far and near.
The corresponding fps are 120 and 240, respectively.
For more information, please refer to Grove gesture sensor.
End of explanation
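The two documented modes can be encoded once so the magic number passed to set_speed gets a name (a small convenience sketch; the fps values come from the mode descriptions above):

```python
# fps values for the two documented modes
FPS_BY_MODE = {"far": 120, "near": 240}

def fps_for_mode(mode):
    if mode not in FPS_BY_MODE:
        raise ValueError("mode must be 'far' or 'near'")
    return FPS_BY_MODE[mode]
```

Usage would then read sensor.set_speed(fps_for_mode("near")).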
from time import sleep
for i in range(10):
print(sensor.read_gesture())
sleep(3)
Explanation: 3. Read gestures
The following code will read 10 gestures within 30 seconds.
Try to change your gesture in front of the sensor and check the results.
End of explanation |
8,695 | Given the following text description, write Python code to implement the functionality described.
Description:
Sum of the series 1 + ( 1 + 2 ) + ( 1 + 2 + 3 ) + ( 1 + 2 + 3 + 4 ) + ... ... + ( 1 + 2 + 3 + 4 + ... + n )
Function to find sum of series ; Driver Code
| Python Code:
def sumOfSeries(n ) :
return sum([ i *(i + 1 ) / 2 for i in range(1 , n + 1 ) ] )
if __name__== "__main __":
n = 10
print(sumOfSeries(n ) )
|
8,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ClarityViz Tutorial
Overview
claritybase
Step1: 2) local histogram equilization
local histogram equilization is the process of enhancing low contrast images, and makes the pixel values easier to process
after you apply localeq, load the nii file generated from it
Step2: 3) Filter out the noise in the image to get only the points we actually want
for .calculatePoints
Step3: 4) Now that you've calculated the points, you have options.
generate_plotly_html() lets you generate your first graph, a basic plotting of the points using plotly. Fancier graphs come later, after edges are calculated and some other operations are performed.
savePoints() saves the points to a csv file
plot3d() calculates the edges between the nodes in the graph
graphmlconvert() uses the nodes and edges files generated in plot3d() to create a graphml file that can later be used for more advanced graphs
Step4: Sample output from generate_plotly_html()
Step5: Sample output from generate_density_graph() | Python Code:
from clarityviz import claritybase
token = 'Fear199'
source_directory = '/cis/home/alee/claritycontrol/code/data/raw'
# Initialize the claritybase object, the initial basis for all operations.
# After you initialize with a token and source directory, a folder will be created in your current directory
# with the token name, and all the output files will be stored there.
cb = claritybase(token, source_directory)
Explanation: ClarityViz Tutorial
Overview
claritybase: module loads all the initial img files, applies local histogram equilization, and generates a csv file for the points and enables plot generation and graphml generation
densitygraph: module does takes the graphml file generated from clarityviz and performs all the necessary caluclations and generates a graph with a color scheme representative of node density
atlasregiongraph: module takes a csv and generates a graph color coded by region according to the atlas
NOTE: the atlasregiongraph module is still currently in development and is not currently functional
I) Using claritybase
claritybase is used to produce the essential files. After you perform these operations, you can choose to choose to do the calculations and or display which ever graphs you want.
1) First you import the claritybase module and then you specify the token of the img file you're working with and the optional source directory.
If you're in the same directory as the img file you're working with, you only need to pass in the token.
This creates a directory with the same name as the given token in which all the intermediate and final result files will be output to.
End of explanation
cb.applyLocalEq()
cb.loadGeneratedNii()
Explanation: 2) local histogram equilization
local histogram equilization is the process of enhancing low contrast images, and makes the pixel values easier to process
after you apply localeq, load the nii file generated from it
End of explanation
cb.calculatePoints(threshold = 0.9, sample = 0.1)
Explanation: 3) Filter out the noise in the image to get only the points we actually want
for .calculatePoints:
threshold is the threshold of samples to filter out based on fraction of the maximum intensity.
e.g. let threshold = 0.9, if the sample's intensity is less than 0.9 * (maximum intensity) then it will be filtered out
sample is the fraction of the total number of samples to include in the graph.
End of explanation
cb.generate_plotly_html()
# savePoints generates the csv file of all the points in the graph.
cb.savePoints()
# plot3d calculates all the edges between the nodes.
cb.plot3d()
# graphmlconvert() creates a graphml file based on the nodes and edges file generated in plo3d.
cb.graphmlconvert()
Explanation: 4) Now that you've calculated the points, you have options.
generate_plotly_html() lets you generate your first graph, a basic plotting of the points using plotly. Fancier graphs come later, after edges are calculated and some other operations are performed.
savePoints() saves the points to a csv file
plot3d() calculates the edges between the nodes in the graph
graphmlconvert() uses the nodes and edges files generated in plot3d() to create a graphml file that can later be used for more advanced graphs
End of explanation
from clarityviz import densitygraph
# Uses the same token before, must be in the same directory as before.
dg = densitygraph(token)
# generates a 3d plotly with color representations of density
dg.generate_density_graph()
# generates a heat map, essentially a legend, telling how many edges a certain color represents,
# with number of edges representing how dense a certain node clustering may be.
dg.generate_heat_map()
Explanation: Sample output from generate_plotly_html():
Fear199:
https://neurodatadesign.github.io/seelviz/reveal/html/clarityvizhtmls/Fear199plotly.html
Cocaine174:
https://neurodatadesign.github.io/seelviz/reveal/html/Cocaine174localeq.html
Control181:
https://neurodatadesign.github.io/seelviz/reveal/html/Control181localeq.html
After you finish generating all the files from claritybase, you have two options for more advanced analysis: the densitygraph module and the atlasregiongraph module
II) Using densitygraph
Once the graphml file is generated with graphmlconvert(), you can use the densitygraph module.
The density graph module is used to visualize the density of nodes in the graph, i.e. the density of neurons, in a colored fashion.
End of explanation
from clarityviz import atlasregiongraph
regiongraph = atlasregiongraph(token)
regiongraph.generate_atlas_region_graph()
Explanation: Sample output from generate_density_graph():
Fear199:
https://neurodatadesign.github.io/seelviz/reveal/html/clarityvizhtmls/Fear199_density.html
Cocaine174:
https://neurodatadesign.github.io/seelviz/reveal/html/graphmlhtmls/Cocaine174localeq.5000.graphml.html
Sample output from generate_heat_map():
Fear199: https://neurodatadesign.github.io/seelviz/reveal/html/clarityvizhtmls/Fear199heatmap.html
Cocaine174:
https://neurodatadesign.github.io/seelviz/reveal/html/graphmlhtmls/Cocaine174localeq.5000.graphmlheatmap.html
III) Using atlasregiongraph
Once the csv file is generated with savePoints(), we can use the atlasregiongraph module.
This module creates a graph color coded to the different regions of the brain according to the atlas.
End of explanation |
8,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Temporary notebook to get the parameters for the MICE sims for the data ismael sent me.
Step1: mean number of central/satellites per bin. read with pd.read_csv(‘hod_redmagicMICE.csv’, sep=' ’). First columm is the bins in Mhalo, and then columm are n_cen/n_sat + redshift bin + catalog. Where catalog are hd=high density , hl=high luminosity, rl=higher luminosity.
Step2: parameters for function i sent before. read with pd.read_csv(‘hod_redmagicMICE_fit.csv’, sep=' ’). | Python Code:
import pandas as pd
import numpy as np
import scipy.special as ss  # the snippet used ss.erf without importing ss; erf lives in scipy.special

# fit given by Andres
# CENTRALS
def f_cen(logmhalo, logmmin, siglogm, fmaxcen, fmincen, k, logmdrop):
    ncen = 0.5 * (1. + ss.erf((logmhalo[0, :] - logmmin) / siglogm))
    ncen = fmaxcen * ncen
    ncen = ncen * (1.0 - (1.0 - fmincen / fmaxcen)
                   / (1.0 + 10**((2.0 / k) * (logmhalo[0, :] - logmdrop))))
    return np.log10(ncen)

# SATELLITES
def f_sat(logmhalo, logmmin, siglogm, logm1, alpha):
    # param_cen (the fitted central-occupation parameters) is assumed to be
    # defined elsewhere in the notebook
    nsat = 0.5 * (1. + ss.erf((logmhalo[0, :] - logmmin) / siglogm))
    # nsat = 0.5 * (1. + ss.erf((logmhalo[0, :] - param_cen[0]) / param_cen[1]))
    nsat = param_cen[2] * nsat
    nsat = nsat * (10**logmhalo[0, :] / 10**logm1)**alpha
    return np.log10(nsat)
Explanation: Temporary notebook to get the parameters for the MICE sims for the data ismael sent me.
End of explanation
mean_gal_per_bin = pd.read_csv('hod_redmagicMICE.csv', sep=' ')
mean_gal_per_bin
Explanation: mean number of central/satellites per bin. read with pd.read_csv(‘hod_redmagicMICE.csv’, sep=' ’). First columm is the bins in Mhalo, and then columm are n_cen/n_sat + redshift bin + catalog. Where catalog are hd=high density , hl=high luminosity, rl=higher luminosity.
End of explanation
hod_fit = pd.read_csv('hod_redmagicMICE_fit.csv', sep=' ')
hod_fit
Explanation: parameters for function i sent before. read with pd.read_csv(‘hod_redmagicMICE_fit.csv’, sep=' ’).
End of explanation |
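For sanity-checking values read from hod_fit, the erf step at the heart of f_cen can be evaluated with the standard library (a simplified sketch with made-up parameter values; the full central model also includes the luminosity-drop factor):

```python
import math

def ncen_step(logmhalo, logmmin, siglogm):
    # Central-occupation step: 0.5 * (1 + erf((logM - logMmin) / siglogm)).
    # Rises from 0 to 1 around logmmin, with width set by siglogm.
    return 0.5 * (1.0 + math.erf((logmhalo - logmmin) / siglogm))
```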
8,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Survival Analysis
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.
In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.
Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.
As examples, we'll consider two applications that are a little less serious than life and death
Step2: As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
Step3: The result is an object that represents the distribution.
Here's what the Weibull CDF looks like with those parameters.
Step4: actual_dist provides rvs, which we can use to generate a random sample from this distribution.
Step5: So, given the parameters of the distribution, we can generate a sample.
Now let's see if we can go the other way
Step6: And a uniform prior for $k$
Step7: I'll use make_joint to make a joint prior distribution for the two parameters.
Step8: The result is a DataFrame that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.
Now I'll use meshgrid to make a 3-D mesh with $\lambda$ on the first axis (axis=0), $k$ on the second axis (axis=1), and the data on the third axis (axis=2).
Step9: Now we can use weibull_dist to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
Step10: The likelihood of the data is the product of the probability densities along axis=2.
Step11: Now we can compute the posterior distribution in the usual way.
Step13: The following function encapsulates these steps.
It takes a joint prior distribution and the data, and returns a joint posterior distribution.
Step14: Here's how we use it.
Step15: And here's a contour plot of the joint posterior distribution.
Step16: It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.
And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8.
Marginal Distributions
To be more precise about these ranges, we can extract the marginal distributions
Step17: And compute the posterior means and 90% credible intervals.
Step18: The vertical gray line show the actual value of $\lambda$.
Here's the marginal posterior distribution for $k$.
Step19: The posterior distributions are wide, which means that with only 10 data points we can't estimated the parameters precisely.
But for both parameters, the actual value falls in the credible interval.
Step20: Incomplete Data
In the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know).
But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future.
As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted.
Some dogs might be snapped up immediately; others might have to wait longer.
The people who operate the shelter might want to make inferences about the distribution of these residence times.
Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval.
I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
Step21: Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.
We can generate a sample from that distribution like this
Step22: I'll use these values to construct a DataFrame that contains the arrival and departure times for each dog, called start and end.
Step23: For display purposes, I'll sort the rows of the DataFrame by arrival time.
Step24: Notice that several of the lifelines extend past the observation window of 8 weeks.
So if we observed this system at the beginning of Week 8, we would have incomplete information.
Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.
I'll simulate this incomplete data by identifying the lifelines that extend past the observation window
Step25: censored is a Boolean Series that is True for lifelines that extend past Week 8.
Data that is not available is sometimes called "censored" in the sense that it is hidden from us.
But in this case it is hidden because we don't know the future, not because someone is censoring it.
For the lifelines that are censored, I'll modify end to indicate when they are last observed and status to indicate that the observation is incomplete.
Step27: Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
Step28: And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
Step29: What we have simulated is the data that would be available at the beginning of Week 8.
Using Incomplete Data
Now, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.
First I'll split the data into two sets
Step30: For the complete data, we can use update_weibull, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
Step32: For the incomplete data, we have to think a little harder.
At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than T.
And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds T.
The following function is identical to update_weibull except that it uses sf, which computes the survival function, rather than pdf.
Step33: Here's the update with the incomplete data.
Step34: And here's what the joint posterior distribution looks like after both updates.
Step35: Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.
We can see that more clearly by looking at the marginal distributions.
Step36: Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
Step37: The distribution with some incomplete data is substantially wider.
As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.
That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.
If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.
Here's the posterior marginal distribution for $k$
Step38: In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.
In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.
In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.
This example is based on data I generated; in the next section we'll do a similar analysis with real data.
Light Bulbs
In 2007 researchers ran an experiment to characterize the distribution of lifetimes for light bulbs.
Here is their description of the experiment
Step39: We can load the data into a DataFrame like this
Step40: Column h contains the times when bulbs failed in hours; Column f contains the number of bulbs that failed at each time.
We can represent these values and frequencies using a Pmf, like this
Step41: Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously.
The average lifetime is about 1400 h.
Step42: Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.
Again, I'll start with uniform priors for $\lambda$ and $k$
Step43: For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.
They will run faster with fewer values, but the results will be less precise.
As usual, we can use make_joint to make the prior joint distribution.
Step44: Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.
We can use np.repeat to transform the data.
Step45: Now we can use update_weibull to do the update.
Step46: Here's what the posterior joint distribution looks like
Step47: To summarize this joint posterior distribution, we'll compute the posterior mean lifetime.
Posterior Means
To compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
Step48: Now for each pair of parameters we'll use weibull_dist to compute the mean.
Step49: The result is an array with the same dimensions as the joint distribution.
Now we need to weight each mean with the corresponding probability from the joint posterior.
Step50: Finally we compute the sum of the weighted means.
Step52: Based on the posterior distribution, we think the mean lifetime is about 1413 hours.
The following function encapsulates these steps
Step54: Incomplete Information
The previous update was not quite right, because it assumed each light bulb died at the instant we observed it.
According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.
It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
Step55: The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.
Here's how we run the update.
Step56: And here are the results.
Step57: Visually this result is almost identical to what we got using the PDF.
And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.
To see whether it makes any difference at all, let's check the posterior means.
Step58: When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.
And that makes sense
Step59: If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
Step60: And here's what it looks like.
Step61: But that's based on the assumption that we know $\lambda$ and $k$, and we don't.
Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.
So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.
We can use make_mixture to compute the posterior predictive distribution.
It doesn't work with joint distributions, but we can convert the DataFrame that represents a joint distribution to a Series, like this
Step62: The result is a Series with a MultiIndex that contains two "levels"
Step63: Now we can use make_mixture, passing as parameters the posterior probabilities in posterior_series and the sequence of binomial distributions in pmf_seq.
Step64: Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
Step65: The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs.
Summary
This chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.
We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways
Step67: Exercise
Step68: Now we need some data.
The following cell downloads data I collected from the National Oceanic and Atmospheric Administration (NOAA) for Seattle, Washington in May 2020.
Step69: Now we can load it into a DataFrame
Step70: I'll make a Boolean Series to indicate which days it rained.
Step71: And select the total rainfall on the days it rained.
Step72: Here's what the CDF of the data looks like.
Step73: The maximum is 1.14 inches of rain in one day.
To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model.
I suggest you proceed in the following steps | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Survival Analysis
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
Explanation: This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.
In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.
Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.
As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted.
To describe these "survival times", we'll use the Weibull distribution.
The Weibull Distribution
The Weibull distribution is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range.
SciPy provides several versions of the Weibull distribution; the one we'll use is called weibull_min.
To make the interface consistent with our notation, I'll wrap it in a function that takes as parameters $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape.
End of explanation
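As a quick sanity check on this wrapper (a side sketch, not part of the original analysis), the mean of a Weibull distribution has the closed form $\lambda \, \Gamma(1 + 1/k)$, which should agree with the mean reported by the SciPy object:

```python
from math import gamma as gamma_func

from scipy.stats import weibull_min


def weibull_dist(lam, k):
    # Same wrapper as in the text: lam is the scale, k is the shape
    return weibull_min(k, scale=lam)


lam, k = 3, 0.8
closed_form = lam * gamma_func(1 + 1 / k)  # lambda * Gamma(1 + 1/k)
scipy_mean = weibull_dist(lam, k).mean()
assert abs(closed_form - scipy_mean) < 1e-9
```

This identity is also a handy way to choose plausible parameters: if you expect lifetimes around some mean, you can solve for $\lambda$ given $k$.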
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
Explanation: As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
End of explanation
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time',
ylabel='CDF',
title='CDF of a Weibull distribution')
Explanation: The result is an object that represents the distribution.
Here's what the Weibull CDF looks like with those parameters.
End of explanation
np.random.seed(17)
data = actual_dist.rvs(10)
data
Explanation: actual_dist provides rvs, which we can use to generate a random sample from this distribution.
End of explanation
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
Explanation: So, given the parameters of the distribution, we can generate a sample.
Now let's see if we can go the other way: given the sample, we'll estimate the parameters.
Here's a uniform prior distribution for $\lambda$:
End of explanation
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
Explanation: And a uniform prior for $k$:
End of explanation
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
Explanation: I'll use make_joint to make a joint prior distribution for the two parameters.
End of explanation
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
Explanation: The result is a DataFrame that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.
Now I'll use meshgrid to make a 3-D mesh with $\lambda$ on the first axis (axis=0), $k$ on the second axis (axis=1), and the data on the third axis (axis=2).
End of explanation
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
Explanation: Now we can use weibull_dist to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
End of explanation
likelihood = densities.prod(axis=2)
likelihood.sum()
Explanation: The likelihood of the data is the product of the probability densities along axis=2.
End of explanation
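One caveat worth noting (an aside with hypothetical numbers, not part of the text's computation): with many more data points, the product of densities along axis=2 can underflow to zero in floating point. The usual remedy is to sum log-densities instead, which keeps relative likelihoods comparable:

```python
import numpy as np
from scipy.stats import weibull_min

# 1000 simulated lifetimes from the same Weibull(lam=3, k=0.8) model
data = weibull_min(0.8, scale=3).rvs(size=1000, random_state=17)

# The direct product of 1000 small densities underflows toward 0.0 ...
direct = weibull_min(0.8, scale=3).pdf(data).prod()

# ... but sums of log-densities stay finite, so different parameter
# settings can still be compared by relative likelihood.
ll_true = weibull_min(0.8, scale=3).logpdf(data).sum()
ll_other = weibull_min(0.8, scale=1).logpdf(data).sum()
assert direct < 1e-300
assert np.isfinite(ll_true) and ll_true > ll_other
```

With only 10 data points, as in this chapter, the direct product is fine.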
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
Explanation: Now we can compute the posterior distribution in the usual way.
End of explanation
def update_weibull(prior, data):
    """Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
Explanation: The following function encapsulates these steps.
It takes a joint prior distribution and the data, and returns a joint posterior distribution.
End of explanation
posterior = update_weibull(prior, data)
Explanation: Here's how we use it.
End of explanation
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
Explanation: And here's a contour plot of the joint posterior distribution.
End of explanation
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
Explanation: It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.
And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8.
Marginal Distributions
To be more precise about these ranges, we can extract the marginal distributions:
End of explanation
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
Explanation: And compute the posterior means and 90% credible intervals.
End of explanation
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
Explanation: The vertical gray line shows the actual value of $\lambda$.
Here's the marginal posterior distribution for $k$.
End of explanation
print(lam, posterior_lam.credible_interval(0.9))
print(k, posterior_k.credible_interval(0.9))
Explanation: The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely.
But for both parameters, the actual value falls in the credible interval.
End of explanation
np.random.seed(19)
start = np.random.uniform(0, 8, size=10)
start
Explanation: Incomplete Data
In the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know).
But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future.
As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted.
Some dogs might be snapped up immediately; others might have to wait longer.
The people who operate the shelter might want to make inferences about the distribution of these residence times.
Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval.
I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
End of explanation
np.random.seed(17)
duration = actual_dist.rvs(10)
duration
Explanation: Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.
We can generate a sample from that distribution like this:
End of explanation
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
Explanation: I'll use these values to construct a DataFrame that contains the arrival and departure times for each dog, called start and end.
End of explanation
obs = obs.sort_values(by='start', ignore_index=True)
obs
Explanation: For display purposes, I'll sort the rows of the DataFrame by arrival time.
End of explanation
censored = obs['end'] > 8
Explanation: Notice that several of the lifelines extend past the observation window of 8 weeks.
So if we observed this system at the beginning of Week 8, we would have incomplete information.
Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.
I'll simulate this incomplete data by identifying the lifelines that extend past the observation window:
End of explanation
obs['status'] = 1
obs.loc[censored, 'end'] = 8
obs.loc[censored, 'status'] = 0
Explanation: censored is a Boolean Series that is True for lifelines that extend past Week 8.
Data that is not available is sometimes called "censored" in the sense that it is hidden from us.
But in this case it is hidden because we don't know the future, not because someone is censoring it.
For the lifelines that are censored, I'll modify end to indicate when they are last observed and status to indicate that the observation is incomplete.
End of explanation
def plot_lifelines(obs):
    """Plot a line for each observation.

    obs: DataFrame
    """
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index',
title='Lifelines showing censored and uncensored observations')
plt.gca().invert_yaxis()
plot_lifelines(obs)
Explanation: Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
End of explanation
obs['T'] = obs['end'] - obs['start']
Explanation: And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
End of explanation
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
data1
data2
Explanation: What we have simulated is the data that would be available at the beginning of Week 8.
Using Incomplete Data
Now, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.
First I'll split the data into two sets: data1 contains residence times for dogs whose arrival and departure times are known; data2 contains incomplete residence times for dogs who were not adopted during the observation interval.
End of explanation
posterior1 = update_weibull(prior, data1)
Explanation: For the complete data, we can use update_weibull, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
End of explanation
def update_weibull_incomplete(prior, data):
    """Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# evaluate the survival function
probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = probs.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
Explanation: For the incomplete data, we have to think a little harder.
At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than T.
And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds T.
The following function is identical to update_weibull except that it uses sf, which computes the survival function, rather than pdf.
End of explanation
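By definition, the survival function is the complement of the CDF, $\mathrm{sf}(t) = 1 - \mathrm{cdf}(t)$; a quick numerical check (a sketch using the example parameters from earlier):

```python
import numpy as np
from scipy.stats import weibull_min

dist = weibull_min(0.8, scale=3)  # k=0.8, lam=3, as in the earlier example
ts = np.linspace(0.1, 10, 50)

# P(T > t) computed both ways should agree
assert np.allclose(dist.sf(ts), 1 - dist.cdf(ts))
```

SciPy computes sf directly, which is more accurate than 1 - cdf when the survival probability is very small.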
posterior2 = update_weibull_incomplete(posterior1, data2)
Explanation: Here's the update with the incomplete data.
End of explanation
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
Explanation: And here's what the joint posterior distribution looks like after both updates.
End of explanation
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
Explanation: Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.
We can see that more clearly by looking at the marginal distributions.
End of explanation
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
Explanation: Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
End of explanation
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
Explanation: The distribution with some incomplete data is substantially wider.
As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.
That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.
If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.
Here's the posterior marginal distribution for $k$:
End of explanation
download('https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv')
Explanation: In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.
In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.
In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.
This example is based on data I generated; in the next section we'll do a similar analysis with real data.
Light Bulbs
In 2007 researchers ran an experiment to characterize the distribution of lifetimes for light bulbs.
Here is their description of the experiment:
An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m.
The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed.
End of explanation
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
Explanation: We can load the data into a DataFrame like this:
End of explanation
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
Explanation: Column h contains the times when bulbs failed in hours; Column f contains the number of bulbs that failed at each time.
We can represent these values and frequencies using a Pmf, like this:
End of explanation
pmf_bulb.mean()
Explanation: Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously.
The average lifetime is about 1400 h.
End of explanation
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
Explanation: Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.
Again, I'll start with uniform priors for $\lambda$ and $k$:
End of explanation
prior_bulb = make_joint(prior_lam, prior_k)
Explanation: For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.
They will run faster with fewer values, but the results will be less precise.
As usual, we can use make_joint to make the prior joint distribution.
End of explanation
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
Explanation: Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.
We can use np.repeat to transform the data.
End of explanation
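If np.repeat with a count array is unfamiliar, here is a tiny example with made-up numbers (not from the dataset): each value is repeated according to its count, yielding one entry per bulb.

```python
import numpy as np

times = np.array([840, 852, 936])   # hypothetical failure times (hours)
counts = np.array([2, 1, 3])        # bulbs that failed at each time

lifetimes = np.repeat(times, counts)
# -> [840, 840, 852, 936, 936, 936]: one entry per bulb
assert list(lifetimes) == [840, 840, 852, 936, 936, 936]
assert len(lifetimes) == counts.sum()
```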
posterior_bulb = update_weibull(prior_bulb, data_bulb)
Explanation: Now we can use update_weibull to do the update.
End of explanation
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
Explanation: Here's what the posterior joint distribution looks like:
End of explanation
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
Explanation: To summarize this joint posterior distribution, we'll compute the posterior mean lifetime.
Posterior Means
To compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
End of explanation
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
Explanation: Now for each pair of parameters we'll use weibull_dist to compute the mean.
End of explanation
prod = means * posterior_bulb
Explanation: The result is an array with the same dimensions as the joint distribution.
Now we need to weight each mean with the corresponding probability from the joint posterior.
End of explanation
prod.to_numpy().sum()
Explanation: Finally we compute the sum of the weighted means.
End of explanation
def joint_weibull_mean(joint):
    """Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
Explanation: Based on the posterior distribution, we think the mean lifetime is about 1413 hours.
The following function encapsulates these steps:
End of explanation
def update_weibull_between(prior, data, dt=12):
    """Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
cdf1 = dist.cdf(data_mesh)
    cdf2 = dist.cdf(data_mesh - dt)
likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
Explanation: Incomplete Information
The previous update was not quite right, because it assumed each light bulb died at the instant we observed it.
According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.
It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
End of explanation
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
Explanation: The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.
Here's how we run the update.
End of explanation
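As a check on this interval-likelihood idea (an aside, with plausible parameter values for bulbs), the CDF difference over an interval should agree with numerically integrating the PDF across the same interval:

```python
from scipy.integrate import quad
from scipy.stats import weibull_min

dist = weibull_min(4.25, scale=1550)  # plausible k and lam for light bulbs
t, dt = 1000, 12

interval_prob = dist.cdf(t) - dist.cdf(t - dt)
integral, _ = quad(dist.pdf, t - dt, t)
assert abs(interval_prob - integral) < 1e-8
```

The CDF difference is exact and much cheaper than quadrature, which is why the update function uses it.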
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
Explanation: And here are the results.
End of explanation
joint_weibull_mean(posterior_bulb)
joint_weibull_mean(posterior_bulb2)
Explanation: Visually this result is almost identical to what we got using the PDF.
And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.
To see whether it makes any difference at all, let's check the posterior means.
End of explanation
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
Explanation: When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.
And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval.
Posterior Predictive Distribution
Suppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead?
If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution.
For example, if we know that $\lambda=1550$ and $k=4.25$, we can use weibull_dist to compute the probability that a bulb dies before you return:
End of explanation
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
Explanation: If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
End of explanation
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
Explanation: And here's what it looks like.
End of explanation
posterior_series = posterior_bulb.stack()
posterior_series.head()
Explanation: But that's based on the assumption that we know $\lambda$ and $k$, and we don't.
Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.
So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.
We can use make_mixture to compute the posterior predictive distribution.
It doesn't work with joint distributions, but we can convert the DataFrame that represents a joint distribution to a Series, like this:
End of explanation
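The idea behind the mixture is simple enough to sketch directly: the mixture PMF is the probability-weighted sum of the component binomial PMFs. Here is a minimal stand-in with two hypothetical parameter settings and made-up weights (not the utils implementation):

```python
import numpy as np
from scipy.stats import binom

n = 100
qs = np.arange(n + 1)

# Hypothetical death probabilities with posterior weights 0.3 and 0.7
components = [binom(n, 0.1).pmf(qs), binom(n, 0.2).pmf(qs)]
weights = [0.3, 0.7]

mixture = sum(w * pmf for w, pmf in zip(weights, components))

# Still a valid PMF, and its mean is the weighted mean of the components
assert abs(mixture.sum() - 1) < 1e-9
assert abs((qs * mixture).sum() - (0.3 * 10 + 0.7 * 20)) < 1e-9
```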
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
Explanation: The result is a Series with a MultiIndex that contains two "levels": the first level contains the values of k; the second contains the values of lam.
With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair.
End of explanation
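DataFrame.stack is doing the heavy lifting here; a tiny example with made-up numbers shows how a 2-D joint distribution becomes a Series indexed by (row, column) pairs:

```python
import pandas as pd

joint = pd.DataFrame([[0.1, 0.2], [0.3, 0.4]],
                     index=pd.Index([0.5, 1.0], name='k'),
                     columns=pd.Index([2, 3], name='lam'))

series = joint.stack()

# Each entry is indexed by a (k, lam) pair from the joint distribution
assert list(series.index) == [(0.5, 2), (0.5, 3), (1.0, 2), (1.0, 3)]
assert series[(1.0, 3)] == 0.4
```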
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
Explanation: Now we can use make_mixture, passing as parameters the posterior probabilities in posterior_series and the sequence of binomial distributions in pmf_seq.
End of explanation
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
Explanation: Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
End of explanation
# Solution
t = 1000
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
prob_dead = weibull_dist(lam_mesh, k_mesh).cdf(t)
prob_dead.shape
# Solution
from scipy.stats import binom
k = 20
n = 100
likelihood = binom(n, prob_dead).pmf(k)
likelihood.shape
# Solution
posterior_bulb3 = posterior_bulb * likelihood
normalize(posterior_bulb3)
plot_contour(posterior_bulb3)
decorate(title='Joint posterior distribution with k=20')
# Solution
# Since there were more dead bulbs than expected,
# the posterior mean is a bit less after the update.
joint_weibull_mean(posterior_bulb3)
Explanation: The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs.
Summary
This chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.
We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval.
These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime.
The methods in this chapter work with any distribution with two parameters.
In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena.
And in the next chapter we'll move on to models with three parameters!
Exercises
Exercise: Using data about the lifetimes of light bulbs, we computed the posterior distribution of the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours.
Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs.
Update the posterior distribution based on this data.
How much does it change the posterior mean?
Suggestions:
Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters.
For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100.
Use those likelihoods to update the posterior distribution.
End of explanation
import scipy.stats
def gamma_dist(k, theta):
    """Make a gamma object.

    k: shape parameter
    theta: scale parameter

    returns: gamma object
    """
return scipy.stats.gamma(k, scale=theta)
Explanation: Exercise: In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle.
Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day.
According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter
gamma distribution.
When we worked with the one-parameter gamma distribution in <<_TheGammaDistribution>>, we used the Greek letter $\alpha$ for the parameter.
For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or theta for the "scale parameter".
The following function takes these parameters and returns a gamma object from SciPy.
End of explanation
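With this parameterization the gamma distribution has mean $k\theta$ and variance $k\theta^2$, which makes a convenient check on the wrapper (a side sketch with arbitrary parameters):

```python
import scipy.stats


def gamma_dist(k, theta):
    # Same wrapper as above: k is the shape, theta is the scale
    return scipy.stats.gamma(k, scale=theta)


k, theta = 2.0, 0.5
dist = gamma_dist(k, theta)
assert abs(dist.mean() - k * theta) < 1e-9        # mean = k * theta
assert abs(dist.var() - k * theta ** 2) < 1e-9    # var = k * theta^2
```

These identities are also useful for choosing prior ranges: given a guess at the mean rainfall and its spread, you can back out plausible values of $k$ and $\theta$.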
# Load the data file
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv')
Explanation: Now we need some data.
The following cell downloads data I collected from the National Oceanic and Atmospheric Administration (NOAA) for Seattle, Washington in May 2020.
End of explanation
weather = pd.read_csv('2203951.csv')
weather.head()
Explanation: Now we can load it into a DataFrame:
End of explanation
rained = weather['PRCP'] > 0
rained.sum()
Explanation: I'll make a Boolean Series to indicate which days it rained.
End of explanation
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
Explanation: And select the total rainfall on the days it rained.
End of explanation
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
Explanation: Here's what the CDF of the data looks like.
End of explanation
# Solution
# I'll use the MLE parameters of the gamma distribution
# to help me choose priors
k_est, _, theta_est = scipy.stats.gamma.fit(prcp, floc=0)
k_est, theta_est
# Solution
# I'll use uniform priors for the parameters.
# I chose the upper bounds by trial and error.
ks = np.linspace(0.01, 2, num=51)
prior_k = make_uniform(ks, name='k')
# Solution
thetas = np.linspace(0.01, 1.5, num=51)
prior_theta = make_uniform(thetas, name='theta')
# Solution
# Here's the joint prior
prior = make_joint(prior_k, prior_theta)
# Solution
# I'll use a grid to compute the densities
k_mesh, theta_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, prcp)
# Solution
# Here's the 3-D array of densities
densities = gamma_dist(k_mesh, theta_mesh).pdf(data_mesh)
densities.shape
# Solution
# Which we reduce by multiplying along axis 2
likelihood = densities.prod(axis=2)
likelihood.sum()
# Solution
# Now we can do the update in the usual way
posterior = prior * likelihood
normalize(posterior)
# Solution
# And here's what the posterior looks like
plot_contour(posterior)
decorate(title='Posterior distribution, parameters of a gamma distribution')
# Solution
# I'll check the marginal distributions to make sure the
# range of the priors is wide enough
from utils import marginal
posterior_k = marginal(posterior, 0)
posterior_theta = marginal(posterior, 1)
# Solution
# The marginal distribution for k is close to 0 at both ends
posterior_k.plot(color='C4')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
# Solution
posterior_k.mean(), posterior_k.credible_interval(0.9)
# Solution
# Same with the marginal distribution of theta
posterior_theta.plot(color='C2')
decorate(xlabel='theta',
ylabel='PDF',
title='Posterior marginal distribution of theta')
# Solution
posterior_theta.mean(), posterior_theta.credible_interval(0.9)
# Solution
# To compute the posterior predictive distribution,
# I'll stack the joint posterior to make a Series
# with a MultiIndex
posterior_series = posterior.stack()
posterior_series.head()
# Solution
# I'll extend the predictive distribution up to 2 inches
low, high = 0.01, 2
# Solution
# Now we can iterate through `posterior_series`
# and make a sequence of predictive Pmfs, one
# for each possible pair of parameters
from utils import pmf_from_dist
qs = np.linspace(low, high, num=101)
pmf_seq = []
for (theta, k) in posterior_series.index:
dist = gamma_dist(k, theta)
pmf = pmf_from_dist(dist, qs)
pmf_seq.append(pmf)
# Solution
# And we can use `make_mixture` to make the posterior predictive
# distribution
post_pred = make_mixture(posterior_series, pmf_seq)
# Solution
# Here's what it looks like.
post_pred.make_cdf().plot(label='rainfall')
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Posterior predictive distribution of rainfall')
# Solution
# The probability of more than 1.5 inches of rain is small
cdf = post_pred.make_cdf()
p_gt = 1 - cdf(1.5)
p_gt
# Solution
# So it's easier to interpret as the number of rainy
# days between events, on average
1 / p_gt
Explanation: The maximum is 1.14 inches of rain in one day.
To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model.
I suggest you proceed in the following steps:
Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0.
Use the observed rainfalls to update the distribution of parameters.
Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day.
End of explanation |
Description:
Load the relevant libraries
Step1: set the stage for data visualization
Step2: Load the dataset and make a copy of it
Step3: List all the variables in the dataset
Step4: Data standardisation- Lowercase all the variable names
Step5: A quick peek at the dataset
Step6: Problem challenge # 1
Missing value treatment
A majority of the variables in this dataset are categorical. Therefore, I impute the missing values with the mode (replacing each missing value with the column's most frequently occurring value)
Step7: Problem challenge # 2
Using label encoding to convert categorical variables of interest to numeric codes (scikit-learn's LabelEncoder assigns one ordinal integer per category, which is label encoding rather than true one-hot encoding)
For reference see this Kaggle post by Mark, https
Step8: Another quick peek at the dataset. Notice, variables of interest like 'permit', 'extraction_type', 'payment_type', 'quality_group' have now been assigned dummy codes as required
Step9: explanatory variable is also known as the 'Independent variable' and response variable is also known as the 'dependent variable'
Variables of interest for this study are:
status_group,extraction_type_class,payment_type,quality_group,quantity_group,waterpoint_type_group,source_class,permit,water_quality
Correlational Analysis for variables of interest
From a previous post(), I want to determine if there is any relationship between water quality, water quantity, and water resource characteristics. So I use the regression methods as shown below to determine this.
Step10: Status group is the response or dependent variable and permit is the independent variable.
The number of observations reported in the output is the count of records with valid data that were included in the analysis.
The F-statistic is 66.57 and the p-value (Prob (F-statistic) = 3.44e-16) is very small, considerably less than our alpha level of 0.05, which tells us that we can reject the null hypothesis and conclude that permit is significantly associated with water pump status group.
The linear regression equation is Y = b0 + b1X, where X is the explanatory (independent) variable and Y is the response (dependent) variable. (Eqn. 1)
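The kind of fit that produces these statistics can be sketched with the statsmodels formula API (imported as smf in this notebook). The data below are synthetic stand-ins for the encoded status_group and permit columns, so the coefficients and F-statistic will not match the reported values:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the encoded dataset (hypothetical effect of 0.5)
rng = np.random.default_rng(0)
df = pd.DataFrame({'permit': rng.integers(0, 2, size=500)})
df['status_group'] = 1.0 + 0.5 * df['permit'] + rng.normal(0, 0.3, size=500)

model = smf.ols('status_group ~ permit', data=df).fit()

# model.params holds b0 (Intercept) and b1 (slope) from Y = b0 + b1*X;
# model.f_pvalue is the Prob (F-statistic) shown in the summary table.
b0, b1 = model.params['Intercept'], model.params['permit']
assert abs(b1 - 0.5) < 0.1      # the slope recovers the simulated effect
assert model.f_pvalue < 0.05    # significant at the alpha = 0.05 level
```

Calling model.summary() prints the full table, including the F-statistic and Prob (F-statistic) discussed above.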
Note | Python Code:
import pandas as pd # for data import and dissection
import numpy as np # for data analysis
import statsmodels.formula.api as smf
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Load the relevant libraries
End of explanation
plt.interactive(False)
sns.set(style="whitegrid",color_codes=True)
Explanation: set the stage for data visualization
End of explanation
# Reading the data; low_memory=False makes pandas scan the whole file before inferring dtypes, avoiding mixed-type columns
data= pd.read_csv("data-taarifa.csv", low_memory=False)
sub1=data.copy()
Explanation: Load the dataset and make a copy of it
End of explanation
list(sub1.columns.values)
Explanation: List all the variables in the dataset
End of explanation
#lowercase all variables
sub1.columns = [x.lower() for x in sub1.columns]
Explanation: Data standardisation- Lowercase all the variable names
End of explanation
sub1.head(5)
Explanation: A quick peek at the dataset
End of explanation
## To fill every column with its own most frequent value you can use
sub1 = sub1.apply(lambda x:x.fillna(x.value_counts().index[0]))
Explanation: Problem challenge # 1
Missing value treatment
A majority of the variables in this dataset are categorical. Therefore, I treat the missing values to the mode (replacement of missing values by most frequently occuring values)
End of explanation
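A minimal sketch of the same per-column mode imputation, shown on a tiny illustrative frame (not the actual Taarifa data), using `DataFrame.mode()` as an equivalent alternative to the `value_counts` approach:

```python
import numpy as np
import pandas as pd

# Tiny illustrative frame standing in for the Taarifa columns
df = pd.DataFrame({
    "permit": ["yes", np.nan, "yes", "no"],
    "payment_type": ["never", "monthly", np.nan, "never"],
})

# mode() gives the most frequent value per column; .iloc[0] picks that first row
filled = df.fillna(df.mode().iloc[0])
print(filled.isnull().sum().sum())  # -> 0, no missing values remain
```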
from sklearn import preprocessing
le_enc = preprocessing.LabelEncoder()
#to convert into numbers
sub1.permit = le_enc.fit_transform(sub1.permit)
sub1.extraction_type_class=le_enc.fit_transform(sub1.extraction_type_class)
sub1.payment_type=le_enc.fit_transform(sub1.payment_type)
sub1.quality_group=le_enc.fit_transform(sub1.quality_group)
sub1.quantity_group=le_enc.fit_transform(sub1.quantity_group)
sub1.waterpoint_type_group=le_enc.fit_transform(sub1.waterpoint_type_group)
sub1.water_quality=le_enc.fit_transform(sub1.water_quality)
sub1.source_class=le_enc.fit_transform(sub1.source_class)
sub1.status_group=le_enc.fit_transform(sub1.status_group)
Explanation: Problem challenge # 2
Using label encoding (sklearn's LabelEncoder) to convert categorical variables of interest into numeric codes
For reference see this Kaggle post by Mark, https://www.kaggle.com/c/titanic/forums/t/5379/handling-categorical-data-with-sklearn
End of explanation
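Note that `LabelEncoder` above performs label encoding (one integer code per category); strictly speaking, one-hot encoding would instead create one 0/1 indicator column per category, e.g. via `pd.get_dummies`. A small sketch of the difference, on illustrative data rather than the Taarifa frame:

```python
import pandas as pd

df = pd.DataFrame({"quality_group": ["good", "salty", "good", "unknown"]})

# Label encoding: one integer per category (what LabelEncoder does)
df["quality_code"] = df["quality_group"].astype("category").cat.codes

# One-hot encoding: one indicator column per distinct category
dummies = pd.get_dummies(df["quality_group"], prefix="quality")
print(df["quality_code"].tolist())  # -> [0, 1, 0, 2]
print(list(dummies.columns))        # -> ['quality_good', 'quality_salty', 'quality_unknown']
```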
sub1.head(5)
Explanation: Another quick peek at the dataset. Notice, variables of interest like 'permit', 'extraction_type', 'payment_type', 'quality_group' have now been assigned dummy codes as required
End of explanation
print ("OLS regression model for the association between water pump status group and permit")
reg1=smf.ols('status_group~permit',data=sub1).fit()
print (reg1.summary())
Explanation: The explanatory variable is also known as the 'independent variable', and the response variable as the 'dependent variable'
Variables of interest for this study are;
status_group,extraction_type_class,payment_type,quality_group,quantity_group,waterpoint_type_group,source_class,permit,water_quality
Correlational Analysis for variables of interest
From a previous post(), I want to determine whether there is any relationship between water quality, water quantity and water resource characteristics, so I use the regression methods shown below.
End of explanation
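As a sanity check on the `smf.ols` pattern used above, here is a minimal sketch on synthetic data (the coefficients and variable names are illustrative simulations, not taken from the Taarifa fit):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
toy = pd.DataFrame({"permit": rng.integers(0, 2, size=200)})
# Simulate a response with a known intercept (0.89) and slope (-0.07) plus noise
toy["status_group"] = 0.89 - 0.07 * toy["permit"] + rng.normal(0, 0.1, size=200)

fit = smf.ols("status_group ~ permit", data=toy).fit()
# The recovered coefficients should be close to the simulated ones
print(fit.params["Intercept"], fit.params["permit"])
```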
# Now, I continue to add the variables to this model to check for any loss of significance
print ("OLS regression model for the association between status_group and other variables of interest")
reg1=smf.ols('status_group~quantity_group+extraction_type_class+waterpoint_type_group',data=sub1).fit()
print (reg1.summary())
print ("OLS regression model for the association between water pump status group and all variables of interest")
reg1=smf.ols('status_group~extraction_type_class+payment_type+quality_group+quantity_group+waterpoint_type_group+source_class+permit+water_quality',data=sub1).fit()
print (reg1.summary())
scat1 = sns.regplot(x="status_group", y="quality_group", order=2, scatter=True, data=sub1)
plt.xlabel('Water pump status')
plt.ylabel ('quality of the water')
plt.title ('Scatterplot for the association between water pump status and water quality')
#print scat1
plt.show()
Explanation: Status group is the response or dependent variable and permit is the independent variable.
The number of observations shows how many observations had valid data and were thus included in the analysis.
The F-statistic is 66.57 and the p value (Prob (F-statistic) = 3.44e-16) is considerably less than our alpha level
of 0.05, which tells us that we can reject the null hypothesis and conclude that permit is significantly associated with water pump status group.
The linear regression equation Y = b0 + b1X where X is the explanatory variable or the independent variable and Y is the response or the dependent variable.--(EQN 1)
Note: EQN 1 matters because it also lets us predict Y. Next, we look at the parameter estimates, also called the coefficients or beta weights.
The coefficient for permit is -0.0697 and the intercept is 0.8903.
Then the best-fit line for permit is: status_group = 0.89 - 0.07*permit -- (EQN 2)
For example, suppose we are told that in some region 80% of water points have valid permits; can we predict the status of the water pumps there?
Yes, we plug permit = 0.8 into EQN 2 with b0 = 0.89 and b1 = -0.07.
Then y(hat) = 0.89 - 0.07*0.8 = 0.83, i.e. the predicted status score is about 0.83, slightly lower than for a region with no permits.
Also note the P>|t| value for permit is very small (0.000), while the R-squared value is only 0.001.
So this model accounts for only 0.1% of the variability that we see in our response variable status_group.
End of explanation |
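The plug-in prediction from EQN 2 can be made concrete with a few lines (coefficients rounded from the OLS summary above; the helper name is my own):

```python
# Coefficients rounded from the OLS summary: intercept 0.8903, permit slope -0.0697
b0, b1 = 0.89, -0.07

def predict_status(permit_share):
    """Fitted-line prediction y_hat = b0 + b1 * x for a given permit share."""
    return b0 + b1 * permit_share

# e.g. a region where 80% of water points hold valid permits
print(round(predict_status(0.8), 3))  # -> 0.834
```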