<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
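For reference, the logistic sigmoid maps any real-valued logit into the open interval (0, 1), which is exactly the range of the normalized pixels:

$$\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \sigma(z) \in (0, 1)$$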
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
input_dim = output_dim = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, input_dim), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, output_dim), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits (the decoder takes the hidden layer as input)
logits = tf.layers.dense(encoded, output_dim, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to create a decision tree from the above training dataset. The first step for that is to encode the data into numeric values and bind them to Shogun's features and multiclass labels.
Step2: Next, we learn our decision tree using the features and labels created.
Step3: Our decision tree is ready now and we want to use it to make some predictions over test data. So, let us create some test data examples first.
Step4: Next, as with training data, we encode our test dataset and bind it to Shogun features. Then, we apply our decision tree to the test examples to obtain the predicted labels.
Step5: Finally let us tabulate the results obtained and compare them with our intuitive predictions.
Step6: So, do the predictions made by our decision tree match our inferences from the training set? Yes! For example, from the training set we infer that individuals with low income have low usage and that all individuals who attended college have medium usage. The decision tree predicts the same for both cases.
Step7: Next, let us read the file and form Shogun features and labels.
Step8: From the entire dataset, let us choose some test vectors to form our test dataset.
Step9: Next step is to train our decision tree using the training features and applying it to our test dataset to get predicted output classes.
Step10: Finally, let us compare our predicted labels with test labels to find out the percentage error of our classification model.
Step11: We see that the accuracy is moderately high. Thus our decision tree can evaluate any car, given its features, with a high success rate. As a final exercise, let us examine the effect of training dataset size on the accuracy of the decision tree.
Step12: NOTE
Step13: In the above plot the training data points are marked with crosses of different colours, where each colour corresponds to a particular label. The test data points are marked by black circles. For us it is a trivial task to assign the correct colours (i.e. labels) to the black points. Let us see how accurately C4.5 assigns colours to these test points.
Step14: Now that we have trained the decision tree, we can use it to classify our test vectors.
Step15: Let us use the output labels to colour our test data points to qualitatively judge the performance of the decision tree.
Step16: We see that the decision tree trained using the C4.5 algorithm works almost perfectly in this toy dataset. Now let us try this algorithm on a real world dataset.
Step17: Because there is no separate test dataset, we first divide the given dataset into training and testing subsets.
Step18: Before marching forward with applying C4.5, let us plot the data to get a better understanding. The given data points are 4-D and hence cannot be conveniently plotted. We need to reduce the number of dimensions to 2. This reduction can be achieved using any dimension reduction algorithm like PCA. However for the sake of brevity, let us just choose two highly correlated dimensions, petal width and petal length (see summary statistics), right away for plotting.
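As an aside, a minimal sketch of how such a 2-D reduction could be done with PCA in plain NumPy is shown below; it is illustrative only (it assumes feat is the 4 x n feature matrix built in the previous step) and is not used in the rest of the notebook.
from numpy import cov, linalg
# centre the 4-D features (feat has one row per attribute)
centered = feat - feat.mean(axis=1, keepdims=True)
# eigen-decomposition of the 4x4 covariance matrix (eigenvalues ascending)
evals, evecs = linalg.eigh(cov(centered))
# project onto the two principal components with the largest eigenvalues
feat_2d = evecs[:, -2:].T.dot(centered)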
Step19: First, let us create Shogun features and labels from the given data.
Step20: We know for a fact that decision trees tend to overfit, so pruning becomes a necessary step. In the case of the toy dataset we skipped the pruning step because the dataset was simple and noise-free, but for a real dataset like the Iris dataset pruning cannot be skipped. So we have to partition the training dataset into a training subset and a validation subset.
Step21: Now we train the decision tree first, then prune it and finally use it to get output labels for test vectors.
Step22: Let us calculate the accuracy of the classification made by our tree as well as plot the results for qualitative evaluation.
Step23: From the evaluation of results, we infer that, with the help of a C4.5 trained decision tree, we can predict (with high accuracy) the type of Iris plant given its petal and sepal widths and lengths.
Step24: Next, we supply the necessary parameters to the CART algorithm and use it to train our decision tree.
Step25: In the above code snippet, we see four parameters being supplied to the CART tree object. feat_types supplies knowledge of the attribute types of the training data to the CART algorithm, and problem_type specifies whether it is a multiclass classification problem (PT_MULTICLASS) or a regression problem (PT_REGRESSION). The boolean parameter use_cv_pruning switches on cross-validation pruning of the trained tree, and num_folds specifies the number of folds of cross-validation to be applied while pruning. At this point, let us divert ourselves briefly towards understanding what kind of pruning strategy is employed by Shogun's CART implementation. The CART algorithm uses the cost-complexity pruning strategy. Cost-complexity pruning yields a list of subtrees of varying depths using the complexity-normalized resubstitution error, $R_\alpha(T)$. The resubstitution error, $R(T)$, measures how well a decision tree fits the training data, but this measure favours larger trees over smaller ones. Hence the complexity-normalized resubstitution error metric is used, which adds a penalty for increased complexity and in turn counters overfitting.
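For reference, the standard cost-complexity measure (as defined by Breiman et al.; the notebook does not spell out Shogun's exact form, so take this as the textbook version) combines the two terms as

$$R_\alpha(T) = R(T) + \alpha\,|\tilde{T}|$$

where $|\tilde{T}|$ is the number of leaf nodes of the subtree $T$ and $\alpha \ge 0$ is the complexity parameter: the larger $\alpha$ is, the more heavily large trees are penalized.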
Step26: Regression example using toy data
Step27: Next, we train our CART-tree.
Step28: Now let us use the trained decision tree to regress over the entire range of the previously depicted sinusoid.
Step29: As we can see from the above plot, CART-induced decision tree follows the reference sinusoid quite beautifully!
Step30: Next, we setup the model which is CART-tree in this case.
Step31: Finally we can use Shogun's cross-validation class to get performance.
Step32: We get a mean accuracy of about 0.93-0.94. This number essentially means that a CART-tree trained using this dataset is expected to classify Iris flowers, given their required attributes, with an accuracy of 93-94% in a real world scenario. The parameters required by Shogun's cross-validation class should be noted in the above code snippet. The class requires the model, training features, training labels, splitting strategy and evaluation method to be specified.
Step33: The servo dataset is a small training dataset (contains just 167 training vectors) with no separate test dataset, like the Iris dataset. Hence we will apply the same cross-validation strategy we applied in case of the Iris dataset. However, to make things interesting let us play around with a yet-untouched parameter of CART-induced tree i.e. the maximum allowed tree depth. As the tree depth increases, the tree becomes more complex and hence fits the training data more closely. By setting a maximum allowed tree depth, we restrict the complexity of trained tree and hence avoid over-fitting. But choosing a low value of the maximum allowed tree depth may lead to early stopping i.e. under-fitting. Let us explore how we can decide the appropriate value of the max-allowed-tree-depth parameter. Let us create a method, which takes max-allowed-tree-depth parameter as input and returns the corresponding cross-validated error as output.
Step34: Next, let us supply a range of max_depth values to the above method and plot the returned cross-validated errors.
Step35: The above plot quite clearly gives us the most appropriate value of the maximum allowed depth. We see that the first minimum occurs at a maximum allowed depth of 6-8, so one of these should be the desired value. Note that the error metric we are discussing here is the mean squared error. Thus, from the above plot, we can also claim that, given the required parameters, our CART-flavoured decision tree can predict the rise time within an average error range of $\pm0.5$ (i.e. the square root of 0.25, which is the approximate minimum cross-validated error). The relative error, i.e. average_error/range_of_labels, comes out to be ~30%.
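As a quick sanity check on that arithmetic (the label range below is inferred from the stated numbers, not read from the dataset):

$$\text{average error} \approx \sqrt{0.25} = 0.5, \qquad \frac{0.5}{\text{range of labels}} \approx 0.30 \;\Rightarrow\; \text{range of labels} \approx 1.7$$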
Step36: Now, we set up our CHAID-tree with appropriate parameters and train over given data.
Step37: An important point to be noted in the above code snippet is that CHAID training modifies the training data. The actual continuous feature values are replaced by the discrete ordinal values obtained during continuous to ordinal conversion. Notice the difference between the original feature matrix and the updated matrix. The updated matrix contains only 10 distinct values denoting all values of the original matrix for feature dimension at row index 1.
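The exact binning rule is internal to Shogun, but a continuous-to-ordinal conversion of this kind can be sketched in plain NumPy as follows; quantile-based binning into 10 bins is an assumption for illustration here, not necessarily Shogun's rule.
import numpy as np
def to_ordinal(values, num_bins=10):
    # bin edges placed at equally spaced quantiles of the data
    edges = np.percentile(values, np.linspace(0, 100, num_bins + 1))
    # map each value to the index of the bin it falls into (0 .. num_bins-1)
    return np.clip(np.digitize(values, edges[1:-1]), 0, num_bins - 1)
Applied to a feature row, this reproduces the kind of 10-distinct-value matrix described above.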
Step38: Regression example with toy dataset
Step39: As usual, we start by setting up our decision tree and training it.
Step40: Next, we use the trained decision tree to follow the reference sinusoid.
Step41: A distinguishing feature about the predicted curve is the presence of steps. These steps are essentially an artifact of continuous to ordinal conversion. If we decrease the number of bins for the conversion the step widths will increase.
Step42: Like the case of CART, here we are also interested in finding out the approximate accuracy with which our CHAID tree trained on this dataset will perform in real world. Hence, we will apply the cross validation strategy. But first we specify the parameters of the CHAID tree.
Step43: Next we set up the cross-validation class and get back the error estimate we want i.e mean classification error.
Step44: Regression example using real dataset
Step45: Next, we set up the parameters for the CHAID tree as well as the cross-validation class.
<ASSISTANT_TASK:>
Python Code:
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')
# training data
train_income=['Low','Medium','Low','High','Low','High','Medium','Medium','High','Low','Medium',
'Medium','High','Low','Medium']
train_age = ['Old','Young','Old','Young','Old','Young','Young','Old','Old','Old','Young','Old',
'Old','Old','Young']
train_education = ['University','College','University','University','University','College','College',
'High School','University','High School','College','High School','University','High School','College']
train_marital = ['Married','Single','Married','Single','Married','Single','Married','Single','Single',
'Married','Married','Single','Single','Married','Married']
train_usage = ['Low','Medium','Low','High','Low','Medium','Medium','Low','High','Low','Medium','Low',
'High','Low','Medium']
# print data
print 'Training Data Table : \n'
print 'Income \t\t Age \t\t Education \t\t Marital Status \t Usage'
for i in xrange(len(train_income)):
print train_income[i]+' \t\t '+train_age[i]+' \t\t '+train_education[i]+' \t\t '+train_marital[i]+' \t\t '+train_usage[i]
from modshogun import ID3ClassifierTree, RealFeatures, MulticlassLabels
from numpy import array, concatenate
# encoding dictionary
income = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}
age = {'Young' : 1.0, 'Old' : 2.0}
education = {'High School' : 1.0, 'College' : 2.0, 'University' : 3.0}
marital_status = {'Married' : 1.0, 'Single' : 2.0}
usage = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}
# encode training data
for i in xrange(len(train_income)):
train_income[i] = income[train_income[i]]
train_age[i] = age[train_age[i]]
train_education[i] = education[train_education[i]]
train_marital[i] = marital_status[train_marital[i]]
train_usage[i] = usage[train_usage[i]]
# form Shogun feature matrix
train_data = array([train_income, train_age, train_education, train_marital])
train_feats = RealFeatures(train_data);
# form Shogun multiclass labels
labels = MulticlassLabels(array(train_usage));
# create ID3ClassifierTree object
id3 = ID3ClassifierTree()
# set labels
id3.set_labels(labels)
# learn the tree from training features
is_successful = id3.train(train_feats)
# test data
test_income = ['Medium','Medium','Low','High','High']
test_age = ['Old','Young','Old','Young','Old']
test_education = ['University','College','High School','University','College']
test_marital = ['Married','Single','Married','Single','Married']
test_usage = ['Low','Medium','Low','High','High']
# tabulate test data
print 'Test Data Table : \n'
print 'Income \t\t Age \t\t Education \t\t Marital Status \t Usage'
for i in xrange(len(test_income)):
print test_income[i]+' \t\t '+test_age[i]+' \t\t '+test_education[i]+' \t\t '+test_marital[i]+' \t\t ?'
# encode test data
for i in xrange(len(test_income)):
test_income[i] = income[test_income[i]]
test_age[i] = age[test_age[i]]
test_education[i] = education[test_education[i]]
test_marital[i] = marital_status[test_marital[i]]
# bind to shogun features
test_data = array([test_income, test_age, test_education, test_marital])
test_feats = RealFeatures(test_data)
# apply decision tree classification
test_labels = id3.apply_multiclass(test_feats)
output = test_labels.get_labels();
output_labels=[0]*len(output)
# decode back test data for printing
for i in xrange(len(test_income)):
test_income[i]=income.keys()[income.values().index(test_income[i])]
test_age[i]=age.keys()[age.values().index(test_age[i])]
test_education[i]=education.keys()[education.values().index(test_education[i])]
test_marital[i]=marital_status.keys()[marital_status.values().index(test_marital[i])]
output_labels[i]=usage.keys()[usage.values().index(output[i])]
# print output data
print 'Final Test Data Table : \n'
print 'Income \t Age \t Education \t Marital Status \t Usage(predicted)'
for i in xrange(len(test_income)):
print test_income[i]+' \t '+test_age[i]+' \t '+test_education[i]+' \t '+test_marital[i]+' \t\t '+output_labels[i]
# class attribute
evaluation = {'unacc' : 1.0, 'acc' : 2.0, 'good' : 3.0, 'vgood' : 4.0}
# non-class attributes
buying = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}
maint = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}
doors = {'2' : 1.0, '3' : 2.0, '4' : 3.0, '5more' : 4.0}
persons = {'2' : 1.0, '4' : 2.0, 'more' : 3.0}
lug_boot = {'small' : 1.0, 'med' : 2.0, 'big' : 3.0}
safety = {'low' : 1.0, 'med' : 2.0, 'high' : 3.0}
f = open( os.path.join(SHOGUN_DATA_DIR, 'uci/car/car.data'), 'r')
features = []
labels = []
# read data from file and encode
for line in f:
words = line.rstrip().split(',')
words[0] = buying[words[0]]
words[1] = maint[words[1]]
words[2] = doors[words[2]]
words[3] = persons[words[3]]
words[4] = lug_boot[words[4]]
words[5] = safety[words[5]]
words[6] = evaluation[words[6]]
features.append(words[0:6])
labels.append(words[6])
f.close()
from numpy import random, delete
features = array(features)
labels = array(labels)
# number of test vectors
num_test_vectors = 200;
test_indices = random.randint(features.shape[0], size = num_test_vectors)
test_features = features[test_indices]
test_labels = labels[test_indices]
# remove test vectors from training set
features = delete(features,test_indices,0)
labels = delete(labels,test_indices,0)
# shogun test features and labels
test_feats = RealFeatures(test_features.T)
test_labels = MulticlassLabels(test_labels)
# method for ID3 training and testing
def ID3_routine(features, labels):
# Shogun train features and labels
train_feats = RealFeatures(features.T)
train_lab = MulticlassLabels(labels)
# create ID3ClassifierTree object
id3 = ID3ClassifierTree()
# set labels
id3.set_labels(train_lab)
# learn the tree from training features
id3.train(train_feats)
# apply to test dataset
output = id3.apply_multiclass(test_feats)
return output
output = ID3_routine(features, labels)
from modshogun import MulticlassAccuracy
# Shogun object for calculating multiclass accuracy
accuracy = MulticlassAccuracy()
print 'Accuracy : ' + str(accuracy.evaluate(output, test_labels))
# list of error rates for all training dataset sizes
error_rate = []
# number of error rate readings taken for each value of dataset size
num_repetitions = 3
# loop over training dataset size
for i in range(500,1600,200):
indices = random.randint(features.shape[0], size = i)
train_features = features[indices]
train_labels = labels[indices]
average_error = 0
for j in xrange(num_repetitions):
output = ID3_routine(train_features, train_labels)
average_error = average_error + (1-accuracy.evaluate(output, test_labels))
error_rate.append(average_error/num_repetitions)
# plot the error rates
import matplotlib.pyplot as pyplot
% matplotlib inline
from scipy.interpolate import interp1d
from numpy import linspace, arange
fig,axis = pyplot.subplots(1,1)
x = arange(500,1600,200)
f = interp1d(x, error_rate)
xnew = linspace(500,1500,100)
pyplot.plot(x,error_rate,'o',xnew,f(xnew),'-')
pyplot.xlim([400,1600])
pyplot.xlabel('training dataset size')
pyplot.ylabel('Classification Error')
pyplot.title('Decision Tree Performance')
pyplot.show()
import matplotlib.pyplot as plt
from numpy import ones, zeros, random, concatenate
from modshogun import RealFeatures, MulticlassLabels
% matplotlib inline
def create_toy_classification_dataset(ncat,do_plot):
# create attribute values and labels for class 1
x = ones((1,ncat))
y = 1+random.rand(1,ncat)*4
lab = zeros(ncat)
# add attribute values and labels for class 2
x = concatenate((x,ones((1,ncat))),1)
y = concatenate((y,5+random.rand(1,ncat)*4),1)
lab = concatenate((lab,ones(ncat)))
# add attribute values and labels for class 3
x = concatenate((x,2*ones((1,ncat))),1)
y = concatenate((y,1+random.rand(1,ncat)*8),1)
lab = concatenate((lab,2*ones(ncat)))
# create test data
ntest = 20
x_t = concatenate((ones((1,3*ntest/4)),2*ones((1,ntest/4))),1)
y_t = 1+random.rand(1,ntest)*8
if do_plot:
# plot training data
c = ['r','g','b']
for i in range(3):
plt.scatter(x[0,lab==i],y[0,lab==i],color=c[i],marker='x',s=50)
# plot test data
plt.scatter(x_t[0,:],y_t[0,:],color='k',s=10,alpha=0.8)
plt.xlabel('attribute X')
plt.ylabel('attribute Y')
plt.show()
# form training feature matrix
train_feats = RealFeatures(concatenate((x,y),0))
# from training labels
train_labels = MulticlassLabels(lab)
# from test feature matrix
test_feats = RealFeatures(concatenate((x_t,y_t),0))
return (train_feats,train_labels,test_feats);
train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)
from numpy import array
from modshogun import C45ClassifierTree
# steps in C4.5 Tree training bundled together in a python method
def train_tree(feats,types,labels):
# C4.5 Tree object
tree = C45ClassifierTree()
# set labels
tree.set_labels(labels)
# supply attribute types
tree.set_feature_types(types)
# supply training matrix and train
tree.train(feats)
return tree
# specify attribute types X is categorical hence True, Y is continuous hence False
feat_types = array([True,False])
# get back trained tree
C45Tree = train_tree(train_feats,feat_types,train_labels)
def classify_data(tree,data):
# get classification labels
output = tree.apply_multiclass(data)
# get classification certainty
output_certainty=tree.get_certainty_vector()
return output,output_certainty
out_labels,out_certainty = classify_data(C45Tree,test_feats)
from numpy import int32
# plot results
def plot_toy_classification_results(train_feats,train_labels,test_feats,test_labels):
train = train_feats.get_feature_matrix()
lab = train_labels.get_labels()
test = test_feats.get_feature_matrix()
out_labels = test_labels.get_labels()
c = ['r','g','b']
for i in range(out_labels.size):
plt.scatter(test[0,i],test[1,i],color=c[int32(out_labels[i])],s=50)
# plot training dataset for visual comparison
for i in range(3):
plt.scatter(train[0,lab==i],train[1,lab==i],color=c[i],marker='x',s=30,alpha=0.7)
plt.show()
plot_toy_classification_results(train_feats,train_labels,test_feats,out_labels)
import csv
from numpy import array
# dictionary to encode class names to class labels
to_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([float(i) for i in row[0:4]])
lab.append(to_label[row[4]])
lab = array(lab)
feat = array(feat).T
from numpy import int32, random
# no.of vectors in test dataset
ntest = 25
# no. of vectors in train dataset
ntrain = 150-ntest
# randomize the order of vectors
subset = int32(random.permutation(150))
# choose 1st ntrain from randomized set as training vectors
feats_train = feat[:,subset[0:ntrain]]
# form training labels correspondingly
train_labels = lab[subset[0:ntrain]]
# form test features and labels (for accuracy evaluations)
feats_test = feat[:,subset[ntrain:ntrain+ntest]]
test_labels = lab[subset[ntrain:ntrain+ntest]]
import matplotlib.pyplot as plt
% matplotlib inline
# plot training features
c = ['r', 'g', 'b']
for i in range(3):
plt.scatter(feats_train[2,train_labels==i],feats_train[3,train_labels==i],color=c[i],marker='x')
# plot test data points in black
plt.scatter(feats_test[2,:],feats_test[3,:],color='k',marker='o')
plt.show()
from modshogun import RealFeatures, MulticlassLabels
# training data
feats_train = RealFeatures(feats_train)
train_labels = MulticlassLabels(train_labels)
# test data
feats_test = RealFeatures(feats_test)
test_labels = MulticlassLabels(test_labels)
# randomize the order of vectors
subset = int32(random.permutation(ntrain))
nvalidation = 45
# form training subset and validation subset
train_subset = subset[0:ntrain-nvalidation]
validation_subset = subset[ntrain-nvalidation:ntrain]
# set attribute types - all continuous
feature_types = array([False, False, False, False])
# remove validation subset before training the tree
feats_train.add_subset(train_subset)
train_labels.add_subset(train_subset)
# train tree
C45Tree = train_tree(feats_train,feature_types,train_labels)
# bring back validation subset
feats_train.remove_subset()
train_labels.remove_subset()
# remove data belonging to training subset
feats_train.add_subset(validation_subset)
train_labels.add_subset(validation_subset)
# prune the tree
C45Tree.prune_tree(feats_train,train_labels)
# bring back training subset
feats_train.remove_subset()
train_labels.remove_subset()
# get results
output, output_certainty = classify_data(C45Tree,feats_test)
from modshogun import MulticlassAccuracy
# Shogun object for calculating multiclass accuracy
accuracy = MulticlassAccuracy()
print 'Accuracy : ' + str(accuracy.evaluate(output, test_labels))
# convert MulticlassLabels object to labels vector
output = output.get_labels()
test_labels = test_labels.get_labels()
train_labels = train_labels.get_labels()
# convert RealFeatures object to matrix
feats_test = feats_test.get_feature_matrix()
feats_train = feats_train.get_feature_matrix()
# plot ground truth
for i in range(3):
plt.scatter(feats_test[2,test_labels==i],feats_test[3,test_labels==i],color=c[i],marker='x',s=100)
# plot predicted labels
for i in range(output.size):
plt.scatter(feats_test[2,i],feats_test[3,i],color=c[int32(output[i])],marker='o',s=30*output_certainty[i])
plt.show()
train_feats,train_labels,test_feats=create_toy_classification_dataset(20,True)
from modshogun import PT_MULTICLASS, CARTree
from numpy import array
def train_carttree(feat_types,problem_type,num_folds,use_cv_pruning,labels,features):
# create CART tree object
c = CARTree(feat_types,problem_type,num_folds,use_cv_pruning)
# set training labels
c.set_labels(labels)
# train using training features
c.train(features)
return c
# form feature types True for nominal (attribute X), False for ordinal/continuous (attribute Y)
ft = array([True, False])
# get back trained tree
cart = train_carttree(ft, PT_MULTICLASS, 5, True, train_labels, train_feats)
from numpy import int32
# get output labels
output_labels = cart.apply_multiclass(test_feats)
plot_toy_classification_results(train_feats,train_labels,test_feats,output_labels)
from modshogun import RegressionLabels, RealFeatures
from numpy import random, sin, linspace
import matplotlib.pyplot as plt
% matplotlib inline
def create_toy_regression_dataset(nsamples,noise_var):
# randomly choose positions in X axis between 0 to 16
samples_x = random.rand(1,nsamples)*16
# find out y (=sin(x)) values for the sampled x positions and add noise to it
samples_y = sin(samples_x)+(random.rand(1,nsamples)-0.5)*noise_var
# plot the samples
plt.scatter(samples_x,samples_y,color='b',marker='x')
# create training features
train_feats = RealFeatures(samples_x)
# training labels
train_labels = RegressionLabels(samples_y[0,:])
return (train_feats,train_labels)
# plot the reference sinusoid
def plot_ref_sinusoid():
plot_x = linspace(-2,18,100)
plt.plot(plot_x,sin(plot_x),color='y',linewidth=1.5)
plt.xlabel('Feature values')
plt.ylabel('Labels')
plt.xlim([-3,19])
plt.ylim([-1.5,1.5])
# number of samples is 300, noise variance is 0.5
train_feats,train_labels = create_toy_regression_dataset(300,0.5)
plot_ref_sinusoid()
plt.show()
from modshogun import PT_REGRESSION
from numpy import array
# feature type - continuous
feat_type = array([False])
# get back trained tree
cart = train_carttree(feat_type, PT_REGRESSION, 5, True, train_labels, train_feats)
def plot_predicted_sinusoid(cart):
# regression range - 0 to 16
x_test = array([linspace(0,16,100)])
# form Shogun features
test_feats = RealFeatures(x_test)
# apply regression using our previously trained CART-tree
regression_output = cart.apply_regression(test_feats).get_labels()
# plot the result
plt.plot(x_test[0,:],regression_output,linewidth=2.0)
# plot reference sinusoid
plot_ref_sinusoid()
plt.show()
plot_predicted_sinusoid(cart)
import csv
from numpy import array
import matplotlib.pylab as plt
% matplotlib inline
# dictionary to encode class names to class labels
to_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([float(i) for i in row[0:4]])
lab.append(to_label[row[4]])
lab = array(lab)
feat = array(feat).T
# plot the dataset using two highly correlated attributes
c = ['r', 'g', 'b']
for i in range(3):
plt.scatter(feat[2,lab==i],feat[3,lab==i],color=c[i],marker='x')
plt.show()
from modshogun import CARTree, PT_MULTICLASS
# set attribute types - all continuous
feature_types = array([False, False, False, False])
# setup CART-tree with cross validation pruning switched off
cart = CARTree(feature_types,PT_MULTICLASS,5,False)
from modshogun import RealFeatures, MulticlassLabels
from modshogun import CrossValidation, MulticlassAccuracy, CrossValidationSplitting, CrossValidationResult
# training features
feats_train = RealFeatures(feat)
# training labels
labels_train = MulticlassLabels(lab)
# set evaluation criteria - multiclass accuracy
accuracy = MulticlassAccuracy()
# set splitting criteria - 10 fold cross-validation
split = CrossValidationSplitting(labels_train,10)
# set cross-validation parameters
cross_val = CrossValidation(cart,feats_train,labels_train,split,accuracy,False)
# run cross-validation multiple times - to get better estimate of accuracy
cross_val.set_num_runs(10)
# get cross validation result
result = cross_val.evaluate()
# print result
print('Mean Accuracy : ' + str(CrossValidationResult.obtain_from_generic(result).mean))
from numpy import array
# dictionary to convert string features to integer values
to_int = {'A' : 1, 'B' : 2, 'C' : 3, 'D' : 4, 'E' : 5}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/servo/servo.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([to_int[row[0]], to_int[row[1]], float(row[2]), float(row[3])])
lab.append(float(row[4]))
lab = array(lab)
feat = array(feat).T
from modshogun import CARTree, RegressionLabels, PT_REGRESSION, MeanSquaredError
from modshogun import CrossValidation, CrossValidationSplitting, CrossValidationResult
# form training features
feats_train = RealFeatures(feat)
# form training labels
labels_train = RegressionLabels(lab)
def get_cv_error(max_depth):
# set attribute types - 2 nominal and 2 ordinal
feature_types = array([True, True, False, False])
# setup CART-tree with cross validation pruning switched off
cart = CARTree(feature_types,PT_REGRESSION,5,False)
# set max allowed depth
cart.set_max_depth(max_depth)
# set evaluation criteria - mean squared error
accuracy = MeanSquaredError()
# set splitting criteria - 10 fold cross-validation
split = CrossValidationSplitting(labels_train,10)
# set cross-validation parameters
cross_val = CrossValidation(cart,feats_train,labels_train,split,accuracy,False)
# run cross-validation multiple times
cross_val.set_num_runs(10)
# return cross validation result
return CrossValidationResult.obtain_from_generic(cross_val.evaluate()).mean
import matplotlib.pyplot as plt
cv_errors = [get_cv_error(i) for i in range(1,15)]
plt.plot(range(1,15),cv_errors,'bo',range(1,15),cv_errors,'k')
plt.xlabel('max_allowed_depth')
plt.ylabel('cross-validated error')
plt.ylim(0,1.2)
plt.show()
train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)
from modshogun import PT_MULTICLASS, CHAIDTree
from numpy import array, dtype, int32
def train_chaidtree(dependent_var_type,feature_types,num_bins,features,labels):
# create CHAID tree object
c = CHAIDTree(dependent_var_type,feature_types,num_bins)
# set training labels
c.set_labels(labels)
# train using training features
c.train(features)
return c
# form feature types 0 for nominal (attribute X), 2 for continuous (attribute Y)
ft = array([0, 2],dtype=int32)
# cache training matrix
train_feats_cache=RealFeatures(train_feats.get_feature_matrix())
# get back trained tree - dependent variable type is nominal (hence 0), number of bins for binning is 10
chaid = train_chaidtree(0,ft,10,train_feats,train_labels)
print('updated_matrix')
print(train_feats.get_feature_matrix())
print('')
print('original_matrix')
print(train_feats_cache.get_feature_matrix())
# get output labels
output_labels = chaid.apply_multiclass(test_feats)
plot_toy_classification_results(train_feats_cache,train_labels,test_feats,output_labels)
train_feats,train_labels = create_toy_regression_dataset(300,0.5)
plot_ref_sinusoid()
plt.show()
from numpy import dtype, int32, array
# feature type - continuous
feat_type = array([2],dtype=int32)
# get back trained tree
chaid = train_chaidtree(2,feat_type, 50, train_feats, train_labels)
plot_predicted_sinusoid(chaid)
from modshogun import CSVFile, RealFeatures, MulticlassLabels
train_feats=RealFeatures(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))
train_labels=MulticlassLabels(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))
from modshogun import CHAIDTree, MulticlassLabels
# set attribute types - all attributes are continuous(2)
feature_types = array([2 for i in range(13)],dtype=int32)
# setup CHAID tree - dependent variable is nominal(0), feature types set, number of bins(20)
chaid = CHAIDTree(0,feature_types,20)
# set up cross validation class
from modshogun import CrossValidation, CrossValidationSplitting, CrossValidationResult, MulticlassAccuracy
# set evaluation criteria - multiclass accuracy
accuracy = MulticlassAccuracy()
# set splitting criteria - 10 fold cross-validation
split = CrossValidationSplitting(train_labels,10)
# set cross-validation parameters
cross_val = CrossValidation(chaid,train_feats,train_labels,split,accuracy,False)
# run cross-validation multiple times
cross_val.set_num_runs(10)
print('Mean classification accuracy : '+str(CrossValidationResult.obtain_from_generic(cross_val.evaluate()).mean*100)+' %')
from modshogun import CSVFile, RealFeatures, RegressionLabels
from numpy import ptp
train_feats=RealFeatures(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
train_labels=RegressionLabels(CSVFile( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
# print range of regression labels - this is useful for calculating relative deviation later
print('labels range : '+str(ptp(train_labels.get_labels())))
from modshogun import CHAIDTree, MeanSquaredError
from modshogun import CrossValidation, CrossValidationSplitting, CrossValidationResult
from numpy import array, dtype, int32
def get_cv_error(max_depth):
# set feature types - all continuous(2) except the 4th column, which is nominal(0), and the 9th and 10th columns, which are ordinal(1)
feature_types = array([2]*13,dtype=int32)
feature_types[3]=0
feature_types[8]=1
feature_types[9]=1
# setup CHAID-tree
chaid = CHAIDTree(2,feature_types,10)
# set max allowed depth
chaid.set_max_tree_depth(max_depth)
# set evaluation criteria - mean squared error
accuracy = MeanSquaredError()
# set splitting criteria - 5 fold cross-validation
split = CrossValidationSplitting(train_labels,5)
# set cross-validation parameters
cross_val = CrossValidation(chaid,train_feats,train_labels,split,accuracy,False)
# run cross-validation multiple times
cross_val.set_num_runs(3)
# return cross validation result
return CrossValidationResult.obtain_from_generic(cross_val.evaluate()).mean
import matplotlib.pyplot as plt
% matplotlib inline
cv_errors = [get_cv_error(i) for i in range(1,10)]
plt.plot(range(1,10),cv_errors,'bo',range(1,10),cv_errors,'k')
plt.xlabel('max_allowed_depth')
plt.ylabel('cross-validated error')
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below is a two-dimensional array of floating-point data with 2 rows and 3 columns. Note that the shape tuple grows to the left,
Step2: Array manipulation
Step3: Note that NumPy allows n-dimensional arrays. For grayscale images we will work with two-dimensional arrays, but
Step4: Note that in the last case, using linspace, the sequence starts at 0, ends at 2 and must contain 5 elements. Note that
Step5: Simple slicing example
Step6: Slicing example with negative indices
Step7: Reversing the array with a negative step (step = -1)
Step8: Advanced slicing
Step9: Omitting the upper-limit index
Step10: Omitting the step index
Step11: All elements with unit step
Step12: Slicing a two-dimensional ndarray
Step13: Slicing rows and columns of an array
Step14: Slicing specific elements of an array
Step15: Slicing with reversed indices
Step16: Copying ndarray variables
Step17: Note that even in the return value of a function, the explicit copy may not happen. See the following example
Step18: Shallow copy
Step19: Slicing
Step20: This next example is an attractive way to process one column of a two-dimensional array,
Step21: Transpose
Step22: Ravel
Step23: Deep copy
Step24: Matrix operations
Step25: Multiplying an array by a scalar
Step26: Adding arrays
Step27: Transpose of a matrix
Step28: Matrix multiplication
Step29: Linspace and Arange
Step30: In the arange function, by contrast, one defines the half-open interval [start, end) and the step taken between one element and the next.
Step31: Confirm that the main difference between the two, which can be seen in the examples above, is that
Step32: Note that the matrix r is a matrix where each element is its own row coordinate, and the matrix c is a matrix where each element is
Step33: Or the difference function between the row and column coordinates, $f(r,c) = r - c$
Step34: Or the function $f(r,c) = (r + c) \% 2$, where % is the modulo operator. This function returns 1 if the sum of the coordinates is odd and 0 otherwise.
Step35: Or the function of a straight line, $f(r,c) = (r = \lfloor c/2 \rfloor)$
Step36: Or the paraboloid function given by the sum of the squares of the coordinates, $f(r,c) = r^2 + c^2$
Step37: Or the function of a circle of radius 4 centred at (0,0), $f(r,c) = (r^2 + c^2 < 4^2)$
Step38: The indices operator in synthetic image examples
Step39: Sum
Step40: Subtraction
Step41: Checkerboard
Step42: Straight line
Step43: Parabola
Step44: Circle
Step45: Meshgrid
Step46: Generating the vectors with linspace
Step47: Using the two vectors generated by linspace in meshgrid
Step48: We can now generate a matrix or image that is a function of these values, for example their product
Step49: Example generating the sinc image with meshgrid
Step50: Example generating the sinc image with indices
Step51: Verifying that the two functions are equal
Step52: For advanced users
Step53: One-dimensional example - replicating the row
Step54: Two-dimensional example - replicating the columns
Step55: Two-dimensional example - replicating the rows
Step56: Two-dimensional example - replicating the rows and columns simultaneously
Step57: Resize
Step58: Clip
Step59: Floating-point example
Step60: Formatting arrays for printing
Step61: The number of decimal places can be reduced and exponential notation suppressed by using
Step62: Printing binary arrays
Step63: To make these arrays easier to read, the values can be converted to integers using
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array( [2,3,4,-1,-2] )
print('Dimensões: a.shape=', a.shape )
print('Tipo dos elementos: a.dtype=', a.dtype )
print('Imprimindo o array completo:\n a=',a )
b = np.array( [ [1.5, 2.3, 5.2],
[4.2, 5.6, 4.4] ] )
print('Um array bidimensional, dimensões:(b.shape=', b.shape )
print('Tipo dos elementos: b.dtype', b.dtype )
print('Número de colunas:', b.shape[-1] )
print('Número de linhas:', b.shape[-2] )
print('Elementos, b=\n', b )
d = np.zeros( (2,4) )
print('Array de 0s: \n', d )
d = np.ones( (3,2,5), dtype='int16' )
print('\n\nArray de 1s: \n', d )
d = np.empty( (2,3), 'bool' )
print('Array não inicializado (vazio):\n', d )
print('np.arange( 10) = ', np.arange(10) )
print('np.arange( 3, 8) = ', np.arange(3,8) )
print('np.arange( 0, 2, 0.5) = ', np.arange(0, 2, 0.5) )
print('np.linspace( 0, 2, 5 ) = ', np.linspace( 0, 2, 5 ) )
a = np.arange(20) # a is a vector of length 20
print('a = \n', a )
a = np.arange(20)
print('Resultado da operação a[1:15:2]' )
print(a[1:15:2] )
a = np.arange(20)
print('Resultado da operação a[1:-1:2]' )
print(a[1:-1:2] )
print('Note que o fatiamento termina antes do último elemento (-1)' )
a = np.arange(20)
print('Resultado da operação a[-3:2:-1]' )
print(a[-3:2:-1] )
print( 'Note que o fatiamento retorna o array invertido' )
print( 'Antepenúltimo até o terceiro elemento com step = -1' )
a = np.arange(20)
print('Resultado da operação a[:15:2]' )
print(a[:15:2] )
print('Note que o fatiamento inicia do primeiro elemento' )
print('Primeiro elemento até antes do 15o com passo duplo' )
a = np.arange(20)
print('Resultado da operação a[1::2]' )
print(a[1::2] )
print('Note que o fatiamento termina último elemento' )
print('Primeiro elemento até o último com passo duplo' )
a = np.arange(20)
print('Resultado da operação a[1:15]' )
print(a[1:15] )
print('Note que o fatiamento tem step unitário' )
print('Primeiro elemento até antes do 15o com passo um' )
a = np.arange(20)
print('Resultado da operação a[:]' )
print(a[:] )
print('Todos os elementos com passo unitário' )
a = np.arange(20) # a is a one-dimensional array of 20 elements
print('a = \n', a )
a = a.reshape(4,5) # a is now a 4x5 array (4 rows by 5 columns)
print('a.reshape(4,5) = \n', a )
print('A segunda linha do array: \n', a[1,:] ) # the second row is index 1
print(' A primeira coluna do array: \n', a[:,0] ) # the first column corresponds to index 0
print('Acessando as linhas do array de 2 em 2 começando pelo índice 0: \n', a[0::2,:] )
print(' Acessando as linhas e colunas do array de 2 em 2 começando \
pela linha 0 e coluna 1: \n', a[0::2,1::2] )
b = a[-1:-3:-1,:]
print('Acesso as duas últimas linhas do array em ordem reversa, \
b = a[-1:-3:-1,:] = \n',a[-1:-3:-1,:] )
print('Acesso elemento na última linha e coluna do array, a[-1,-1] =', a[-1,-1] )
c = a[::-1,:]
print('Invertendo a ordem das linhas do array: c = a[::-1,:] = \n', a[::-1,:] )
a = np.arange(6)
b = a
print("a =\n",a )
print("b =\n",b )
b.shape = (2,3) # changing the shape of b,
print("\na shape =",a.shape ) # changes the shape of a
b[0,0] = -1 # changing the contents of b
print("a =\n",a ) # changes the contents of a
print("\nid de a = ",id(a) ) # id is a unique object identifier
print("id de b = ",id(b) ) # a and b have the same id
print('np.may_share_memory(a,b):',np.may_share_memory(a,b) )
def cc(a):
return a
b = cc(a)
print("id de a = ",id(a) )
print("id de b = ",id(b) )
print('np.may_share_memory(a,b):',np.may_share_memory(a,b) )
a = np.arange(30)
print("a =\n", a )
b = a.reshape( (5, 6))
print("b =\n", b )
b[:, 0] = -1
print("a =\n", a )
c = a.reshape( (2, 3, 5) )
print("c =\n", c )
print('c.base is a:',c.base is a )
print('np.may_share_memory(a,c):',np.may_share_memory(a,c) )
a = np.zeros( (5, 6))
print('%s %s %s %s %s' % (type(a), np.shape(a), a.dtype, a.min(), a.max()) )
b = a[::2,::2]
print('%s %s %s %s %s' % (type(b), np.shape(b), b.dtype, b.min(), b.max()) )
b[:,:] = 1.
print('b=\n', b )
print('a=\n', a )
print('b.base is a:',b.base is a )
print('np.may_share_memory(a,b):',np.may_share_memory(a,b) )
a = np.arange(25).reshape((5,5))
print('a=\n',a )
b = a[:,0]
print('b=',b )
b[:] = np.arange(5)
print('b=',b )
print('a=\n',a )
a = np.arange(24).reshape((4,6))
print('a:\n',a )
print('a.T:\n',a.T )
print('np.may_share_memory(a,a.T):',np.may_share_memory(a,a.T) )
a = np.arange(24).reshape((4,6))
print('a:\n',a )
av = a.ravel()
print('av.shape:',av.shape )
print('av:\n',av )
print('np.may_share_memory(a,av):',np.may_share_memory(a,av) )
b = a.copy()
c = np.array(a, copy=True)
print("id de a = ",id(a) )
print("id de b = ",id(b) )
print("id de c = ",id(c) )
a = np.arange(20).reshape(5,4)
b = 2 * np.ones((5,4))
c = np.arange(12,0,-1).reshape(4,3)
print('a=\n', a )
print('b=\n', b )
print('c=\n', c )
b5 = 5 * b
print('b5=\n', b5 )
amb = a + b
print('amb=\n', amb )
at = a.T
print('a.shape=',a.shape )
print('a.T.shape=',a.T.shape )
print('a=\n', a )
print('at=\n', at )
ac = a.dot(c)
print('a.shape:',a.shape )
print('c.shape:',c.shape )
print('a=\n',a )
print('c=\n',c )
print('ac=\n', ac )
print('ac.shape:',ac.shape )
# generate a numpy.array of 10 elements, linearly spaced between 0 and 1
print(np.linspace(0, 1.0, num=10).round(2) )
# generate a numpy.array linearly spaced between 0 and 1 with step 0.1
print(np.arange(0, 1.0, 0.1) )
r,c = np.indices( (5, 10) )
print('r=\n', r )
print('c=\n', c )
f = r + c
print('f=\n', f )
f = r - c
print('f=\n', f )
f = (r + c) % 2
print('f=\n', f )
f = (r == c//2)
print('f=\n', f )
f = r**2 + c**2
print('f=\n', f )
f = ((r**2 + c**2) < 4**2)
print('f=\n', f * 1 )
# Directive to show plots inline in the notebook
%matplotlib inline
import matplotlib.pylab as plt
r,c = np.indices( (200, 300) )
plt.subplot(121)
plt.imshow(r,cmap = 'gray')
plt.title("linhas")
plt.axis('off')
plt.subplot(122)
plt.imshow(c,cmap = 'gray')
plt.axis('off')
plt.title("colunas");
f = r + c
plt.imshow(f,cmap = 'gray')
plt.title("r+c")
plt.axis("off")
f = r - c
plt.imshow(f,cmap = 'gray')
plt.title("r-c")
plt.axis("off")
f = (r//8 + c//8) % 2
plt.imshow(f,cmap = 'gray')
plt.title("(r+c)%2")
plt.axis("off")
f = (r == c//2)
plt.imshow(f,cmap = 'gray')
plt.title('r == c//2')
plt.axis("off")
f = r**2 + c**2
plt.imshow(f,cmap = 'gray')
plt.title('r**2 + c**2')
plt.axis("off")
f = (((r-100)**2 + (c-100)**2) < 19**2)
plt.imshow(f,cmap = 'gray')
plt.title('((r-100)**2 + (c-100)**2) < 19**2')
plt.axis("off")
import numpy as np
r, c = np.meshgrid( np.array([-1.5, -1.0, -0.5, 0.0, 0.5]),
np.array([-20, -10, 0, 10, 20, 30]), indexing='ij')
print('r=\n',r )
print('c=\n',c )
rows = np.linspace(-1.5, 0.5, 5)
cols = np.linspace(-20, 30, 6)
print('rows:', rows )
print('cols:', cols )
r, c = np.meshgrid(rows, cols, indexing='ij')
print('r = \n', r )
print('c = \n', c )
f = r * c
print('f=\n', f )
e = np.spacing(1) # epsilon to avoid 0/0
rows = np.linspace(-5.0, 5.0, 150) # row coordinates
cols = np.linspace(-6.0, 6.0, 180) # column coordinates
r, c = np.meshgrid(rows, cols, indexing='ij') # numpy-style coordinate grid
z = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e) # epsilon is added to avoid 0/0
plt.imshow(z,cmap = 'gray')
plt.title('Função sinc: sen(r² + c²)/(r²+c²) em duas dimensões')
plt.axis("off")
n_rows = len(rows)
n_cols = len(cols)
r,c = np.indices((n_rows,n_cols))
r = -5. + 10.*r.astype(float)/(n_rows-1)
c = -6. + 12.*c.astype(float)/(n_cols-1)
zi = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e) # epsilon is added to avoid 0/0
plt.imshow(zi,cmap = 'gray')
plt.title('Função sinc: sin(r² + c²)/(r²+c²) em duas dimensões')
plt.axis("off")
print('Máxima diferença entre z e zi?', abs(z - zi).max() )
a = np.array([0, 1, 2])
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,2): \n',np.tile(a,2) )
a = np.array([0, 1, 2])
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,(2,1)): \n',np.tile(a,(2,1)) )
a = np.arange(4).reshape(2,2)
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,2): \n',np.tile(a,2) )
a = np.arange(4).reshape(2,2)
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,(3,1)): \n',np.tile(a,(3,1)) )
a = np.arange(4).reshape(2,2)
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,(2,2)): \n',np.tile(a,(2,2)) )
a = np.array([[0,1],[2,3]])
print('a = \n', a )
print()
print('np.resize(a,(1,7)) = \n', np.resize(a,(1,7)) )
print()
print('np.resize(a,(2,5)) = \n', np.resize(a,(2,5)) )
a = np.array([11,1,2,3,4,5,12,-3,-4,7,4])
print('a = ',a )
print('np.clip(a,0,10) = ', np.clip(a,0,10) )
a = np.arange(10).astype(np.int)
print('a=',a )
print('np.clip(a,2.5,7.5)=',np.clip(a,2.5,7.5) )
A = np.exp(np.linspace(0.1,10,32)).reshape(4,8)/3000.
print('A: \n', A )
np.set_printoptions(suppress=True, precision=3)
print('A: \n', A )
A = np.random.rand(5,10) > 0.5
print('A = \n', A )
print ('A = \n', A.astype(int))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Circular area-sink and well
Step2: Two layers
|
<ASSISTANT_TASK:>
Python Code:
N = 0.001
R = 100
ml = ModelMaq(kaq=5, z=[10, 0], Saq=2e-4, tmin=1e-3, tmax=1e4)
ca = CircAreaSink(ml, 0, 0, 100, tsandN=[(0, 0.001)])
ml.solve()
ml.xsection(-200, 200, 0, 0, t=[0.1, 1, 10], figsize=(12, 4), sstart=-200)
x = np.linspace(-200, 200, 200)
qx = np.zeros_like(x)
for t in [0.1, 1, 10]:
for i in range(len(x)):
qx[i], qy = ml.disvec(x[i], 1e-6, t)
plt.plot(x, qx, label='time is ' + str(t))
qxb = N * np.pi * R ** 2 / (2 * np.pi * R)
plt.axhline(qxb, color='r', ls='--')
plt.axhline(-qxb, color='r', ls='--')
plt.xlabel('x (m)')
plt.ylabel('Qx (m^2/d)')
plt.legend(loc='best');
N = 0.001
R = 100
Q = N * np.pi * R ** 2
ml = ModelMaq(kaq=5, z=[10, 0], Saq=2e-4, tmin=1e-3, tmax=1e4, M=10)
ca = CircAreaSink(ml, -200, 0, 100, tsandN=[(0, 0.001)])
w = Well(ml, 200, 0, rw=0.1, tsandQ=[(0, Q)])
ml.solve()
ml.xsection(-400, 300, 0, 0, t=[0.1, 1, 10, 100, 1000], figsize=(12, 4), sstart=-400)
t = np.logspace(-3, 4, 100)
h = ml.head(-200, 0, t)
plt.semilogx(t, h[0])
plt.xlabel('time')
plt.ylabel('head')
plt.title('head at center of area-sink');
N = 0.001
R = 100
Q = N * np.pi * R ** 2
ml = ModelMaq(kaq=5, z=[10, 0], Saq=2e-4, tmin=10, tmax=100, M=10)
ca = CircAreaSink(ml, -200, 0, 100, tsandN=[(0, 0.001)])
w = Well(ml, 200, 0, rw=0.1, tsandQ=[(0, Q)])
ml.solve()
ml.contour([-300, 300, -200, 200], ngr=40, t=20)
N = 0.001
R = 100
Q = N * np.pi * R ** 2
ml = ModelMaq(kaq=[5, 20], z=[20, 12, 10, 0], c=[1000], Saq=[2e-4, 1e-4], tmin=1e-3, tmax=1e4, M=10)
ca = CircAreaSink(ml, 0, 0, 100, tsandN=[(0, 0.001)])
w = Well(ml, 0, 0, rw=0.1, tsandQ=[(0, Q)], layers=1)
ml.solve()
ml.xsection(-200, 200, 0, 0, t=[0.1, 100], layers=[0, 1], sstart=-200)
ml.xsection(-500, 500, 0, 0, t=[0.1, 100, 1000], layers=[0, 1], sstart=-500)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Biasing Moves
Step2: Make 'moves with weights' dictionaries specialized for each parameter type
Step3: Make master dict-of-dicts so that parameter type can choose the correct movesWithWeights dict
Step4: Test to see if actual picks by PickMoveItemWithProb match target probabilities
Step5: Test GenerateMoveTree
Step6: Write a bunch of GenerateMoveTree moves to a file
|
<ASSISTANT_TASK:>
Python Code:
# generic scientific/ipython header
from __future__ import print_function
from __future__ import division
import os, sys
import copy
import numpy as np
# Parent dictionary of in-common Movetypes-with-Odds to be used as the basis for each parameter's moves
parentMovesWithOdds = {}
parentMovesWithOdds['atmOrBond'] = [ ('atom',10), ('bond',1)]
parentMovesWithOdds['actionChoices'] = [('add',1), ('swap',1), ('delete',1), ('joinAtom',1)]
parentMovesWithOdds['ORorANDType'] = [('ORtype',3), ('ANDtype',1)]
def movesWithWeightsFromOdds( MovesWithOdds):
'''Processes a dictionary of movesWithOdds (lists of string/integer tuples)
into a dictionary of movesWithWeights usable to perform weighted
random choices with numpy's random.choice() function.
Argument: a MovesWithOdds dictionary of lists of string/integer tuples
Returns: a MovesWithWeights dictionary of pairs of a moveType-list with a
probabilites-list, the latter used by numpy's random.choice() function.'''
movesWithWeights = {}
for key in MovesWithOdds.keys():
moves = [ item[0] for item in MovesWithOdds[key] ]
odds = [ item[1] for item in MovesWithOdds[key] ]
weights = odds/np.sum(odds)
#print( key, moves, odds, weights)
movesWithWeights[key] = ( moves, weights)
return movesWithWeights
# make 'moves with weights' dictionary for vdW
movesWithOddsVdW = copy.deepcopy( parentMovesWithOdds)
movesWithOddsVdW['atomLabel'] = [ ('unIndexed',10), ('atom1',1)]
movesWithOddsVdW['bondLabel'] = [ ('unIndexed',1)]
movesWithWeightsVdW = movesWithWeightsFromOdds( movesWithOddsVdW)
# make 'moves with weights' dictionary for bonds
movesWithOddsBonds = copy.deepcopy( parentMovesWithOdds)
movesWithOddsBonds['atomLabel'] = [ ('unIndexed',10), ('atom1',1),('atom2',1)]
movesWithOddsBonds['bondLabel'] = [ ('unIndexed',10), ('bond1',1)]
movesWithWeightsBonds = movesWithWeightsFromOdds( movesWithOddsBonds)
# make 'moves with weights' dictionary for angles
movesWithOddsAngles = copy.deepcopy( parentMovesWithOdds)
movesWithOddsAngles['atomLabel'] = [ ('unIndexed',20), ('atom1',10),('atom2',1), ('atom3',10)]
movesWithOddsAngles['bondLabel'] = [ ('unIndexed',10), ('bond1',1),('bond2',1)]
movesWithWeightsAngles = movesWithWeightsFromOdds( movesWithOddsAngles)
# make 'moves with weights' dictionary for torsions
movesWithOddsTorsions = copy.deepcopy( parentMovesWithOdds)
movesWithOddsTorsions['atomLabel'] = [ ('unIndexed',20), ('atom1',10),('atom2',1), ('atom3',1),('atom4',10)]
movesWithOddsTorsions['bondLabel'] = [ ('unIndexed',20), ('bond1',10),('bond2',1), ('bond3',10)]
movesWithWeightsTorsions = movesWithWeightsFromOdds( movesWithOddsTorsions)
# make 'moves with weights' dictionary for impropers
movesWithOddsImpropers = copy.deepcopy( parentMovesWithOdds)
movesWithOddsImpropers['atomLabel'] = [ ('unIndexed',20), ('atom1',10),('atom2',1), ('atom3',10),('atom4',10)]
movesWithOddsImpropers['bondLabel'] = [ ('unIndexed',20), ('bond1',1),('bond2',1), ('bond3',1)]
movesWithWeightsImpropers = movesWithWeightsFromOdds( movesWithOddsImpropers)
testWeights = movesWithWeightsImpropers
for key in testWeights.keys():
print( key, testWeights[key][0], testWeights[key][1])
# 'VdW', 'Bond', 'Angle', 'Torsion', 'Improper'
movesWithWeightsMaster = {}
movesWithWeightsMaster['VdW'] = movesWithWeightsVdW
movesWithWeightsMaster['Bond'] = movesWithWeightsBonds
movesWithWeightsMaster['Angle'] = movesWithWeightsAngles
movesWithWeightsMaster['Torsion'] = movesWithWeightsTorsions
movesWithWeightsMaster['Improper'] = movesWithWeightsImpropers
def PickMoveItemWithProb( moveType, moveWithWeights):
'''Picks a moveItem based on a moveType and a dictionary of moveTypes with associated probabilities
Arguments:
moveType: string corresponding to a key in the moveWithWeights dictionary, e.g. atomTor
moveWithWeights: a dictionary based on moveType keys which each point to a list of probabilites
associated with the position in the list
Returns:
the randomly-chosen position in the list, based on the probability, together with the probability'''
listOfIndexes = range(0, len( moveWithWeights[moveType][1]) )
listIndex = np.random.choice(listOfIndexes, p= moveWithWeights[moveType][1])
return moveWithWeights[moveType][0][listIndex], moveWithWeights[moveType][1][listIndex]
# NBVAL_SKIP
movesWithWeightsTest = movesWithWeightsMaster['Torsion']
key = np.random.choice( movesWithWeightsTest.keys() )
nSamples = 10000
print( nSamples, 'samples on moveType', key)
print( key, ' Moves: ', movesWithWeightsTest[key][0])
print( key, ' Weights:', movesWithWeightsTest[key][1])
counts = [0]*len(movesWithWeightsTest[key][1])
for turn in range(0, nSamples):
choice, prob = PickMoveItemWithProb( key, movesWithWeightsTest)
idx = movesWithWeightsTest[key][0].index(choice)
counts[ idx] += 1
print( key, ' Counts: ', counts)
def PropagateMoveTree( moveType, movesWithWeights, accumMoves, accumProb):
'''Expands a moveList by the input moveType, randomly picking a move
of that type from the list in movesWithWeights, biased by the weight
(probability) also from movesWithWeights. It incorporates that probability
into the accumulated probability that was passed it with the existing list
Arguments:
moveType: a string which is a key in the movesWithWeights dictionary
movesWithWeights: a dictionary of a list of allowed moves of a certain
moveType paired with a list of a probability associated with each move.
accumMoves: the list of moves (being strings) to be expanded by this function.
accumProb: the accumulated probability so far of the moves in accumMoves
Returns:
accumMoves: the list of moves (being strings) expanded by this function
accumProb: the revised accumulated probability of the moves in accumMoves
'''
choice, prob = PickMoveItemWithProb( moveType, movesWithWeights)
#print( 'before', choice, prob, accumProb)
accumMoves.append( choice)
accumProb *= prob
#print( 'after', choice, prob, accumProb)
return accumMoves, accumProb
def GenerateMoveTree( parameterType):
'''Generates a list of micro-moves describing how to attempt to change the chemical
graph associated with a parameter type. Each micro-move makes a weighted random
decision on some aspect of the overall move, which will be made by effecting each
of the micro-moves in the list.
Argument:
parameterType: this string refers to a force-field parameter type (e.g. 'Torsion')
and determines what kind of moveTypes, moves, and weights will be used in
weight random micro-moves
Returns:
moveTree: the list of micro-moves describing how to attempt to change the chemical
graph associated with a parameter type.
cumProb: the weights-biased probability of making the overall move, i.e. effecting
the list of micro-moves.'''
cumProb = 1.0
moveTree = []
paramType = parameterType
movesWithWeights = movesWithWeightsMaster[paramType]
moveFlow = ['actionChoices', 'atmOrBond', 'whichLabel', 'ORorANDType']
for stage in moveFlow:
if stage=='whichLabel' and moveTree[-1]=='atom':
moveTree, cumProb = PropagateMoveTree( 'atomLabel', movesWithWeights, moveTree, cumProb)
elif stage=='whichLabel' and moveTree[-1]=='bond':
moveTree, cumProb = PropagateMoveTree( 'bondLabel', movesWithWeights, moveTree, cumProb)
else:
moveTree, cumProb = PropagateMoveTree( stage, movesWithWeights, moveTree, cumProb)
return moveTree, cumProb
parameterType = 'Torsion'
nSamples = 10
moveTree, cumProb = GenerateMoveTree( parameterType)
for i in range(0,nSamples):
print( GenerateMoveTree( parameterType) )
parameterType = 'Torsion'
nSamples = 10000
ofs = open('moveTrees.'+parameterType+'.txt','w')
moveTree, cumProb = GenerateMoveTree( parameterType)
for i in range(0,nSamples):
moveTree, prob = GenerateMoveTree( parameterType)
ofs.write( '%.6f ' % prob )
for microMove in moveTree:
ofs.write( '%s ' % microMove )
ofs.write( '\n' )
ofs.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we create a network to implement the policy. We begin with two convolutional layers to process the image.
Step2: We will optimize the policy using the Asynchronous Advantage Actor Critic (A3C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate.
Step3: Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps.
Step4: Let's watch it play and see how it does!
|
<ASSISTANT_TASK:>
Python Code:
import deepchem as dc
import numpy as np
class PongEnv(dc.rl.GymEnvironment):
def __init__(self):
super(PongEnv, self).__init__('Pong-v0')
self._state_shape = (80, 80)
@property
def state(self):
# Crop everything outside the play area, reduce the image size,
# and convert it to black and white.
cropped = np.array(self._state)[34:194, :, :]
reduced = cropped[0:-1:2, 0:-1:2]
grayscale = np.sum(reduced, axis=2)
bw = np.zeros(grayscale.shape)
bw[grayscale != 233] = 1
return bw
def __deepcopy__(self, memo):
return PongEnv()
env = PongEnv()
import deepchem.models.tensorgraph.layers as layers
import tensorflow as tf
class PongPolicy(dc.rl.Policy):
def create_layers(self, state, **kwargs):
conv1 = layers.Conv2D(num_outputs=16, in_layers=state, kernel_size=8, stride=4)
conv2 = layers.Conv2D(num_outputs=32, in_layers=conv1, kernel_size=4, stride=2)
dense = layers.Dense(out_channels=256, in_layers=layers.Flatten(in_layers=conv2), activation_fn=tf.nn.relu)
gru = layers.GRU(n_hidden=16, batch_size=1, in_layers=layers.Reshape(shape=(1, -1, 256), in_layers=dense))
concat = layers.Concat(in_layers=[dense, layers.Reshape(shape=(-1, 16), in_layers=gru)])
action_prob = layers.Dense(out_channels=env.n_actions, activation_fn=tf.nn.softmax, in_layers=concat)
value = layers.Dense(out_channels=1, in_layers=concat)
return {'action_prob':action_prob, 'value':value}
policy = PongPolicy()
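# Illustrative only (not DeepChem's internals): the discounted return that the
# critic's value head is trained to predict, for rewards [0, 0, 1] and an
# assumed discount gamma = 0.99:
rewards, gamma = [0.0, 0.0, 1.0], 0.99
print([sum(r * gamma**k for k, r in enumerate(rewards[t:])) for t in range(len(rewards))])
# -> [0.9801, 0.99, 1.0]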
from deepchem.models.tensorgraph.optimizers import Adam
a3c = dc.rl.A3C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002))
a3c.fit(1000)
# Increase num_rounds below depending on how much patience you have
# (each round is one million training steps)
million = 1e6
num_rounds = 0
for round in range(num_rounds):
a3c.fit(million, restore=True)
from datetime import datetime
def render_env(env):
try:
env.env.render()
except Exception as e:
print(e)
a3c.restore()
env.reset()
start = datetime.now()
while (datetime.now() - start).total_seconds() < 120:
render_env(env)
env.step(a3c.select_action(env.state))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a bad approach because
Step2: uses a for loop to repeat operations
Step3: Let's trace the execution
Step4: finding the length of a string is such a common operation that Python actually has a built-in function to do it called len
Step5: for loop is a way to do operations many times, a list is a way to store many values
Step6: Loop challenge 2
Step7: Loop challenge 3
|
<ASSISTANT_TASK:>
Python Code:
#example task: print each character in a word
#one way to do this is to use a series of print statements
word = 'lead'
print(word[0])
print(word[1])
print(word[2])
print(word[3])
word = 'tin'
print(word[0])
print(word[1])
print(word[2])
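# the next line raises an IndexError: 'tin' has only 3 characters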
print(word[3])
#better approach
word = 'lead'
for char in word:
print (char)
#better
word = 'supercalifragilisticexpialidocious'
for char in word:
print (char)
length = 0
for vowel in 'aeiou':
length = length + 1
#print(vowel, length)
print('There are', length, 'vowels')
#note loop var still exists after loop
letter = 'z'
for letter in 'abc':
print(letter)
print('after the loop, letter is', letter)
print(len('aeiou'))
# solution
for i in range(1, 4):
print(i)
result = 1
for i in range(0, 3):
result = result * 5
print(result)
for i in range(0,3):
print(i)
newstring = ''
oldstring = 'Newton'
length_old = len(oldstring)
for char_index in range(length_old):
newstring = newstring + oldstring[length_old - char_index - 1]
print(newstring)
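# Quick check (illustrative): slicing reverses a string in one step
print(oldstring[::-1])  # notweN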
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can compute the eigenvectors and eigenvalues using the NumPy function linalg.eig
Step2: The matrix A is non-symmetric, hence it is no surprise that the eigenvalues and eigenvectors are complex. The $i$th column of evectors (evectors[:, i]) is the eigenvector associated with the $i$th eigenvalue in evalues.
Step3: Note that the matrix Lambda ($\boldsymbol{\Lambda}$) is diagonal, and the diagonal entries are the eigenvalues.
Step4: The above matrix is diagonal, and the diagonal entries are the eigenvalues.
Step5: After just 10 iterations, the estimated eigenvalue is very accurate.
Step6: It is clear that in this case we have approached the second largest eigenvalue.
Step7: Sheep flock example
Step8: Next, we use power iterations
|
<ASSISTANT_TASK:>
Python Code:
# Import NumPy and seed random number generator to make generated matrices deterministic
import numpy as np
np.random.seed(2)
# Create a matrix with random entries
A = np.random.rand(4, 4)
print(A)
# Compute eigenvectors of A
evalues, evectors = np.linalg.eig(A)
print("Eigenvalues: {}".format(evalues))
print("Eigenvectors: {}".format(evectors))
Lambda = np.linalg.inv(evectors).dot(A.dot(evectors))
print(Lambda)
# Create a symmetric matrix
S = A + A.T
# Compute eigenvectors of S and print eigenvalues
lmbda, U = np.linalg.eig(S)
print(lmbda)
# R matrix
R = U.T
# Diagonalise S
Lambda = R.dot(S.dot(R.T))
print(Lambda)
# Create starting vector
x0 = np.random.rand(S.shape[0])
# Perform power iteration
for i in range(10):
x0 = S.dot(x0)
x0 = x0/np.linalg.norm(x0)
x1 = S.dot(x0)
# Get maximum exact eigenvalue (absolute value)
eval_max_index = abs(lmbda).argmax()
max_eig = lmbda[eval_max_index]
# Print estimated max eigenvalue and error
max_eig_est = np.sign(x1.dot(x0))*np.linalg.norm(x1)/np.linalg.norm(x0)
print("Estimate of largest eigenvalue: {}".format(max_eig_est))
print("Error: {}".format(abs(max_eig - max_eig_est)))
# Create starting vector
x0 = np.random.rand(S.shape[0])
# Get eigenvector associated with maximum eigenvalue
eval_max_index = abs(lmbda).argmax()
evec_max = U[:,eval_max_index]
# Make starting vector orthogonal to eigenvector associated with maximum
x0 = x0 - x0.dot(evec_max)*evec_max
# Perform power iteration
for i in range(10):
x0 = S.dot(x0)
x0 = x0/np.linalg.norm(x0)
x1 = S.dot(x0)
# Print estimated max eigenvalue and error
max_eig_est = np.sign(x1.dot(x0))*np.linalg.norm(x1)/np.linalg.norm(x0)
print("Estimate of largest eigenvalue: {}".format(max_eig_est))
print("Error: {}".format(abs(max_eig - max_eig_est)))
# Get second largest eigenvalue
print("Second largest eigenvalue (exact): {}".format(lmbda[np.argsort(abs(lmbda))[-2]]))
rayleigh_quotient = x1.dot(S).dot(x1)/(x1.dot(x1))
print("Rayleigh_quotient: {}".format(rayleigh_quotient))
A = np.array([[0, 2, 0.9663], [0.545, 0 ,0], [0, 0.8, 0]])
# Create starting vector
x0 = np.random.rand(A.shape[0])
# Perform power iteration
for i in range(10):
x0 = A.dot(x0)
x0 = x0/np.linalg.norm(x0)
# Normalise eigenvector using l1 norm
x0 = x0/np.linalg.norm(x0, 1)
# Print estimated eigenvector associated with largest eigenvalue
print("Estimate of eigenvector for the largest eigenvalue: {}".format(x0))
# Print estimated max eigenvalue (using Rayleigh quotient)
print("Estimate of largest eigenvalue: {}".format(x0.dot(A).dot(x0)/x0.dot(x0)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can inspect the contents of the data
Step2: All columns except the last one are the input parameters of the system, obtained from real wines through physicochemical tests. The last column shows the output value, based on sensory tests.
Step3: 3) Linear regression training using gradient descent
Step4: 3.2) Linear regression implementation in Python code
Step5: let's take a look at the mean square error w.r.t. each iteration, and the final one.
Step6: As the value to be estimated is a subjective metric between 3 and 8 we would be most interested in knowing what is the average deviation from the output value that our hypotheses values get. We can compute this with the L1 error
Step7: In this particular case we could consider that the outputs should be categorized after the regression. Of course, we could have approached the problem as a classification one, but that is for the next class...
Step8: A bit of improvement over standard L1 error.
Step9: Also, we can see in a scatter plot what the input vs. output looks like
Step10: Our regressor is clearly under-shooting the hypothesized value, as it estimates values around 6 to 7 for known values of 8
Step11: As a reminder, here we want to fit the following
Step12: We can observe the estimated coefficients and the intercept (i.e. the $\theta_0$ parameter)
Step13: Let's use these parameters to predict the outputs on our input set
Step14: And we can compute the same metrics, which are consistent with our implementation above
Step15: 4) Train versus test datasets
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd #used for reading/writing data
import numpy as np #numeric computing library
from matplotlib import pyplot as plt #used for plotting
import sklearn #machine learning library
wineData = pd.read_csv('data/winequality/winequality-red.csv', sep=';')
wineData.head(10)
wineData.describe()
wineDataInput = wineData.drop('quality', axis=1)
wineDataOutput = wineData['quality']
#prepare the data
from sklearn import preprocessing
#adapt output values to an array form
wineDataOutputArray = np.asarray(wineDataOutput.astype(float))
wineDataOutputArray = wineDataOutputArray[:, np.newaxis]
#adapt input features as an array form, with an extra '1' for \theta_0 term (bias term)
wineDataInputArray_original = np.asarray(wineDataInput.astype(float))
### comment the next line out to test regression on unnormalized data
wineDataInputArray = preprocessing.scale(wineDataInputArray_original)
wineDataInputArray = np.hstack((np.ones((wineDataInputArray.shape[0], 1)), wineDataInputArray))
#visualization of the normalized and unnormalized
f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col', sharey='row')
f.set_figheight(7)
f.set_figwidth(15)
ax1.plot(wineDataInputArray_original[0:20, 0])
ax2.plot(wineDataInputArray_original[0:20, 1])
ax3.plot(wineDataInputArray_original[0:20, 2])
ax4.plot(wineDataInputArray[0:20, 1]) #note we added a column of 1's
ax5.plot(wineDataInputArray[0:20, 2])
ax6.plot(wineDataInputArray[0:20, 3])
%time
#Initialize theta to zeros
learning_rate = alpha = 0.05
iters = 1000
def mean_squared_error(theta,X,y):
m = y.shape[0]
return 1. / (2. * m) * np.sum((np.dot(X, theta) - y) ** 2.)
def l1_error(theta, X, y):
m = y.shape[0]
return 1. / m * np.sum(abs((np.dot(X, theta) - y)))
def gradient_update(theta,X,y):
m = y.shape[0]
return 1. / m * np.dot(X.T, (np.dot(X,theta) - y))
def gradient_descent(X,y,alpha,iters):
m = y.shape[0]
all_cost = []
#Initialize theta to zeros
all_theta = [np.zeros((X.shape[1],1))] #array of vectors
for i in range(iters):
cost = mean_squared_error(all_theta[-1], X, y)
all_cost.append(cost)
all_theta.append(all_theta[-1] - float(alpha) * gradient_update(all_theta[-1], X, y))
return all_theta,all_cost
#Perform linear regression via gradient descent
all_theta, all_cost = gradient_descent(wineDataInputArray, wineDataOutputArray, alpha, iters)
plt.plot(all_cost[0:100])
mean_squared_error(all_theta[-1], wineDataInputArray, wineDataOutputArray)
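# Cross-check (illustrative): the closed-form normal-equation solution,
# theta = (X^T X)^{-1} X^T y, should give a cost close to the one reached
# by gradient descent above
XtX = np.dot(wineDataInputArray.T, wineDataInputArray)
Xty = np.dot(wineDataInputArray.T, wineDataOutputArray)
theta_closed = np.linalg.solve(XtX, Xty)
mean_squared_error(theta_closed, wineDataInputArray, wineDataOutputArray)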
l1_error(all_theta[-1], wineDataInputArray, wineDataOutputArray)
def l1_error_int(theta, X, y):
m = y.shape[0]
return 1. / m * np.sum(abs((np.rint(np.dot(X, theta)) - y)))
l1_error_int(all_theta[-1], wineDataInputArray, wineDataOutputArray)
aa = list(map(list, zip(*all_theta)))
f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col', sharey='row')
f.set_figheight(7)
f.set_figwidth(15)
ax1.plot(aa[1])
ax2.plot(aa[2])
ax3.plot(aa[3])
ax4.plot(aa[4])
ax5.plot(aa[5])
ax6.plot(aa[6])
plt.scatter(wineDataOutputArray, np.dot(wineDataInputArray, all_theta[-1]), color='black')
plt.ylabel('Hypothesis regression')
plt.xlabel('Known output value')
from sklearn.linear_model import LinearRegression
lm = LinearRegression(normalize=True)
%time
lm.fit(wineDataInput, wineDataOutput)
lm.intercept_
lm.coef_
predictedQuality = lm.predict(wineDataInput)
predictedQuality[0:10]
plt.scatter(wineDataOutput, predictedQuality, color='black')
plt.ylabel('Hypothesis regression')
plt.xlabel('Known output value')
print(lm.score(wineDataInput, wineDataOutput))
print(1. / wineDataOutput.shape[0] * np.sum(abs(predictedQuality - wineDataOutput)))
from sklearn.model_selection import train_test_split
wineDataInput_train, wineDataInput_test, wineDataOutput_train, wineDataOutput_test = train_test_split(wineDataInput, wineDataOutput, test_size=0.3, random_state=0)
lm2 = LinearRegression(normalize=True)
lm2.fit(wineDataInput_train, wineDataOutput_train)
predicted = lm2.predict(wineDataInput_test)
print(lm2.score(wineDataInput_test, wineDataOutput_test))
print(1. / wineDataOutput_test.shape[0] * np.sum(abs(predicted - wineDataOutput_test)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the flow-direction and elevation maps
Step2: Delineating the basin and preparing it
Step3: Physical parameters.
Step4: Preparing the horizontal parameters
Step5: Below are the values the 3 linear tanks in the model would take when using the calibration coefficients R[5] = 1.0, R[6] = 1000 and R[7] = 1800
Step6: Preparing the channel parameters
Step7: Set up the control points and the velocity type to use
Step8: Running the model
Step9: Hillslope-based model
Step10: Deriving the channel width from the mean discharge (Qmed).
Step11: Testing with horizontal velocities
Step12: Next, the velocities stored per cell are converted to per-hillslope form
Step13: Now we compute how much water each hillslope would carry at those velocities (assuming linear behaviour)
Step14: Horizontal flow parameters in the channel
Step15: Running the hillslope model
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from wmf import wmf
from fwm import utils
import numpy as np
import pylab as pl
DEM=wmf.read_map_raster('raster/dem2.tif',True)
DIR=wmf.read_map_raster('raster/dir.tif',True)
wmf.cu.nodata=-9999.0; wmf.cu.dxp=30.0
DIR[DIR<=0]=wmf.cu.nodata.astype(int)
DIR=wmf.cu.dir_reclass(DIR,wmf.cu.ncols,wmf.cu.nrows)
# Delineate the basin
cuSalgar = wmf.SimuBasin(-75.9808, 5.9647, DEM, DIR, name='Liboriana',
dt = 300)
# Set the geomorphological parameters
cuSalgar.GetGeo_Cell_Basics()
cuSalgar.set_Geomorphology(stream_width=cuSalgar.CellLong)
# Basin evaporation estimated with Turc's formula.
Evp=4.658*np.exp(-0.0002*cuSalgar.CellHeight)/288.0
# Infiltration rate Ks
Ks = 0.00028
# Percolation rate
Kp = Ks/100.0
# Losses are assumed to be zero
Kpp = 0.0
# Load the parameters into the model
Lista=[Evp,Ks,Kp,Kpp]
for pos,var in enumerate(Lista):
cuSalgar.set_PhysicVariables('v_coef',var,pos)
# Surface, subsurface, and groundwater velocities.
v2 = 1.4 * cuSalgar.CellSlope**0.5
v3 = Ks * cuSalgar.CellSlope**0.5
v4 = Kp * cuSalgar.CellSlope**0.5
Lista=[v2,v3,v4]
for pos,var in enumerate(Lista):
cuSalgar.set_PhysicVariables('h_coef',var,pos)
v2.mean()
# Linear outflow coefficients of each tank
calib = [0.1, 100, 1800]
hflux=[]
for c,v in zip(calib,Lista):
hflux.append(1.0 - (wmf.models.hill_long/(c*v*300 + wmf.models.hill_long)) )
for h in hflux:
utils.plot_basin_map(cuSalgar.structure,h,cuSalgar.ncells)
print 'Mean transfer rate [%]: ' + str(h.mean())
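# Illustrative single-cell check of the linear-tank transfer fraction
# 1 - L/(c*v*dt + L), with assumed values L=30 m, c=0.5, v=1 m/s, dt=300 s:
print(1.0 - 30.0 / (0.5 * 1.0 * 300.0 + 30.0))  # ~0.833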
# Compute the variable for the horizontal coefficient
area = cuSalgar.CellAcum * (30**2)
var,w1 = wmf.OCG_param(pend = cuSalgar.CellSlope, area = area)
cuSalgar.set_PhysicVariables('h_coef',var,3)
# Variable for the horizontal exponent
cuSalgar.set_PhysicVariables('h_exp',w1,3)
cuSalgar.set_record()
cuSalgar.set_speed_type()
#wmf.models.storage = Res['Storage']
cuSalgar.set_storage(10,3)
Calibracion = [0.5, 1.0, 1.0, 0.0, 1.0, 1000, 2800, 0.999, 0.005, 1.0]
ruta_lluvia = 'lluvia/lluvia_201501010000_201501312355.bin'
Res = cuSalgar.run_shia(Calibracion,ruta_lluvia,100,7200)
pl.plot(Res['Qsim'][0])
cuSalgar.Plot_basin_fast(Res['Storage'][4],ZeroAsNaN='si')
pl.plot(Res['Balance'])
# Delineate the basin (hillslope version)
cuSalgar = wmf.SimuBasin(-75.9808, 5.9647, DEM, DIR, name='Liboriana',
dt = 300, modelType = 'hills',umbral=500)
cuSalgar.GetGeo_Cell_Basics()
# Compute the mean discharge
cuSalgar.GetQ_Balance(3000,Tipo_ETR=1)
# Compute the channel width from the mean discharge
W = 2.261*cuSalgar.CellQmed**0.46
# Get the geomorphological parameters
cuSalgar.set_Geomorphology(stream_width=W,umbrales=[30, 500])
# Load the parameters into the model
Lista=[Evp,Ks,Kp,Kpp]
for pos,var in enumerate(Lista):
cuSalgar.set_PhysicVariables('v_coef',var,pos)
v2 = 1.4 * cuSalgar.CellSlope**0.5
v3 = Ks * cuSalgar.CellSlope**0.5
v4 = Kp * cuSalgar.CellSlope**0.5
Lista=[v2,v3,v4]
for pos,var in enumerate(Lista):
cuSalgar.set_PhysicVariables('h_coef',var,pos)
ListaHills=[]
for var in Lista:
ListaHills.append(cuSalgar.Transform_Basin2Hills(var))
# Linear outflow coefficients of each tank
calib = [0.5, 500, 108000]
hflux=[]
for c,v in zip(calib,ListaHills):
hflux.append(1.0 - (wmf.models.hill_long/(c*v*300 + wmf.models.hill_long)) )
for h in hflux:
hBasin = cuSalgar.Transform_Hills2Basin(h[0])
cuSalgar.Plot_basin_fast(hBasin)
print 'Mean transfer rate [%]: ' + str(h.mean())
# Compute the variable for the horizontal coefficient
area = cuSalgar.CellAcum * (30**2)
var,w1 = wmf.OCG_param(pend = cuSalgar.CellSlope, area = area)
cuSalgar.set_PhysicVariables('h_coef',var,3,mask=cuSalgar.CellCauce)
# Variable for the horizontal exponent
cuSalgar.set_PhysicVariables('h_exp',w1,3,mask=cuSalgar.CellCauce)
a = cuSalgar.Transform_Hills2Basin(wmf.models.h_coef[3])
cuSalgar.Plot_basin_fast(a)
cuSalgar.Plot_basin_fast(var)
Calibracion = [0.8, 100, 100, 0.0, 20, 90, 50, 0.3, 1, 1]
ruta_lluvia = 'lluvia/lluvia_lad_201505010000_201505312355.bin'
Res = cuSalgar.run_shia(Calibracion,ruta_lluvia,2000,1500)
pl.plot(Res['Qsim'][0])
print Res['Qsim'][0][800:1000].mean()
for i in range(4):
cuSalgar.set_storage(Res['Storage'][i],i)
pos = 3
s = cuSalgar.Transform_Hills2Basin(Res['Storage'][pos])
cuSalgar.Plot_basin_fast(s)
pl.hist(Res['Storage'][pos])
print np.percentile(Res['Qsim'][0],25)
Kp
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: <a id='More_trigonometric_functions'></a>
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 3, figsize=(13,4))
x = np.linspace(0, 2*np.pi, int(30*np.pi)).astype(np.float32)
ax[0].plot(x, np.sin(x), label='sin')
ax[1].plot(x, np.cos(x), label='cos')
ax[2].plot(x, np.tan(x), label='tan')
ax[0].plot(x, np.arcsin(np.sin(x)), label='arcsin')
ax[1].plot(x, np.arccos(np.cos(x)), label='arccos')
ax[2].plot(x, np.arctan(np.tan(x)), label='arctan')
for axes in ax:
axes.grid(True)
axes.legend()
plt.show()
fig, ax = plt.subplots(1, 3, figsize=(13,4))
x = np.linspace(0, 2*np.pi, int(20*np.pi))
ax[0].plot(x, 1/np.cos(x), label='sec')
ax[1].plot(x, 1/np.sin(x), label='cosec')
ax[2].plot(x, np.cos(x)/np.sin(x), label='cot')
for axes in ax:
axes.grid(True)
axes.set_ylim([-20,20])
axes.legend()
plt.show()
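# Quick numerical check (illustrative) of the identity sec^2(x) = 1 + tan^2(x)
x_check = 0.7
print(np.isclose(1/np.cos(x_check)**2, 1 + np.tan(x_check)**2))  # True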
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In <<_TheEuroProblem>> I presented a problem from David MacKay's book, Information Theory, Inference, and Learning Algorithms
Step2: And we used the binomial distribution to compute the probability of the data for each possible value of $x$.
Step3: We computed the posterior distribution in the usual way.
Step4: And here's what it looks like.
Step5: Again, the posterior mean is about 0.56, with a 90% credible interval from 0.51 to 0.61.
Step6: The prior mean was 0.5, and the posterior mean is 0.56, so it seems like the data is evidence that the coin is biased.
Step7: That's the probability of the data, given that the coin is fair.
Step8: Now we can compute the likelihood ratio
Step9: The data are about 6 times more likely if the coin is biased, by this definition, than if it is fair.
Step10: To compute the total probability of the data under this hypothesis, we compute the conditional probability of the data for each value of $x$.
Step11: Then multiply by the prior probabilities and add up the products
Step12: So that's the probability of the data under the "biased uniform" hypothesis.
Step13: The data are about two times more likely if the coin is fair than if it is biased, by this definition of "biased".
Step14: Evidence that "moves the needle" from 50% to 68% is not very strong.
Step15: As we did with the uniform distribution, we can remove 50% as a possible value of $x$ (but it doesn't make much difference if we skip this detail).
Step16: Here's what the triangle prior looks like, compared to the uniform prior.
Step17: Exercise
Step18: Bayesian Hypothesis Testing
Step19: Supposing we are choosing from four slot machines, I'll make four copies of the prior, one for each machine.
Step20: This function displays four distributions in a grid.
Step21: Here's what the prior distributions look like for the four machines.
Step23: The Update
Step24: This function updates the prior distribution in place.
Step25: Here's what the posterior looks like.
Step26: Multiple Bandits
Step28: Remember that as a player, we don't know these probabilities.
Step29: counter is a Counter, which is a kind of dictionary we'll use to keep track of how many times each machine is played.
Step30: Each time through the inner loop, we play one machine and update our beliefs.
Step32: Here are the actual probabilities, posterior means, and 90% credible intervals.
Step33: We expect the credible intervals to contain the actual probabilities most of the time.
Step34: The result has 4 rows and 1000 columns. We can use argmax to find the index of the largest value in each column
Step35: The Pmf of these indices is the fraction of times each machine yielded the highest values.
Step36: These fractions approximate the probability of superiority for each machine. So we could choose the next machine by choosing a value from this Pmf.
Step38: But that's a lot of work to choose a single value, and it's not really necessary, because there's a shortcut.
Step39: This function chooses one value from the posterior distribution of each machine and then uses argmax to find the index of the machine that yielded the highest value.
Step41: The Strategy
Step42: To test it out, let's start again with a fresh set of beliefs and an empty Counter.
Step43: If we run the bandit algorithm 100 times, we can see how beliefs get updated
Step44: The following table summarizes the results.
Step46: The credible intervals usually contain the actual probabilities of winning.
Step48: If things go according to plan, the machines with higher probabilities should get played more often.
Step49: I chose a to make the range of scores comparable to the SAT, which reports scores from 200 to 800.
Step51: Someone with ability=900 is nearly certain to get the right answer.
Step52: play uses prob_correct to compute the probability of a correct answer and np.random.random to generate a random value between 0 and 1. The return value is True for a correct response and False otherwise.
Step53: Suppose this person takes a test with 51 questions, all with the same difficulty, 500.
Step54: We expect them to get about 80% of the questions right.
Step55: And here's what it looks like.
Step57: The Update
Step58: data is a tuple that contains the difficulty of a question and the outcome
Step59: Here's what the posterior distribution looks like.
Step60: The posterior mean is pretty close to the test-taker's actual ability, which is 600.
Step62: If we run this simulation again, we'll get different results.
Step64: As parameters, choose takes i, which is the index of the question, and belief, which is a Pmf representing the posterior distribution of ability, based on responses to previous questions.
Step65: The return values are a Pmf representing the posterior distribution of ability and a DataFrame containing the difficulty of the questions and the outcomes.
Step66: We can use the trace to see how many responses were correct.
Step67: And here's what the posterior looks like.
Step68: Again, the posterior distribution represents a pretty good estimate of the test-taker's actual ability.
Step69: For an exam where all questions have the same difficulty, the precision of the estimate depends strongly on the ability of the test-taker. To show that, I'll loop through a range of abilities and simulate a test using the version of choice that always returns difficulty=500.
Step70: The following plot shows the standard deviation of the posterior distribution for one simulation at each level of ability.
Step72: The test is most precise for people with ability between 500 and 600, less precise for people at the high end of the range, and even worse for people at the low end.
Step73: Here are samples of scores for people with several levels of ability.
Step74: Here's what the distributions of scores look like.
Step75: On average, people with higher ability get higher scores, but anyone can have a bad day, or a good day, so there is some overlap between the distributions.
Step76: Between people with abilities 600 and 700, it is less certain.
Step77: And between people with abilities 700 and 800, it is not certain at all.
Step78: But remember that these results are based on a test where all questions are equally difficult.
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
import numpy as np
from empiricaldist import Pmf
xs = np.linspace(0, 1, 101)
uniform = Pmf(1, xs)
from scipy.stats import binom
k, n = 140, 250
likelihood = binom.pmf(k, n, xs)
posterior = uniform * likelihood
posterior.normalize()
from utils import decorate
posterior.plot(label='140 heads out of 250')
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title='Posterior distribution of x')
print(posterior.mean(),
posterior.credible_interval(0.9))
k = 140
n = 250
like_fair = binom.pmf(k, n, p=0.5)
like_fair
like_biased = binom.pmf(k, n, p=0.56)
like_biased
K = like_biased / like_fair
K
biased_uniform = uniform.copy()
biased_uniform[0.5] = 0
biased_uniform.normalize()
xs = biased_uniform.qs
likelihood = binom.pmf(k, n, xs)
like_uniform = np.sum(biased_uniform * likelihood)
like_uniform
K = like_fair / like_uniform
K
prior_odds = 1
posterior_odds = prior_odds * K
posterior_odds
def prob(o):
return o / (o+1)
posterior_probability = prob(posterior_odds)
posterior_probability
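# For example (illustrative): a Bayes factor of 6 against even prior odds
# corresponds to a posterior probability of 6/7:
prob(6)  # 0.857...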
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
triangle = Pmf(a, xs, name='triangle')
triangle.normalize()
biased_triangle = triangle.copy()
biased_triangle[0.5] = 0
biased_triangle.normalize()
biased_uniform.plot(label='uniform prior')
biased_triangle.plot(label='triangle prior')
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title='Uniform and triangle prior distributions')
# Solution goes here
# Solution goes here
# Solution goes here
xs = np.linspace(0, 1, 101)
prior = Pmf(1, xs)
prior.normalize()
beliefs = [prior.copy() for i in range(4)]
import matplotlib.pyplot as plt
options = dict(xticklabels='invisible', yticklabels='invisible')
def plot(beliefs, **options):
for i, pmf in enumerate(beliefs):
plt.subplot(2, 2, i+1)
pmf.plot(label='Machine %s' % i)
decorate(yticklabels=[])
if i in [0, 2]:
decorate(ylabel='PDF')
if i in [2, 3]:
decorate(xlabel='Probability of winning')
plt.tight_layout()
plot(beliefs)
likelihood = {
'W': xs,
'L': 1 - xs
}
def update(pmf, data):
"""Update the probability of winning."""
pmf *= likelihood[data]
pmf.normalize()
np.random.seed(17)
bandit = prior.copy()
for outcome in 'WLLLLLLLLL':
update(bandit, outcome)
bandit.plot()
decorate(xlabel='Probability of winning',
ylabel='PDF',
title='Posterior distribution, nine losses, one win')
actual_probs = [0.10, 0.20, 0.30, 0.40]
from collections import Counter
# count how many times we've played each machine
counter = Counter()
def play(i):
"""Play machine i.
i: index of the machine to play
returns: string 'W' or 'L'
"""
counter[i] += 1
p = actual_probs[i]
if np.random.random() < p:
return 'W'
else:
return 'L'
for i in range(4):
for _ in range(10):
outcome = play(i)
update(beliefs[i], outcome)
plot(beliefs)
import pandas as pd
def summarize_beliefs(beliefs):
"""Compute means and credible intervals.
beliefs: sequence of Pmf
returns: DataFrame
"""
columns = ['Actual P(win)',
'Posterior mean',
'Credible interval']
df = pd.DataFrame(columns=columns)
for i, b in enumerate(beliefs):
mean = np.round(b.mean(), 3)
ci = b.credible_interval(0.9)
ci = np.round(ci, 3)
df.loc[i] = actual_probs[i], mean, ci
return df
summarize_beliefs(beliefs)
samples = np.array([b.choice(1000)
for b in beliefs])
samples.shape
indices = np.argmax(samples, axis=0)
indices.shape
pmf = Pmf.from_seq(indices)
pmf
pmf.choice()
def choose(beliefs):
"""Use Thompson sampling to choose a machine.
Draws a single sample from each distribution.
returns: index of the machine that yielded the highest value
"""
ps = [b.choice() for b in beliefs]
return np.argmax(ps)
choose(beliefs)
def choose_play_update(beliefs):
"""Choose a machine, play it, and update beliefs."""
# choose a machine
machine = choose(beliefs)
# play it
outcome = play(machine)
# update beliefs
update(beliefs[machine], outcome)
beliefs = [prior.copy() for i in range(4)]
counter = Counter()
num_plays = 100
for i in range(num_plays):
choose_play_update(beliefs)
plot(beliefs)
summarize_beliefs(beliefs)
def summarize_counter(counter):
"""Report the number of times each machine was played.
counter: Collections.Counter
returns: DataFrame
"""
index = range(4)
columns = ['Actual P(win)', 'Times played']
df = pd.DataFrame(index=index, columns=columns)
for i, count in counter.items():
df.loc[i] = actual_probs[i], count
return df
summarize_counter(counter)
def prob_correct(ability, difficulty):
"""Probability of a correct response."""
a = 100
c = 0.25
x = (ability - difficulty) / a
p = c + (1-c) / (1 + np.exp(-x))
return p
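# Sanity check (illustrative): when ability equals difficulty the logistic
# term is 1/2, so the result should be c + (1-c)/2 = 0.625
prob_correct(500, 500)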
abilities = np.linspace(100, 900)
diff = 500
ps = prob_correct(abilities, diff)
plt.plot(abilities, ps)
decorate(xlabel='Ability',
ylabel='Probability correct',
title='Probability of correct answer, difficulty=500',
ylim=[0, 1.05])
def play(ability, difficulty):
"""Simulate a test-taker answering a question."""
p = prob_correct(ability, difficulty)
return np.random.random() < p
prob_correct(600, 500)
np.random.seed(18)
num_questions = 51
outcomes = [play(600, 500) for _ in range(num_questions)]
np.mean(outcomes)
from scipy.stats import norm
mean = 500
std = 300
qs = np.linspace(0, 1000)
ps = norm(mean, std).pdf(qs)
prior = Pmf(ps, qs)
prior.normalize()
prior.plot(label='std=300', color='C5')
decorate(xlabel='Ability',
ylabel='PDF',
title='Prior distribution of ability',
ylim=[0, 0.032])
def update_ability(pmf, data):
"""Update the distribution of ability."""
difficulty, outcome = data
abilities = pmf.qs
ps = prob_correct(abilities, difficulty)
if outcome:
pmf *= ps
else:
pmf *= 1 - ps
pmf.normalize()
actual_600 = prior.copy()
for outcome in outcomes:
data = (500, outcome)
update_ability(actual_600, data)
actual_600.plot(color='C4')
decorate(xlabel='Ability',
ylabel='PDF',
title='Posterior distribution of ability')
actual_600.mean()
def choose(i, belief):
"""Choose the difficulty of the next question."""
return 500
def simulate_test(actual_ability):
"""Simulate a person taking a test."""
belief = prior.copy()
trace = pd.DataFrame(columns=['difficulty', 'outcome'])
for i in range(num_questions):
difficulty = choose(i, belief)
outcome = play(actual_ability, difficulty)
data = (difficulty, outcome)
update_ability(belief, data)
trace.loc[i] = difficulty, outcome
return belief, trace
belief, trace = simulate_test(600)
trace['outcome'].sum()
belief.plot(color='C4', label='ability=600')
decorate(xlabel='Ability',
ylabel='PDF',
title='Posterior distribution of ability')
belief.mean(), belief.std()
actual_abilities = np.linspace(200, 800)
results = pd.DataFrame(columns=['ability', 'posterior_std'])
series = pd.Series(index=actual_abilities, dtype=float, name='std')
for actual_ability in actual_abilities:
belief, trace = simulate_test(actual_ability)
series[actual_ability] = belief.std()
from utils import plot_series_lowess
plot_series_lowess(series, 'C1')
decorate(xlabel='Actual ability',
ylabel='Standard deviation of posterior')
def sample_posterior(actual_ability, iters):
"""Simulate multiple tests and compute posterior means.
actual_ability: number
iters: number of simulated tests
returns: array of scores
"""
scores = []
for i in range(iters):
belief, trace = simulate_test(actual_ability)
score = belief.mean()
scores.append(score)
return np.array(scores)
sample_500 = sample_posterior(500, iters=100)
sample_600 = sample_posterior(600, iters=100)
sample_700 = sample_posterior(700, iters=100)
sample_800 = sample_posterior(800, iters=100)
from empiricaldist import Cdf
cdf_500 = Cdf.from_seq(sample_500)
cdf_600 = Cdf.from_seq(sample_600)
cdf_700 = Cdf.from_seq(sample_700)
cdf_800 = Cdf.from_seq(sample_800)
cdf_500.plot(label='ability=500', color='C1',
linestyle='dashed')
cdf_600.plot(label='ability=600', color='C3')
cdf_700.plot(label='ability=700', color='C2',
linestyle='dashed')
cdf_800.plot(label='ability=800', color='C0')
decorate(xlabel='Test score',
ylabel='CDF',
title='Sampling distribution of test scores')
np.mean(sample_600 > sample_500)
np.mean(sample_700 > sample_600)
np.mean(sample_800 > sample_700)
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1b
Step2: Problem 2) $k$-means clustering
Step3: Problem 2b
Step4: Problem 2c
Step5: Problem 2d
Step6: That doesn't look right at all!
Step7: write your answer here
Step8: write your answer here
Step9: Problem 3b
Step10: write your answer here
Step12: Problem 4b
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
iris = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
# complete
from sklearn.cluster import # complete
Kcluster = # complete
Kcluster.fit( # complete
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
# complete
Kcluster = # complete
# complete
fig, ax = plt.subplots()
ax.scatter( # complete
rs = 14
Kcluster1 = KMeans( # complete
# complete
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
# complete
# complete
# complete
# complete
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit( # complete
Kcluster = # complete
# complete
fig, ax = plt.subplots()
ax.scatter( # complete
from sklearn.cluster import DBSCAN
dbs = # complete
dbs.fit( # complete
fig, ax = plt.subplots()
ax.scatter( # complete
dbs = DBSCAN( # complete
dbs.fit(# complete
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random Forest seems to be giving the best results, so we'll stick with that for now
Step2: Maybe I should only use the AOD values since the sensor values are sporadic and don't add as much to the data anyway. Also, I could remove the rows where the sensor readings are zero. I could look through the Y rows and make a mask to apply to both X and Y rows.
Step3: Let's try Ridge again.
Step4: Not too good. Or at least not an improvement over the earlier result.
Step5: So, from the plot above, there are values above 0, even though my Y_train and Y_test are all below zero.
Step6: Try random forest with new X_hist
Step7: The score has improved (0.2), but most remarkably, all the predicted values are now below 0, as they should be.
Step8: Not much better than the random forest regressor, but it does offer multicore support, which speeds things up significantly.
Step9: Wow, even better, but is it good enough? If I include ratios between channels, spatial and other information will be implicitly collected.
Step10: Try it all again with just X, not X_reduced?
Step11: After pickling the sensor to power model, use it to predict power from satellite data!
Step12: Seems like dimensions are working out. Probably will need to pickle the sat to sensor model too and just feed in data start to finish and see how it does.
Step13: Nice, now we have predictions for every datetime! Now we have to check this.
Step14: Very nice, call it a day! That's a pretty graph!
Step15: I shut down the nb, so restart!
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from data_helper_functions import *
from IPython.display import display
pd.options.display.max_columns = 999
%matplotlib inline
with np.load('data/X.npz') as data: #old X, don't use, start at "Now with all channels..."
X = data['X']
with np.load('data/Y.npz') as data: #old Y, don't use
Y = data['Y']
print X.shape
print Y.shape
from sklearn.ensemble import RandomForestRegressor
from sklearn.cross_validation import train_test_split
rfr = RandomForestRegressor(oob_score=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 14)
rfr.fit(X_train,Y_train)
rfr.score(X_test,Y_test)
Y_pred = rfr.predict(X_test)
from random import randint
val = randint(0,508)
print Y_pred[val]
print Y_test[val]
mask = []
for i,row in enumerate(Y):
if row[0] == 0:
mask.append(False)
else:
mask.append(True)
mask = np.array(mask)
X_reduced = X[mask]
Y_reduced = Y[mask][:,-7::]
print X_reduced.shape
print Y_reduced.shape
rfr = RandomForestRegressor(oob_score=True)
X_train, X_test, Y_train, Y_test = train_test_split(X_reduced, Y_reduced, test_size = 0.3, random_state = 14)
rfr.fit(X_train,Y_train)
print rfr.score(X_test,Y_test)
print rfr.oob_score_
Y_pred = rfr.predict(X_test)
from random import randint
val = randint(0,Y_pred.shape[0])
print Y_pred[val]
print Y_test[val]
from sklearn.linear_model import Ridge
ridge = Ridge(solver = 'auto')
X_train, X_test, Y_train, Y_test = train_test_split(X_reduced, Y_reduced, test_size = 0.5, random_state = 14)
ridge.fit(X_train,Y_train)
ridge.score(X_test,Y_test)
Y_pred = ridge.predict(X_test)
from random import randint
val = randint(0,Y_pred.shape[0]-1)
print Y_pred[val]
print Y_test[val]
plt.plot(Y_pred,'go');
val = randint(0,2301)
_ = plt.hist(X_reduced[:,0:1972][val], alpha = 0.5, normed=True, bins=25, label='visible',range=(0,25000)) #visible light histogram
_ = plt.hist(X_reduced[:,1972:2476][val], alpha = 0.5, normed=True, bins=25, label='IR',range=(0,25000))
plt.legend(loc='upper right');
X_hist = []
for i in xrange(X_reduced.shape[0]):
hist1, _ = np.histogram(X_reduced[:,0:1972][i], density=True, bins=25, range=(0,25000)) #vis
hist2, _ = np.histogram(X_reduced[:,1972:2476][i], density=True, bins=25, range=(0,25000)) #IR
X_hist.append(np.hstack((hist1,hist2)))
X_hist = np.array(X_hist)
rfr = RandomForestRegressor(oob_score=True)
X_train, X_test, Y_train, Y_test = train_test_split(X_hist, Y_reduced, test_size = 0.3, random_state = 14)
rfr.fit(X_train,Y_train)
print rfr.score(X_test,Y_test)
print rfr.oob_score_
from random import randint
val = randint(0,Y_test.shape[0])
print Y_pred[val]
print Y_test[val]
plt.plot(Y_pred,'go');
X_hist = []
for i in xrange(X_reduced.shape[0]):
hist1, _ = np.histogram(X_reduced[:,0:1972][i], density=True, bins=30, range=(0,25000)) #vis
hist2, _ = np.histogram(X_reduced[:,1972:2476][i], density=True, bins=30, range=(0,25000)) #IR
X_hist.append(np.hstack((hist1,hist2)))
X_hist = np.array(X_hist)
from sklearn.ensemble import ExtraTreesRegressor
etr = ExtraTreesRegressor(oob_score=True, bootstrap=True, n_jobs=-1, n_estimators=100) #njobs uses all cores!
X_train, X_test, Y_train, Y_test = train_test_split(X_hist, Y_reduced, test_size = 0.3, random_state = 14)
etr.fit(X_train,Y_train)
print etr.score(X_test,Y_test)
print etr.oob_score_
from random import randint
val = randint(0,Y_test.shape[0])
print Y_pred[val]
print Y_test[val]
plt.plot(Y_pred,'go');
with np.load('data/X_all_channels.npz') as data:
X = data['X']
with np.load('data/Y_all_channels.npz') as data:
Y = data['Y']
print X.shape
print Y.shape
mask = []
for i,row in enumerate(Y):
if row[0] == 0:
mask.append(False)
else:
mask.append(True)
mask = np.array(mask)
X_reduced = X[mask]
Y_reduced = Y[mask][:,-7::]
print X_reduced.shape
print Y_reduced.shape
val = randint(0,2301)
_ = plt.hist(X_reduced[:,0:1972][val], alpha = 0.2, normed=True, bins=30, label='CH1',range=(0,25000)) #visible light histogram
_ = plt.hist(X_reduced[:,1972:2476][val], alpha = 0.2, normed=True, bins=30, label='CH2',range=(0,25000))
_ = plt.hist(X_reduced[:,2476:2980][val], alpha = 0.2, normed=True, bins=30, label='CH3',range=(0,25000))
_ = plt.hist(X_reduced[:,2980:3484][val], alpha = 0.2, normed=True, bins=30, label='CH4',range=(0,25000))
_ = plt.hist(X_reduced[:,3484:3988][val], alpha = 0.2, normed=True, bins=30, label='CH6',range=(0,25000))
plt.legend(loc='upper right');
X_hist = []
bins = 20
for i in xrange(X_reduced.shape[0]):
hist1, _ = np.histogram(X_reduced[:,0:1972][i], density=True, bins=bins, range=(0,25000))
hist2, _ = np.histogram(X_reduced[:,1972:2476][i], density=True, bins=bins, range=(0,25000))
hist3, _ = np.histogram(X_reduced[:,2476:2980][i], density=True, bins=bins, range=(0,25000))
hist4, _ = np.histogram(X_reduced[:,2980:3484][i], density=True, bins=bins, range=(0,25000))
hist5, _ = np.histogram(X_reduced[:,3484:3988][i], density=True, bins=bins, range=(0,25000))
X_hist.append(np.hstack((hist1,hist2,hist3,hist4,hist5)))
X_hist = np.array(X_hist)
from sklearn.ensemble import ExtraTreesRegressor
etr = ExtraTreesRegressor(
oob_score=True, bootstrap=True, n_jobs=-1, n_estimators=1000
) #nj_obs uses all cores!
X_train, X_test, Y_train, Y_test = train_test_split(X_hist, Y_reduced, test_size = 0.25, random_state = 12)
etr.fit(X_train,Y_train)
print etr.score(X_test,Y_test)
print etr.oob_score_
from random import randint
val = randint(0,Y_test.shape[0])
print Y_pred[val]
print Y_test[val]
plt.plot(Y_pred,'go');
from scipy.ndimage import zoom
from __future__ import division
X_reduced_ratio_1_2 = []
for i in xrange(X_reduced.shape[0]):
CH1 = zoom(X_reduced[:,0:1972][i].reshape((29,68)),zoom=(0.48, 0.53), order=5)
CH2 = X_reduced[:,1972:2476][i].reshape((14,36))
X_reduced_ratio_1_2.append(25000* CH2 / (CH1 + CH2) )
X_reduced_ratio_1_2 = np.array(X_reduced_ratio_1_2)
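# Shape check (illustrative): zooming the 29x68 CH1 grid by (0.48, 0.53)
# lands on the 14x36 grid shared by the other channels
print zoom(np.zeros((29, 68)), zoom=(0.48, 0.53), order=5).shape # (14, 36)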
X_reduced_ratio_1_6 = []
for i in xrange(X_reduced.shape[0]):
CH1 = zoom(X_reduced[:,0:1972][i].reshape((29,68)),zoom=(0.48, 0.53), order=5)
CH6 = X_reduced[:,3484:3988][i].reshape((14,36))
X_reduced_ratio_1_6.append(25000* CH6 / (CH1 + CH6) )
X_reduced_ratio_1_6 = np.array(X_reduced_ratio_1_6)
X_reduced_ratio_2_6 = []
for i in xrange(X_reduced.shape[0]):
CH2 = X_reduced[:,1972:2476][i].reshape((14,36))
CH6 = X_reduced[:,3484:3988][i].reshape((14,36))
X_reduced_ratio_2_6.append(25000* CH6 / (CH2 + CH6) )
X_reduced_ratio_2_6 = np.array(X_reduced_ratio_2_6)
val = randint(0,2301)
_ = plt.hist(X_reduced[:,0:1972][val], alpha = 0.2, normed=True, bins=30, label='CH1',range=(0,25000)) #visible light histogram
_ = plt.hist(X_reduced[:,1972:2476][val], alpha = 0.2, normed=True, bins=30, label='CH2',range=(0,25000))
_ = plt.hist(X_reduced[:,2476:2980][val], alpha = 0.2, normed=True, bins=30, label='CH3',range=(0,25000))
_ = plt.hist(X_reduced[:,2980:3484][val], alpha = 0.2, normed=True, bins=30, label='CH4',range=(0,25000))
_ = plt.hist(X_reduced[:,3484:3988][val], alpha = 0.2, normed=True, bins=30, label='CH6',range=(0,25000))
_ = plt.hist(np.ravel(X_reduced_ratio_1_2[val]), alpha = 0.2, normed=True, bins=30, label='CH2/CH1',range=(0,25000))
_ = plt.hist(np.ravel(X_reduced_ratio_1_6[val]), alpha = 0.2, normed=True, bins=30, label='CH6/CH1',range=(0,25000))
_ = plt.hist(np.ravel(X_reduced_ratio_2_6[val]), alpha = 0.2, normed=True, bins=30, label='CH6/CH2',range=(0,25000))
plt.legend(loc='upper right');
X_hist = []
bins = 30
for i in xrange(X_reduced.shape[0]):
hist1, _ = np.histogram(X_reduced[:,0:1972][i], density=True, bins=bins, range=(0,25000))
hist2, _ = np.histogram(X_reduced[:,1972:2476][i], density=True, bins=bins, range=(0,25000))
hist3, _ = np.histogram(X_reduced[:,2476:2980][i], density=True, bins=bins, range=(0,25000))
hist4, _ = np.histogram(X_reduced[:,2980:3484][i], density=True, bins=bins, range=(0,25000))
hist5, _ = np.histogram(X_reduced[:,3484:3988][i], density=True, bins=bins, range=(0,25000))
hist6, _ = np.histogram(np.ravel(X_reduced_ratio_1_2[i]), density=True, bins=bins, range=(0,25000))
hist7, _ = np.histogram(np.ravel(X_reduced_ratio_1_6[i]), density=True, bins=bins, range=(0,25000))
hist8, _ = np.histogram(np.ravel(X_reduced_ratio_2_6[i]), density=True, bins=bins, range=(0,25000))
X_hist.append(np.hstack((hist1,hist2,hist3,hist4,hist5,hist6,hist7,hist8)))
X_hist = np.array(X_hist)
from sklearn.ensemble import ExtraTreesRegressor
etr = ExtraTreesRegressor(oob_score=True, bootstrap=True,
n_jobs=-1, n_estimators=1000) #nj_obs uses all cores!
X_train, X_test, Y_train, Y_test = train_test_split(X_hist, Y_reduced, test_size = 0.2, random_state = 12)
etr.fit(X_train,Y_train)
print etr.score(X_test,Y_test)
print etr.oob_score_
from random import randint
val = randint(0,Y_test.shape[0])
print Y_pred[val]
print Y_test[val]
from scipy.ndimage import zoom
from __future__ import division
from random import randint
X_ratio_1_2 = []
for i in xrange(X.shape[0]):
CH1 = zoom(X[:,0:1972][i].reshape((29,68)),zoom=(0.48, 0.53), order=5)
CH2 = X[:,1972:2476][i].reshape((14,36))
X_ratio_1_2.append(25000* (CH2) / (CH1 + CH2+1.0) )
X_ratio_1_2 = np.array(X_ratio_1_2)
X_ratio_1_6 = []
for i in xrange(X.shape[0]):
CH1 = zoom(X[:,0:1972][i].reshape((29,68)),zoom=(0.48, 0.53), order=5)
CH6 = X[:,3484:3988][i].reshape((14,36))
X_ratio_1_6.append(25000* CH6 / (CH1 + CH6 + 0.1) )
X_ratio_1_6 = np.array(X_ratio_1_6)
X_ratio_2_6 = []
for i in xrange(X.shape[0]):
CH2 = X[:,1972:2476][i].reshape((14,36))
CH6 = X[:,3484:3988][i].reshape((14,36))
X_ratio_2_6.append(25000* CH6 / (CH2 + CH6 + 0.1) )
X_ratio_2_6 = np.array(X_ratio_2_6)
val = randint(0,2301)
_ = plt.hist(X[:,0:1972][val], alpha = 0.2, normed=True, bins=30, label='CH1',range=(0,25000)) #visible light histogram
_ = plt.hist(X[:,1972:2476][val], alpha = 0.2, normed=True, bins=30, label='CH2',range=(0,25000))
_ = plt.hist(X[:,2476:2980][val], alpha = 0.2, normed=True, bins=30, label='CH3',range=(0,25000))
_ = plt.hist(X[:,2980:3484][val], alpha = 0.2, normed=True, bins=30, label='CH4',range=(0,25000))
_ = plt.hist(X[:,3484:3988][val], alpha = 0.2, normed=True, bins=30, label='CH6',range=(0,25000))
_ = plt.hist(np.ravel(X_ratio_1_2[val]), alpha = 0.2, normed=True, bins=30, label='CH2/CH1',range=(0,25000))
_ = plt.hist(np.ravel(X_ratio_1_6[val]), alpha = 0.2, normed=True, bins=30, label='CH6/CH1',range=(0,25000))
_ = plt.hist(np.ravel(X_ratio_2_6[val]), alpha = 0.2, normed=True, bins=30, label='CH6/CH2',range=(0,25000))
plt.legend(loc='upper right');
import pandas as pd
X_hist = []
bins = 25
for i in xrange(X.shape[0]):
# fill any NaN ratio values with that sample's mean ratio
df1 = pd.DataFrame(np.ravel(X_ratio_1_2[i])); myval1 = df1.fillna(df1.mean()).values.flatten()
df2 = pd.DataFrame(np.ravel(X_ratio_1_6[i])); myval2 = df2.fillna(df2.mean()).values.flatten()
df3 = pd.DataFrame(np.ravel(X_ratio_2_6[i])); myval3 = df3.fillna(df3.mean()).values.flatten()
hist1, _ = np.histogram(X[:,0:1972][i], density=True, bins=bins, range=(0,25000))
hist2, _ = np.histogram(X[:,1972:2476][i], density=True, bins=bins, range=(0,25000))
hist3, _ = np.histogram(X[:,2476:2980][i], density=True, bins=bins, range=(0,25000))
hist4, _ = np.histogram(X[:,2980:3484][i], density=True, bins=bins, range=(0,25000))
hist5, _ = np.histogram(X[:,3484:3988][i], density=True, bins=bins, range=(0,25000))
hist6, _ = np.histogram( myval1 , density=True, bins=bins, range=(0,25000) )
hist7, _ = np.histogram( myval2 , density=True, bins=bins, range=(0,25000))
hist8, _ = np.histogram( myval3, density=True, bins=bins, range=(0,25000))
X_hist.append(np.hstack((hist1,hist2,hist3,hist4,hist5,hist6,hist7,hist8)))
X_hist = np.array(X_hist)
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.cross_validation import train_test_split
etr = ExtraTreesRegressor(oob_score=True, bootstrap=True,
n_jobs=-1, n_estimators=500) #nj_obs uses all cores!
X_train, X_test, Y_train, Y_test = train_test_split(X_hist, Y, test_size = 0.2, random_state = 12)
etr.fit(X_train,Y_train) #pickle it!
from sklearn.externals import joblib
joblib.dump(etr, 'webapp/solarApp/models/sat-to-sensor-model/sat-to-sensor-model.pkl')
sat_to_sensor_model = joblib.load('webapp/solarApp/models/sat-to-sensor-model/sat-to-sensor-model.pkl')
sat_to_sensor_model.predict(X_hist).shape
X_hist.shape
print etr.score(X_test,Y_test)
print etr.oob_score_
Y_pred = etr.predict(X_test)
from random import randint
val = randint(0,Y_test.shape[0])
print Y_pred[val]
print Y_test[val]
from sklearn.externals import joblib
etr2 = joblib.load('data/sensor-to-power-model/sensor-to-power-model.pkl')
etr2.predict(Y).shape #power predictions
with np.load('data/y.npz') as data:
y = data['y']
y_pred2 = etr2.predict(Y_pred) #the predicted
y_pred2.shape
y.shape
with np.load('data/good_times.npz') as data:
good_times = data['good_times']
print good_times.shape
print etr2.predict(Y).shape
from datetime import datetime, timedelta, time
pvoutput_filefolder = 'data/pvoutput/pvoutput6months/'
datetime_index = 1747
print 'Predicted power for ' + str(good_times[datetime_index]) + \
' is ' + str(etr2.predict(Y[datetime_index].reshape(1, -1))[0]) + ' W'
#pvoutput data
desired_datetime = good_times[datetime_index]
desired_date = (desired_datetime - timedelta(hours=6)).date() #make sure correct date
desired_date = datetime.combine(desired_date, time.min) #get into datetime format
pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)
df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)
try:
print "True power: " + df_pvoutput[df_pvoutput.index == desired_datetime].values[0][0].astype(str)
except:
print "0"
from datetime import datetime, timedelta, time
pvoutput_filefolder = 'data/pvoutput/pvoutput6months/'
predicted_powers = []
true_powers = []
for datetime_index in range(len(good_times)):
predicted_powers.append(str(etr2.predict(Y[datetime_index].reshape(1, -1))[0]))
#pvoutput data
desired_datetime = good_times[datetime_index]
desired_date = (desired_datetime - timedelta(hours=6)).date() #make sure correct date
desired_date = datetime.combine(desired_date, time.min) #get into datetime format
pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)
df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)
try:
true_powers.append(df_pvoutput[df_pvoutput.index == desired_datetime].values[0][0].astype(str))
except:
true_powers.append(0)
true_powers = np.array(true_powers).astype(float)
predicted_powers = np.array(predicted_powers).astype(float)
error_add = []
for datetime_index in range(len(good_times)):
error_add.append(
np.abs(true_powers[datetime_index]-predicted_powers[datetime_index]) \
/ true_powers[datetime_index]
)
error_sum = 0
for error in error_add:
if (np.isnan(error) or np.isinf(error)):
pass
else:
error_sum += error
error_sum/len(good_times)
x = np.linspace(0,13000,100)
y = 1.0 * x
plt.figure(figsize=(8,8))
plt.plot(true_powers,predicted_powers,'go', alpha=0.25)
plt.plot(x,y,'r',linewidth = 5, alpha = 0.5)
plt.xlabel('True Power (W)', fontsize = 16)
plt.xticks(size=16)
plt.ylabel('Predicted Power (W)', fontsize = 16)
plt.yticks(size=16)
np.savez_compressed('data/true_powers.npz',true_powers=true_powers) #save
np.savez_compressed('data/predicted_powers.npz',predicted_powers=predicted_powers) #save
import numpy as np
with np.load('data/true_powers.npz') as data:
true_powers = data['true_powers']
with np.load('data/predicted_powers.npz') as data:
predicted_powers = data['predicted_powers']
import matplotlib.pyplot as plt
%matplotlib inline
import statsmodels.api as sm
# Fit and summarize OLS model
mod = sm.OLS(predicted_powers,true_powers)
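# note: sm.OLS does not add a constant by default, so this fits a line through the origin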
res = mod.fit()
print res.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples of streaming and non-streaming inference with TF/TFLite
Step4: Load wav file
Step5: Prepare batched model
Step6: Run inference with TF
Step7: TF Run streaming inference with internal state
Step8: TF Run streaming inference with external state
Step9: Run inference with TFlite
Step10: Run streaming inference with TFLite
Step11: Run evaluation on all testing data
|
<ASSISTANT_TASK:>
Python Code:
!git clone https://github.com/google-research/google-research.git
import sys
import os
import tarfile
import urllib
import zipfile
sys.path.append('./google-research')
# TF streaming
from kws_streaming.models import models
from kws_streaming.models import utils
from kws_streaming.models import model_utils
from kws_streaming.layers.modes import Modes
import tensorflow as tf
import numpy as np
import tensorflow.compat.v1 as tf1
import logging
from kws_streaming.models import model_flags
from kws_streaming.models import model_params
from kws_streaming.train import inference
from kws_streaming.train import test
from kws_streaming.data import input_data
from kws_streaming.data import input_data_utils as du
tf1.disable_eager_execution()
config = tf1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf1.Session(config=config)
# general imports
import matplotlib.pyplot as plt
import os
import json
import numpy as np
import scipy as scipy
import scipy.io.wavfile as wav
import scipy.signal
tf.__version__
tf1.reset_default_graph()
sess = tf1.Session()
tf1.keras.backend.set_session(sess)
tf1.keras.backend.set_learning_phase(0)
def waveread_as_pcm16(filename):
Read in audio data from a wav file. Return d, sr.
  # Read in the wav file as raw PCM16 samples.
  samplerate, wave_data = wav.read(filename)
  return wave_data, samplerate
def wavread_as_float(filename, target_sample_rate=16000):
Read in audio data from a wav file. Return d, sr.
wave_data, samplerate = waveread_as_pcm16(filename)
desired_length = int(
round(float(len(wave_data)) / samplerate * target_sample_rate))
wave_data = scipy.signal.resample(wave_data, desired_length)
# Normalize short ints to floats in range [-1..1).
data = np.array(wave_data, np.float32) / 32768.0
return data, target_sample_rate
# set PATH to data sets (for example to speech commands V2):
# it can be downloaded from
# https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz
# if you run 00_check-data.ipynb then data2 should be located in the current folder
current_dir = os.getcwd()
DATA_PATH = os.path.join(current_dir, "data2/")
# Set path to wav file for testing.
wav_file = os.path.join(DATA_PATH, "left/012187a4_nohash_0.wav")
# read audio file
wav_data, samplerate = wavread_as_float(wav_file)
assert samplerate == 16000
plt.plot(wav_data)
# This notebook is configured to work with 'ds_tc_resnet' and 'svdf'.
MODEL_NAME = 'ds_tc_resnet'
# MODEL_NAME = 'svdf'
MODELS_PATH = os.path.join(current_dir, "models")
MODEL_PATH = os.path.join(MODELS_PATH, MODEL_NAME + "/")
MODEL_PATH
train_dir = os.path.join(MODELS_PATH, MODEL_NAME)
# below is another way of reading flags - through json
with tf.compat.v1.gfile.Open(os.path.join(train_dir, 'flags.json'), 'r') as fd:
flags_json = json.load(fd)
class DictStruct(object):
def __init__(self, **entries):
self.__dict__.update(entries)
flags = DictStruct(**flags_json)
flags.data_dir = DATA_PATH
# get total stride of the model
total_stride = 1
if MODEL_NAME == 'ds_tc_resnet':
# it can be automated by scanning layers of the model, but for now just use parameters of specific model
pools = model_utils.parse(flags.ds_pool)
strides = model_utils.parse(flags.ds_stride)
time_stride = [1]
for pool in pools:
if pool > 1:
time_stride.append(pool)
for stride in strides:
if stride > 1:
time_stride.append(stride)
total_stride = np.prod(time_stride)
# overide input data shape for streaming model with stride/pool
flags.data_stride = total_stride
flags.data_shape = (total_stride * flags.window_stride_samples,)
# prepare mapping of index to word
audio_processor = input_data.AudioProcessor(flags)
index_to_label = {}
# labels used for training
for word in audio_processor.word_to_index.keys():
if audio_processor.word_to_index[word] == du.SILENCE_INDEX:
index_to_label[audio_processor.word_to_index[word]] = du.SILENCE_LABEL
elif audio_processor.word_to_index[word] == du.UNKNOWN_WORD_INDEX:
index_to_label[audio_processor.word_to_index[word]] = du.UNKNOWN_WORD_LABEL
else:
index_to_label[audio_processor.word_to_index[word]] = word
# training labels
index_to_label
# pad input audio with zeros, so that audio len = flags.desired_samples
padded_wav = np.pad(wav_data, (0, flags.desired_samples-len(wav_data)), 'constant')
# note: this rebinds the name input_data, shadowing the module imported
# earlier (the module is not needed again after this point)
input_data = np.expand_dims(padded_wav, 0)
input_data.shape
# create model with flag's parameters
model_non_stream_batch = models.MODELS[flags.model_name](flags)
# load model's weights
weights_name = 'best_weights'
model_non_stream_batch.load_weights(os.path.join(train_dir, weights_name))
tf.keras.utils.plot_model(
model_non_stream_batch,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
# convert model to inference mode with batch one
inference_batch_size = 1
tf.keras.backend.set_learning_phase(0)
flags.batch_size = inference_batch_size # set batch size
model_non_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE)
#model_non_stream.summary()
tf.keras.utils.plot_model(
model_non_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
predictions = model_non_stream.predict(input_data)
predicted_labels = np.argmax(predictions, axis=1)
predicted_labels
index_to_label[predicted_labels[0]]
# convert model to streaming mode
flags.batch_size = inference_batch_size # set batch size
model_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_INTERNAL_STATE_INFERENCE)
#model_stream.summary()
tf.keras.utils.plot_model(
model_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
stream_output_prediction = inference.run_stream_inference_classification(flags, model_stream, input_data)
stream_output_arg = np.argmax(stream_output_prediction)
stream_output_arg
index_to_label[stream_output_arg]
# convert model to streaming mode
flags.batch_size = inference_batch_size # set batch size
model_stream_external = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)
#model_stream.summary()
tf.keras.utils.plot_model(
model_stream_external,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
inputs = []
for s in range(len(model_stream_external.inputs)):
inputs.append(np.zeros(model_stream_external.inputs[s].shape, dtype=np.float32))
window_stride = flags.data_shape[0]
start = 0
end = window_stride
while end <= input_data.shape[1]:
# get new frame from stream of data
stream_update = input_data[:, start:end]
# update indexes of streamed updates
start = end
end = start + window_stride
# set input audio data (by default input data at index 0)
inputs[0] = stream_update
# run inference
outputs = model_stream_external.predict(inputs)
# get output states and set it back to input states
# which will be fed in the next inference cycle
for s in range(1, len(model_stream_external.inputs)):
inputs[s] = outputs[s]
stream_output_arg = np.argmax(outputs[0])
stream_output_arg
index_to_label[stream_output_arg]
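# Sanity check (sketch): streaming with external state should match the
# non-streaming prediction computed earlier for the same clip.
print(stream_output_arg == predicted_labels[0])  # expect True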
tflite_non_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE)
tflite_non_stream_fname = 'tflite_non_stream.tflite'
with open(os.path.join(MODEL_PATH, tflite_non_stream_fname), 'wb') as fd:
fd.write(tflite_non_streaming_model)
interpreter = tf.lite.Interpreter(model_content=tflite_non_streaming_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# set input audio data (by default input data at index 0)
interpreter.set_tensor(input_details[0]['index'], input_data.astype(np.float32))
# run inference
interpreter.invoke()
# get output: classification
out_tflite = interpreter.get_tensor(output_details[0]['index'])
out_tflite_argmax = np.argmax(out_tflite)
out_tflite_argmax
index_to_label[out_tflite_argmax]
tflite_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)
tflite_stream_fname = 'tflite_stream.tflite'
with open(os.path.join(MODEL_PATH, tflite_stream_fname), 'wb') as fd:
fd.write(tflite_streaming_model)
interpreter = tf.lite.Interpreter(model_content=tflite_streaming_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_states = []
for s in range(len(input_details)):
input_states.append(np.zeros(input_details[s]['shape'], dtype=np.float32))
out_tflite = inference.run_stream_inference_classification_tflite(flags, interpreter, input_data, input_states)
out_tflite_argmax = np.argmax(out_tflite[0])
index_to_label[out_tflite_argmax]
test.tflite_non_stream_model_accuracy(
flags,
MODEL_PATH,
tflite_model_name=tflite_non_stream_fname,
accuracy_name='tflite_non_stream_model_accuracy.txt')
test.tflite_stream_state_external_model_accuracy(
flags,
MODEL_PATH,
tflite_model_name=tflite_stream_fname,
accuracy_name='tflite_stream_state_external_model_accuracy.txt',
reset_state=True)
test.tflite_stream_state_external_model_accuracy(
flags,
MODEL_PATH,
tflite_model_name=tflite_stream_fname,
accuracy_name='tflite_stream_state_external_model_accuracy.txt',
reset_state=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Proportional Selection Base Class
Step2: Linear Walk
Step3: Bisecting Search
Step4: Stochastic Acceptance
Step5: First Demonstration
Step6: Second Demonstration
Step7: The graph immediately below plots the distribution of timings (as box plots) for each algorithm. I truncate the results, limiting the range to everything less than the 90th percentile. (I'll explain why momentarily.) Bisecting search appears to be the fastest and the most stable. This makes sense. It has nice worst-case properties.
Step8: The relationship between algorithms remains the same. But, the difference between linear walk and stochastic acceptance grows. Over the entire distribution, stochastic acceptance lags both linear walk and bisecting search.
Step9: Third Demonstration
Step10: The following graph plots the average time as a function of n, the number of elements in the distribution. There is nothing unexpected. Linear walk gets increasingly terrible. It's $O(n)$. Bisecting search outperforms stochastic acceptance. They appear to be converging. But, this convergence occurs at the extreme end of n. Few simulations sample over a distribution of 1,000,000 values.
Step11: Fourth Demonstration
|
<ASSISTANT_TASK:>
Python Code:
import random
from bisect import bisect_left
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
class PropSelection(object):
def __init__(self, n):
self._n = n
self._frequencies = [0] * n
    def copy_from(self, values):
        assert len(values) == self._n
        # delegates to __setitem__, which each subclass must provide
        for i, x in enumerate(values):
            self[i] = x
def __getitem__(self, i):
return self._frequencies[i]
def normalize(self):
pass
class LinearWalk(PropSelection):
def __init__(self, n):
super(LinearWalk, self).__init__(n)
self._total = 0
def __setitem__(self, i, x):
self._total += (x - self._frequencies[i])
self._frequencies[i] = x
def sample(self):
terminal_cdf_point = random.randint(0, self._total - 1)
accumulator = 0
for i, k in enumerate(self._frequencies):
accumulator += k
if accumulator > terminal_cdf_point:
return i
class BisectingSearch(PropSelection):
def __init__(self, n):
super(BisectingSearch, self).__init__(n)
self._cdf = None
self._total = 0
def __setitem__(self, i, x):
self._total += (x - self._frequencies[i])
self._frequencies[i] = x
def normalize(self):
total = float(sum(self._frequencies))
cdf = []
accumulator = 0.0
for x in self._frequencies:
accumulator += (x / float(total))
cdf.append(accumulator)
self._cdf = cdf
def sample(self):
return bisect_left(self._cdf, random.random())
class StochasticAcceptance(PropSelection):
def __init__(self, n):
super(StochasticAcceptance, self).__init__(n)
self._max_value = 0
def __setitem__(self, i, x):
last_x = self._frequencies[i]
if x > self._max_value:
self._max_value = float(x)
elif last_x == self._max_value and x < last_x:
self._max_value = float(max(self._frequencies))
self._frequencies[i] = x
def sample(self):
n = self._n
max_value = self._max_value
freqs = self._frequencies
while True:
i = int(n * random.random())
if random.random() < freqs[i] / max_value:
return i
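# Quick smoke test (a sketch with hypothetical data): a sampler should
# roughly reproduce a 1:3 frequency ratio.
demo = StochasticAcceptance(2)
demo.copy_from([1, 3])
demo.normalize()
counts = [0, 0]
for _ in range(4000):
    counts[demo.sample()] += 1
print(counts)  # roughly [1000, 3000]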
fig, ax = plt.subplots(1, 4, sharey=True, figsize=(10,2))
def plot_proportions(xs, ax, **kwargs):
xs = pd.Series(xs)
xs /= xs.sum()
return xs.plot(kind='bar', ax=ax, **kwargs)
def sample_and_plot(roulette_algo, ax, n_samples=10000, **kwargs):
samples = [roulette_algo.sample() for _ in range(n_samples)]
value_counts = pd.Series(samples).value_counts().sort_index()
props = (value_counts / value_counts.sum())
props.plot(kind='bar', ax=ax, **kwargs)
return samples
freqs = np.random.randint(1, 100, 10)
plot_proportions(freqs,
ax[0],
color=sns.color_palette()[1],
title="Target Distribution")
klasses = [LinearWalk, BisectingSearch, StochasticAcceptance]
for i, klass in enumerate(klasses):
algo = klass(len(freqs))
algo.copy_from(freqs)
algo.normalize()
name = algo.__class__.__name__
    xs = sample_and_plot(algo, ax=ax[i+1], title=name)
import timeit
def sample_n_times(algo, n):
samples = []
for _ in range(n):
start = timeit.default_timer()
algo.sample()
samples.append(timeit.default_timer() - start)
return np.array(samples)
timings = []
for i, klass in enumerate(klasses):
algo = klass(len(freqs))
algo.copy_from(freqs)
algo.normalize()
name = algo.__class__.__name__
timings.append((name, sample_n_times(algo, 10000)))
values = np.vstack([times for _, times in timings]).T
values = values[np.all(values < np.percentile(values, 90, axis=0), axis=1)]
sns.boxplot(values, names=[name for name, _ in timings]);
values = np.vstack([times for _, times in timings]).T
values = values[np.all(values > np.percentile(values, 90, axis=0), axis=1)]
sns.boxplot(values, names=[name for name, _ in timings]);
import timeit
def sample_n_times(algo, n):
samples = []
for _ in range(n):
start = timeit.default_timer()
algo.sample()
samples.append(timeit.default_timer() - start)
return np.array(samples)
averages = []
for n in [10, 100, 1000, 10000, 100000, 1000000]:
row = {'n': n}
freqs = np.random.randint(1, 100, n)
for i, klass in enumerate(klasses):
algo = klass(len(freqs))
algo.copy_from(freqs)
algo.normalize()
name = algo.__class__.__name__
row[name] = np.mean(sample_n_times(algo, 10000))
averages.append(row)
averages_df = pd.DataFrame(averages).set_index('n')
averages_df.plot(logy=True, logx=True,
                 style={'BisectingSearch': 'o-',
                        'LinearWalk': 's--',
                        'StochasticAcceptance': 'd:'})
plt.ylabel('Average runtime (s)');
import timeit
def normalize_sample_n_times(algo, n_samples, n):
samples = []
for _ in range(n_samples):
algo[random.randint(0, n-1)] = random.randint(1, 100)
start = timeit.default_timer()
algo.normalize()
algo.sample()
samples.append(timeit.default_timer() - start)
return np.array(samples)
averages = []
for n in [10, 100, 1000, 10000, 100000]:
row = {'n': n}
freqs = np.random.randint(1, 100, n)
for i, klass in enumerate(klasses):
algo = klass(len(freqs))
algo.copy_from(freqs)
algo.normalize()
name = algo.__class__.__name__
row[name] = np.mean(normalize_sample_n_times(algo, 1000, n))
averages.append(row)
averages_df = pd.DataFrame(averages).set_index('n')
averages_df.plot(logy=True, logx=True,
                 style={'BisectingSearch': 'o-',
                        'LinearWalk': 's--',
                        'StochasticAcceptance': 'd:'})
plt.ylabel('Average runtime (s)');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
# Create your dictionary that maps vocab words to integers here
vocab = set(words)
# start ids at 1 so that 0 stays free as the padding value
vocab_to_int = {w: i+1 for i, w in enumerate(vocab)}
vocab_to_int[''] = len(vocab)+1  # catch-all id for empty tokens
print('Vocab size: {}'.format(len(vocab)+1))
# Convert the reviews to integers, same shape as reviews list, but with integers
# the trailing "if review != ''" guard maps the final empty line to an empty list
reviews_ints = [[vocab_to_int[w] for w in review.split(' ') if review != ''] for review in reviews]
# Convert labels to 1s and 0s for 'positive' and 'negative'
print(labels[:30])
labels = labels.split('\n')
print(set(labels))
labels = [int(l == 'positive') for l in labels]
print(labels[:10])
print(labels[-1])
print(len(labels))
labels.pop(-1)
print(len(labels))
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
reviews_ints = [r for r in reviews_ints if r != []]
reviews_ints[:5]
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, r in enumerate(reviews_ints):
    r = r[:seq_len]  # truncate reviews longer than seq_len
    features[i, -len(r):] = r  # left-pad shorter reviews with zeros
features[:10,:100]
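# Sanity check (sketch): the padded matrix should be (num reviews, seq_len)
assert features.shape == (len(reviews_ints), seq_len)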
from sklearn.model_selection import train_test_split
print(features.shape)
print(len(labels))
split_frac = 0.8
labels = np.array(labels)
train_x, val_x, train_y, val_y = train_test_split(
features, labels, test_size=1.0-split_frac, random_state=42)
val_x, test_x, val_y, test_y = train_test_split(
val_x, val_y, test_size=0.5, random_state=42)
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab)+2  # ids run 1..len(vocab)+1, and 0 is reserved for padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, shape=[None,None],
name='inputs')
labels_ = tf.placeholder(tf.int32, shape=[None,None],
name = 'labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.truncated_normal((n_words, embed_size),
stddev=0.25))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
    # Build one fresh LSTM cell (with dropout) per layer; reusing a single
    # cell object across layers causes variable-reuse errors in TF 1.x
    def build_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(lstm_layers)])
    # Getting an initial state of all zeros
    initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
print(train_x)
print(train_x.shape)
print(np.array(train_y))
print(np.array(train_y).shape)
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
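# Example (sketch): pull one batch to confirm the shapes before training
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)  # expect (500, 200) and (500,)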
print(tf.__version__)
print()
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Intermediate Python - Objects
Step2: You can create your own object using the class keyword.
Step3: Why did we use the keyword class and not object? You can think of the class as a template for the object, and the object itself as an instance of the class. To create an object from a class, you use parentheses to instantiate the class.
Step4: Notice that Cow is a class and that elsie and annabelle are Cow objects. The text following "at" indicates where in memory these objects are stored. You might have to look closely, but elsie and annabelle are located at different locations in memory.
Step5: You can then call the method directly on the class.
Step6: While you can call talk() on the Cow class, you can't actually call talk() on any instances of Cow, such as elsie and annabelle.
Step7: Now talk can be called on objects of type Cow, but not on the Cow class itself.
Step8: Initialization
Step9: There are a few new concepts in the code above.
Step10: In the code above, we create an Animal class that has a generic implementation of the talk and eat functions that we created earlier. We then create a Cow object that implements its own talk function but relies on the Animal's eat function. We also create a Worm class that fully relies on Animal to provide talk and eat functions.
Step11: Exercises
Step12: Exercise 2
Step13: Exercise 3
Step14: Exercise 4
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
for data in (
1, # integer
3.5, # float
"Hello Python", # string
(1, "funny", "tuple"), # tuple
["a", "list"], # list
{"and": "a", "dict": 2} # dictionary
):
print("Is {} an object? {}".format(type(data), isinstance(data, object)))
class Cow:
pass
# Create an instance of Cow called elsie
elsie = Cow()
# Create an instance of Cow called annabelle
annabelle = Cow()
print(Cow)
print(elsie)
print(annabelle)
class Cow:
def talk():
print("Moo")
Cow.talk()
class Cow:
def talk(self):
print("Moo")
elsie = Cow()
elsie.talk()
class Cow:
def talk(self):
print("Moo")
def eat(self):
print("Crunch")
elsie = Cow()
elsie.eat()
elsie.talk()
class Cow:
def __init__(self, name):
self.__name = name
def talk(self):
print("{} says Moo".format(self.__name))
annie = Cow("Annabelle")
annie.talk()
elly = Cow("Elsie")
elly.talk()
class Animal:
def talk(self):
print("...") # The sound of silence
def eat(self):
print("crunch")
class Cow(Animal):
def talk(self):
print("Moo")
class Worm(Animal):
pass
cow = Cow()
worm = Worm()
cow.talk()
cow.eat()
worm.talk()
worm.eat()
class Animal:
def move(self):
pass
def eat(self):
pass
class Legless(Animal):
def move(self):
print("Wriggle wriggle")
class Legged(Animal):
def move(self):
print("Trot trot trot")
class Toothless(Animal):
def eat(self):
print("Slurp")
class Toothed(Animal):
def eat(self):
print("Chomp")
class Worm(Legless, Toothless):
pass
class Cow(Legged, Toothed):
pass
class Rock:
pass
def live(animal):
if isinstance(animal, Animal):
animal.move()
animal.eat()
w = Worm()
c = Cow()
r = Rock()
print("The worm goes...")
live(w)
print("The cow goes...")
live(c)
print("The rock goes...")
live(r)
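# Python resolves move() and eat() through the method resolution order
# (MRO); printing it shows why Worm finds Legless.move and Toothless.eat:
print(Worm.__mro__)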
# Your code goes here
# Your code goes here
# Your code goes here
# Your code goes here
class Vehicle:
def go():
pass
class Car:
def go():
print("Vroom!")
# No changes below here!
car = Car()
if isinstance(car, Vehicle):
car.go()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data from file into a data frame
Step2: Plot it
Step3: Other example
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import pandas as pd
from plotSlope import slope
data = pd.read_csv(os.path.join('data','EU_GDP_2007_2013.csv'),index_col=0,na_values='-')
(data/1000).head()
f = slope(data/1000,kind='interval',height= 12,width=20,font_size=12,dpi=150,savename='EU_interval.png',title = u'title')
color = {"France":'b','Germany':'r','Ireland':'chocolate','United Kingdom': 'purple'}
f = slope(data/1000, title = u'European GPD until 2010 and forecasts at market prices (billions of Euro) source : EUROSTAT',
kind='interval',height= 12,width=22,font_size=15,
savename='test.png',color=color,dpi=200)
f = slope(data/1000, title = u'European GPD until 2010 and forecasts at market prices (billions of Euro) source : EUROSTAT',
kind='interval',height= 12,width=30,font_size=20,
savename=None,color=color)
df = pd.DataFrame( np.random.normal(loc=np.ones(shape=[20,30])*np.arange(30)))
df.rename(columns = lambda el : str(el),index =lambda el : str(el),inplace=True)
f = slope(df.T,width =10,height= 8,kind='ordinal',savename=None,dpi=200,color={'10':'red','27':'blue'},marker=None)
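# Assuming 'kind' switches between value-scaled ('interval') and rank-based
# ('ordinal') placement, the GDP data can also be drawn as a rank chart
# (a sketch reusing only arguments seen in the calls above):
f = slope(data/1000, kind='ordinal', height=12, width=20, font_size=12, savename=None)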
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_split, target_split = source_text.split('\n'), target_text.split('\n')
source_to_int, target_to_int = [], []
for source, target in zip(source_split, target_split):
source_to_int.append([source_vocab_to_int[word] for word in source.split()])
targets = [target_vocab_to_int[word] for word in target.split()]
targets.append((target_vocab_to_int['<EOS>']))
target_to_int.append(targets)
#print(source_to_int, target_to_int)
return source_to_int, target_to_int
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
#max_tar_seq_len = np.max([len(sentence) for sentence in target_int_text])
#max_sour_seq_len = np.max([len(sentence) for sentence in source_int_text])
#max_source_len = np.max([max_tar_seq_len, max_sour_seq_len])
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_seq_len = tf.reduce_max(target_seq_len, name='target_sequence_length')
source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_probability, target_seq_len, max_target_seq_len, source_seq_len
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_seq = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output, state = tf.nn.dynamic_rnn(rnn, embed_seq, dtype=tf.float32)
return output, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=False, maximum_iterations=max_summary_length)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True, maximum_iterations=max_target_sequence_length)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
#embed_seq = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_output = decoding_layer_train(encoder_state, rnn, dec_embed_input,
target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
inference_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:max_target_sentence_length: Maximum target sequence lenght
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 254
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence = sentence.lower()
sentence_to_id = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split(' ')]
return sentence_to_id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read raw data
Step2: Time-frequency beamforming based on DICS
|
<ASSISTANT_TASK:>
Python Code:
# Author: Roman Goj <roman.goj@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.event import make_fixed_length_events
from mne.datasets import sample
from mne.time_frequency import csd_epochs
from mne.beamformer import tf_dics
from mne.viz import plot_source_spectrogram
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
noise_fname = data_path + '/MEG/sample/ernoise_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Pick a selection of magnetometer channels. A subset of all channels was used
# to speed up the example. For a solution based on all MEG channels use
# meg=True, selection=None and add mag=4e-12 to the reject dictionary.
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads',
selection=left_temporal_channels)
raw.pick_channels([raw.ch_names[pick] for pick in picks])
reject = dict(mag=4e-12)
# Re-normalize our empty-room projectors, which should be fine after
# subselection
raw.info.normalize_proj()
# Setting time windows. Note that tmin and tmax are set so that time-frequency
# beamforming will be performed for a wider range of time points than will
# later be displayed on the final spectrogram. This ensures that all time bins
# displayed represent an average of an equal number of time windows.
tmin, tmax, tstep = -0.55, 0.75, 0.05 # s
tmin_plot, tmax_plot = -0.3, 0.5 # s
# Read epochs
event_id = 1
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=None, preload=True, proj=True, reject=reject)
# Read empty room noise raw data
raw_noise = mne.io.read_raw_fif(noise_fname, preload=True)
raw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
raw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])
raw_noise.info.normalize_proj()
# Create noise epochs and make sure the number of noise epochs corresponds to
# the number of data epochs
events_noise = make_fixed_length_events(raw_noise, event_id)
epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,
tmax_plot, baseline=None, preload=True, proj=True,
reject=reject)
epochs_noise.info.normalize_proj()
epochs_noise.apply_proj()
# then make sure the number of epochs is the same
epochs_noise = epochs_noise[:len(epochs.events)]
# Read forward operator
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Read label
label = mne.read_label(fname_label)
# Setting frequency bins as in Dalal et al. 2008
freq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz
win_lengths = [0.3, 0.2, 0.15, 0.1] # s
# Then set FFTs length for each frequency range.
# Should be a power of 2 to be faster.
n_ffts = [256, 128, 128, 128]
# Subtract evoked response prior to computation?
subtract_evoked = False
# Calculating noise cross-spectral density from empty room noise for each
# frequency bin and the corresponding time window length. To calculate noise
# from the baseline period in the data, change epochs_noise to epochs
noise_csds = []
for freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):
noise_csd = csd_epochs(epochs_noise, mode='fourier',
fmin=freq_bin[0], fmax=freq_bin[1],
fsum=True, tmin=-win_length, tmax=0,
n_fft=n_fft)
noise_csds.append(noise_csd)
# Computing DICS solutions for time-frequency windows in a label in source
# space for faster computation, use label=None for full solution
stcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,
freq_bins=freq_bins, subtract_evoked=subtract_evoked,
n_ffts=n_ffts, reg=0.001, label=label)
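# `stcs` contains one SourceEstimate per frequency bin; a quick look at
# the data shapes (sketch) before plotting:
for stc, freq_bin in zip(stcs, freq_bins):
    print(freq_bin, stc.data.shape)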
# Plotting source spectrogram for source with maximum activity
# Note that tmin and tmax are set to display a time range that is smaller than
# the one for which beamforming estimates were calculated. This ensures that
# all time bins shown are a result of smoothing across an identical number of
# time windows.
plot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,
source_index=None, colorbar=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print(accuracy_score(outcomes[:5], predictions))
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print(accuracy_score(outcomes, predictions))
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# all women have survived
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print(accuracy_score(outcomes, predictions))
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# All women and male passengers of age < 10 have survived
        if passenger['Sex'] == 'female' or (passenger['Sex'] == 'male' and passenger['Age'] < 10):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print(accuracy_score(outcomes, predictions))
survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Pclass == 3"])
import random
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
        # all women and boys younger than 10 are predicted to survive, except
        # 3rd-class women older than 30 who embarked at Queenstown, and
        # 3rd-class women aged between 40 and 60.
        if ((passenger['Sex'] == 'female' and passenger['Embarked'] == 'Q'
             and passenger['Pclass'] == 3 and passenger['Age'] > 30)
            or (passenger['Pclass'] == 3 and passenger['Sex'] == 'female'
                and 40 < passenger['Age'] < 60)):
            predictions.append(0)
        elif (passenger['Sex'] == 'female'
              or (passenger['Sex'] == 'male' and passenger['Age'] < 10)):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can import pypmj and numpy. Since the parent directory, which contains the pypmj module, is not automatically on our path, we need to append it first.
Step2: Load the materials extension.
Step3: What this extension is for
Step4: Note
Step5: We demonstrate the capabilities using the gallium arsenide data set.
Step6: Get some metadata
Step7: The default for unitOfLength is 1., which corresponds to meters. We can get refractive index data (here called $n$-$k$-data, where $n$ is the real and $k$ the imaginary part of the complex refractive index) for specific wavelengths like this
Step8: Or for multiple wavelength values
Step9: Or we can get the permittivity.
Step10: We can also plot the complete known data to get an overview of the data set.
Step11: Or we can plot data in a specific wavelength range, this time also showing the known (tabulated) points to showcase the interpolation.
|
<ASSISTANT_TASK:>
Python Code:
import os
os.environ['PYPMJ_CONFIG_FILE'] = '/path/to/your/config.cfg'
import sys
sys.path.append('..')
import pypmj as jpy
import numpy as np
jpy.load_extension('materials')
jpy.MaterialData?
jpy.MaterialData.materials.keys()
GaAs = jpy.MaterialData(material = 'gallium_arsenide')
GaAs.getAllInfo()
wvl = 600.e-9 # = 600nm
GaAs.getNKdata(wvl)
wvls = np.linspace(600.e-9, 1000.e-9, 6) # = 600nm to 1000nm
GaAs.getNKdata(wvls)
GaAs.getPermittivity(wvls)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(8,6))
GaAs.plotData()
plt.figure(figsize=(8,6))
GaAs.plotData(wvlRange=(200.e-9, 1000.e-9), plotKnownValues=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <hr>
Step2: We obtained the isochrone from the image (green line). It at least passes close to the points obtained from photometry.
Step3: <hr>
|
<ASSISTANT_TASK:>
Python Code:
from astropy.io import ascii
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
tbl = ascii.read('n121_match.cat')
def transform(v,i):
c1f555 = [-0.09,-0.124]
c2f555 = [0.034,0.018]
c1f814 = [0.06,0.001]
c2f814 = [-0.099,0.013]
for j in range(8):
tcol = v-i
v = np.where(tcol<0.6, (tbl['f555wMAG']+c1f555[0]*tcol+c2f555[0]*tcol*tcol), (tbl['f555wMAG']+c1f555[1]*tcol+c2f555[1]*tcol*tcol))
i = np.where(tcol<0.1, (tbl['f814wMAG']+c1f814[0]*tcol+c2f814[0]*tcol*tcol), (tbl['f814wMAG']+c1f814[1]*tcol+c2f814[1]*tcol*tcol))
return v,i
tbl['V'], tbl['I'] = transform(tbl['f555wMAG'],tbl['f814wMAG'])
# Generate catalog
data = [tbl['V'],tbl['I'],tbl['f814wALPHA'],tbl['f814wDELTA']]
ascii.write(data,'n121_ubvri.cat',delimiter='\t')
# Generate plots
fig, ax = plt.subplots(2,1,figsize=(13,20))
for axis in ax:
axis.set_xlim(-0.5,1.5)
axis.set_ylim(24,16)
axis.set_xlabel('$V-I$',fontsize=20)
axis.set_ylabel('$V$',fontsize=20)
ax[0].plot(tbl['V']-tbl['I'],tbl['V'],'k.',alpha=0.4,ms=3)
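# Log-binned hexbin acts as a Hess diagram (stellar density in the CMD)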
hess = ax[1].hexbin(tbl['V']-tbl['I'],tbl['V'],bins='log',gridsize=220,mincnt=1)
leg = fig.colorbar(hess,ax=ax[1])
leg.set_label('Counts (log)')
plt.savefig('cmd_n121_ubvri', dpi=300)
plt.show()
plt.close()
from isochrones.dartmouth import Dartmouth_Isochrone
iso = Dartmouth_Isochrone(bands=['V','I'])
model = iso.isochrone(age=10.021189299,feh=-1.56,distance=61000,AV=0.1272,dm=1e-5)
fig, ax = plt.subplots(figsize=(10,10))
ax.set_xlim(-0.5,1.5)
ax.set_ylim(24,16)
ax.set_xlabel('$V-I$',fontsize=20)
ax.set_ylabel('$V$',fontsize=20)
ax.plot(tbl['V']-tbl['I'],tbl['V'],'k.',alpha=0.4,ms=3)
ax.plot(model.V_mag - model.I_mag,model.V_mag,'g',lw=2)
plt.show()
plt.close()
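# Ridge line: median V-I color in V-magnitude bins of width s, skipping
# the gap around V ~ 19.4-19.5 and excluding blue stars (V-I <= 0.4)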
s = 0.22
x = []
y1 = np.arange(16.5,19.4,step=s)
y2 = np.arange(19.5,23.5,step=s)
y = np.append(y1,y2)
for i in y:
a = np.where((i<tbl['V']) & (tbl['V']<i+s) & (tbl['V']-tbl['I']>0.4))
x.append(np.median(tbl['V'][a]-tbl['I'][a]))
fig, ax = plt.subplots(figsize=(10,10))
ax.set_xlim(-0.5,1.5)
ax.set_ylim(24,16)
ax.set_xlabel('$V-I$',fontsize=20)
ax.set_ylabel('$V$',fontsize=20)
ax.plot(tbl['V']-tbl['I'],tbl['V'],'k.',alpha=0.4,ms=3)
# Fit a polynomial to the ridge line
p = np.poly1d(np.polyfit(y,x,7))
ax.plot(p(y),y,'b',lw=2)
# New isochrone that comes close to our ridge line
iso = Dartmouth_Isochrone(bands=['V','I'])
model = iso.isochrone(age=9.89,feh=-1.55,distance=61000,dm=1e-5)
ax.plot(model.V_mag - model.I_mag,model.V_mag,'r',lw=2)
plt.show()
plt.close()
fig, ax = plt.subplots(figsize=(10,10))
ax.set_xlim(-0.5,1.5)
ax.set_ylim(24,16)
ax.set_xlabel('$V-I$',fontsize=20)
ax.set_ylabel('$V$',fontsize=20)
ax.plot(tbl['V']-tbl['I']+0.15,tbl['V']+0.15,'k.',alpha=0.4,ms=3)
# New isochrone that comes close to our ridge line
iso = Dartmouth_Isochrone(bands=['V','I'])
model = iso.isochrone(age=10.021189299,feh=-1.56,distance=61000,AV=0.1272,dm=1e-5)
ax.plot(model.V_mag - model.I_mag,model.V_mag,'r',lw=2)
plt.show()
plt.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.0 Basic parameters
Step2: 1.0 Run information transfer mapping procedure
Step4: 1.2 Perform network-to-network information transfer mapping procedure using python module
Step5: 1.2.2 Run using multiprocessing (parallel processing) to speed up computation
Step6: 2.0 Compute group statistics
Step7: 2.1 Perform multiple-comparisons correction (using the false discovery rate)
Step8: 2.2 Plot results
|
<ASSISTANT_TASK:>
Python Code:
import sys
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import multiprocessing as mp
%matplotlib inline
import os
os.environ['OMP_NUM_THREADS'] = str(1)
import warnings
warnings.filterwarnings('ignore')
import networkinformationtransfer as n2n
from matplotlib.colors import Normalize
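# MidpointNormalize pins `midpoint` to the center of the colormap, so a
# diverging colormap (e.g. 'seismic') stays anchored at zero in imshow.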
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
# Set basic parameters
datadir = './data/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
# Load in network array
networkdef = np.loadtxt(datadir + 'network_array.csv', delimiter=',')
# Load in network keys (each network associated with a number in network array)
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud1':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1 (merging two auditory networks)
aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
networkdef[aud2_ind] = networkmappings['aud1']
# Redefine new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
fcmat = {}
for subj in subjNums:
fcmat[subj] = np.loadtxt('data/FC_Estimates/' + subj + '_multregconn_restfc.csv', delimiter=',')
def informationTransferMappingWrapper((subj,fcmat)):
"""A wrapper so we can use multiprocessing to run subjects in parallel."""
out = n2n.networkToNetworkInformationTransferMapping(subj,fcmat,null=False)
return out
inputs = []
for subj in subjNums:
inputs.append((subj,fcmat[subj]))
pool = mp.Pool(processes=32)
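# map_async(...).get() blocks until all subjects finish and returns the
# results in the same order as `inputs`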
results = pool.map_async(informationTransferMappingWrapper,inputs).get()
pool.close()
pool.join()
# Collect results
ruledims = ['logic','sensory','motor']
ite_matrix = {}
for ruledim in ruledims:
ite_matrix[ruledim] = np.zeros((len(networkmappings),len(networkmappings),len(subjNums)))
scount = 0
for result in results:
for ruledim in ruledims:
ite_matrix[ruledim][:,:,scount] = result[ruledim]
scount += 1
# Create dictionary that reflects network ordering for matrix rows and columns
netkeys = {0:'vis',1:'smn',2:'con',3:'dmn',4:'fpn', 5:'aud', 6:'dan'}
num_networks=len(netkeys)
baseline = 0.0
avg_rho = {}
tstats = {}
pvals = {}
for ruledim in ruledims:
avg_rho[ruledim] = np.zeros((num_networks,num_networks))
tstats[ruledim] = np.zeros((num_networks,num_networks))
pvals[ruledim] = np.zeros((num_networks,num_networks))
for net1 in netkeys:
for net2 in netkeys:
# Skip if net1 and net2
if net1==net2:
avg_rho[ruledim][net1,net2] = np.nan
tstats[ruledim][net1,net2] = np.nan
pvals[ruledim][net1,net2] = np.nan
continue
# Store results
avg_rho[ruledim][net1,net2] = np.mean(ite_matrix[ruledim][net1,net2,:])
t, p = stats.ttest_1samp(ite_matrix[ruledim][net1,net2,:],0)
# One-sided t-test
tstats[ruledim][net1,net2] = t
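# Convert the two-sided p-value to a one-sided test for transfer > 0:
# halve p when t > 0, otherwise the one-sided p-value is 1 - p/2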
if t>0:
p=p/2.0
else:
p = 1-p/2.0
pvals[ruledim][net1,net2] = p
# Compute group stats
baseline = 0.0
triu_indices = np.triu_indices(len(networkmappings),k=1)
tril_indices = np.tril_indices(len(networkmappings),k=-1)
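# Pool all off-diagonal p-values (upper and lower triangles) into a
# single FDR correction; the diagonal (self-transfer) is excluded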
qmat = {}
for ruledim in ruledims:
qmat[ruledim] = np.zeros((num_networks,num_networks))
tmpq = []
tmpq.extend(pvals[ruledim][triu_indices])
tmpq.extend(pvals[ruledim][tril_indices])
tmpq = mc.fdrcorrection0(tmpq)[1]
qmat[ruledim][triu_indices] = tmpq[0:len(triu_indices[0])]
qmat[ruledim][tril_indices] = tmpq[len(triu_indices[0]):]
for ruledim in ruledims:
plt.figure(figsize=(12,10))
# First visualize unthresholded results
plt.subplot(121)
plt.title('NetworkToNetwork Information Transfer Mapping\n' + ruledim + ' domain', fontsize=14, y=1.04)
mat = avg_rho[ruledim]
np.fill_diagonal(mat,0)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat, origin='lower',norm=norm, vmin=0, cmap='seismic', interpolation='none')
plt.xticks(netkeys.keys(),netkeys.values())
plt.yticks(netkeys.keys(),netkeys.values())
plt.ylabel('Source Network',fontsize=16)
plt.xlabel('Target Network',fontsize=16)
plt.colorbar(fraction=.046)
# Next visualize thresholded results (after multiple comparisons)
plt.subplot(122)
plt.title('NetworkToNetwork Information Transfer Mapping\n' + ruledim + ' domain' , fontsize=14, y=1.04)
mat = avg_rho[ruledim]
thresh = qmat[ruledim] < 0.05
# Threshold using q < 0.05
mat = np.multiply(mat,thresh)
np.fill_diagonal(mat,0)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat, origin='lower',norm=norm, vmin=0, cmap='seismic', interpolation='none')
plt.xticks(netkeys.keys(),netkeys.values())
plt.yticks(netkeys.keys(),netkeys.values())
plt.ylabel('Source Network',fontsize=16)
plt.xlabel('Target Network',fontsize=16)
plt.colorbar(fraction=.046)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-2', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='loading'></a>
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Step8: <a id="multiscale"></a>
Step9: <a id="laplacian"></a>
Step10: <a id="playing"></a>
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
Step14: Let's load an image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from the Caffe implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
|
<ASSISTANT_TASK:>
Python Code:
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
    iframe = """
        <iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
    display(HTML(iframe))
# Visualizing the network graph. Be sure expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
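# The next cell builds the kernel k5x5: the outer product of the binomial row
# [1,4,6,4,1] gives a 5x5 approximately-Gaussian blur kernel, which is
# normalized and placed on the channel diagonal via np.eye(3) so each colour
# channel is blurred independently; it drives the Laplacian pyramid split/merge below.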
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
render_lapnorm(T(layer)[:,:,:,65])
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
render_deepdream(T(layer)[:,:,:,139], img0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This part of the exercise is straight from the previous team project. We use the meetup.com API to get the ChiPy members who RSVPed for one event.
Step2: Now let's load the data into a pandas dataframe.
Step3: What do the first and last 10 rows of the dataset look like?
Step4: Test analyze_image with - http
Step5: Test determine_gender function
|
<ASSISTANT_TASK:>
Python Code:
!pip3 install meetup-api pandas pytest matplotlib clarifai
import meetup.api
import pandas as pd
API_KEY = ''
event_id=''
def get_members(event_id):
client = meetup.api.Client(API_KEY)
rsvps=client.GetRsvps(event_id=event_id, urlname='_ChiPy_')
member_id = ','.join([str(i['member']['member_id']) for i in rsvps.results])
return client.GetMembers(member_id=member_id)
def load_members_to_data_frame(event_id):
members = get_members(event_id=event_id)
columns=['name','id','thumb_link']
data = []
for member in members.results:
try:
data.append([member['name'], member['id'], member['photo']['thumb_link']])
except:
print('Discard incomplete profile')
return pd.DataFrame(data=data, columns=columns)
df=load_members_to_data_frame(event_id=event_id)
client_id, client_secret = '', '' #your keys here
from clarifai.rest import ClarifaiApp
def analyze_image(url):
app = ClarifaiApp(client_id, client_secret)
model = app.models.get("general-v1.3")
return model.predict_by_url(url=url)
def determine_gender(url):
return 'M'
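# Note: determine_gender above is deliberately a placeholder stub (it always
# returns 'M', which is exactly what the smoke test below asserts); a real
# implementation would inspect the concepts returned by analyze_image.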
assert determine_gender('iron_man') == 'M'
from IPython.display import Image, display, HTML
pd.set_option('display.max_colwidth', -1)
df['pic']=df.thumb_link.map(lambda x:'<img src="{0}" height=80 width=80 />'.format(x))
HTML(df[['name','pic']].to_html(escape=False))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing the eigenvalues and the eigenvectors
Step2: The @ operator stands, in this context, for matrix multiplication.
Step3: Modal Response
Step4: The definition of the time vector is a bit complicated...
Step5: The following code cell (which is executed before any other code cell by the notebook) loads libraries (or functions from libraries) and determines the style of the plots and of the notebook itself. The cell also defines a function to conveniently format matrices and vectors.
|
<ASSISTANT_TASK:>
Python Code:
M = np.array(((2.0, 0.0), ( 0.0, 1.0)))
K = np.array(((3.0,-2.0), (-2.0, 2.0)))
p = np.array(( 0.0, 1.0)); w = 2.0
print_mat(M, pre='\\boldsymbol{M}=m\\,', fmt='%d')
print_mat(K, pre='\\boldsymbol{K}=k\\,', fmt='%d')
print_mat(p[:,None], pre=r'\boldsymbol{p}(t) = p_0\,', fmt='%d',
post='\\sin(%d\\omega_0t)'%w, mt='B')
evals, Psi = eigh(K, M)
Mstar = Psi.T@M@Psi
Kstar = Psi.T@K@Psi
pstar = Psi.T@p
print_mat(evals[None,:], mt='p', pre=r'\omega^2_i=\omega^2_o\,')
print_mat(Psi, pre=r'\boldsymbol{\Psi}=')
print_mat(Mstar, pre=r'\boldsymbol{M}^\star=m\,')
print_mat(Kstar, pre=r'\boldsymbol{K}^\star=k\,')
print_mat(pstar[:,None], pre=r'\boldsymbol{p}^\star=p_o\,', mt='B')
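# Sanity check (our addition): scipy's eigh(K, M) returns M-orthonormal
# eigenvectors, so Psi.T @ M @ Psi should be the identity matrix.
print(np.allclose(Psi.T@M@Psi, np.eye(2)))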
L = np.sqrt(evals)
DAF = 1.0/(L**2-w**2)
beta = w/L
t = np.linspace(0,60,601)[:,None]
q = pstar*DAF*(np.sin(w*t)-beta*np.sin(L*t))
curves = plt.plot(t,q)
plt.legend(curves,['q1', 'q2'])
plt.title('Modal Response')
plt.xlabel('$\omega_0t$')
plt.ylabel('$q_i/\Delta_{st}$');
x = (Psi@q.T).T
curves = plt.plot(t, x)
plt.legend(curves,['x1', 'x2'])
plt.title('Structural Response')
plt.xlabel('$\omega_0t$')
plt.ylabel('$X_i/\Delta_{st}$');
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use(['fivethirtyeight', '00_mplrc'])
import numpy as np ; from scipy.linalg import eigh
np.set_printoptions(suppress=False, linewidth=120)
from IPython.display import Latex
def print_mat(mat, pre='', post='', fmt='%.6f', mt='b'):
display(Latex(
r'$$' + pre + r'\begin{%smatrix}'%mt +
r'\\'.join('&'.join(fmt%x for x in row) for row in mat) +
r'\end{%smatrix}'%mt + post + r'$$'))
import sympy as sy
sy.init_printing(use_latex=1)
o = sy.symbols('Omega')
display(o)
sM = sy.Matrix(((2,0,),(0,1)))
sK = sy.Matrix(((3, -2),(-2,2)))
KooM = sK - o*sM
iKooM = KooM.inv()
sp = sy.Matrix(((0,),(1,)))
a,b=(iKooM*sp)
a.expand().simplify(), b.expand().simplify()
with plt.style.context('classic'):
plot = sy.plot(a, b, (o, 0, 5), ylim=(-10,10),
show=False, adaptive=False, nb_of_points=601)
plot[0].line_color = 'black'; plot[0].label = '$x_1$'
plot[1].line_color = 'red' ; plot[1].label = '$x_2$'
plot.xlabel = '$\\beta^2$'; plot.ylabel = r'$x_{i,\mathrm{ss}}/\Delta_\mathrm{st}$'
plot.legend = True
plot.show()
from IPython.display import HTML
HTML(open('00_custom.css').read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recreate the following scatter plot of b against a.
Step2: Create a histogram of column 'a'.
Step3: The plots look good, but we want them to look a bit more professional, so use the 'ggplot' style sheet and generate the histogram again; also look into how to add more bins.
Step4: Create a box plot comparing columns 'a' and 'b'.
Step5: Create a KDE plot of column 'd'
Step6: Create an area plot of all the columns, using up to 30 rows (tip
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
df3 = pd.read_csv('../data/df3')
%matplotlib inline
df3.plot.scatter(x='a',y='b',c='red',s=50)
df3.info()
df3.head()
df3.plot.scatter(x='a',y='b',c='red',s=50,figsize=(12,3))
df3['a'].plot.hist()
plt.style.use('ggplot')
df3['a'].plot.hist(alpha=0.5,bins=25)
df3[['a','b']].plot.box()
df3['d'].plot.kde()
df3.loc[0:30].plot.area(alpha=0.4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Distribution of Classes
Step2: Maps of Classes
Step3: Here are the distribution maps of galaxies, stars, and quasars, respectively.
Step4: Photometry vs Spectroscopy
Step5: PCA and Dimensionality Reduction
|
<ASSISTANT_TASK:>
Python Code:
# remove after testing
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from urllib.request import urlopen
from sklearn.decomposition import PCA
from mclearn.viz import (plot_class_distribution,
plot_hex_map,
plot_filters_and_spectrum,
plot_scatter_with_classes)
from mclearn.preprocessing import balanced_train_test_split
%matplotlib inline
sns.set_style('ticks')
fig_dir = '../thesis/figures/'
target_col = 'class'
sdss_features = ['psfMag_r_w14', 'psf_u_g_w14', 'psf_g_r_w14', 'psf_r_i_w14',
'psf_i_z_w14', 'petroMag_r_w14', 'petro_u_g_w14', 'petro_g_r_w14',
'petro_r_i_w14', 'petro_i_z_w14', 'petroRad_r']
vstatlas_features = ['rmagC', 'umg', 'gmr', 'rmi', 'imz', 'rmw1', 'w1m2']
sdss = pd.read_hdf('../data/sdss.h5', 'sdss')
vstatlas = pd.read_hdf('../data/vstatlas.h5', 'vstatlas')
fig = plt.figure(figsize=(5, 5))
ax = plot_class_distribution(sdss[target_col])
ax.tick_params(top='off', right='off')
fig.savefig(fig_dir + '2_astro/sdss_class_distribution.pdf', bbox_inches='tight')
fig = plt.figure(figsize=(5, 5))
ax = plot_class_distribution(vstatlas[target_col])
ax.tick_params(top='off', right='off')
fig.savefig(fig_dir + '2_astro/vstatlas_class_distribution.pdf', bbox_inches='tight')
sdss[target_col].value_counts()
0.3*(25604+ 6559+ 2303+590) # presumably 30% of the class counts printed above
vstatlas[target_col].value_counts()
fig = plt.figure(figsize=(10,5))
zero_values = np.zeros(1)
ax = plot_hex_map(zero_values, zero_values, axisbg=None, colorbar=False, labels=True)
fig.savefig(fig_dir + '2_astro/mollweide_map.pdf', bbox_inches='tight')
# make Boolean index of each object
is_galaxy = sdss[target_col] == 'Galaxy'
is_star = sdss[target_col] == 'Star'
is_quasar = sdss[target_col] == 'Quasar'
# extract the coordinates of each object
galaxy_ra, galaxy_dec = sdss[is_galaxy]['ra'], sdss[is_galaxy]['dec']
star_ra, star_dec = sdss[is_star]['ra'], sdss[is_star]['dec']
quasar_ra, quasar_dec = sdss[is_quasar]['ra'], sdss[is_quasar]['dec']
# plot galaxy map
fig = plt.figure(figsize=(10,5))
ax = plot_hex_map(galaxy_ra, galaxy_dec)
fig.savefig(fig_dir + '4_expt1/sdss_train_galaxies.png', bbox_inches='tight', dpi=300)
# plot star map
fig = plt.figure(figsize=(10,5))
ax = plot_hex_map(star_ra, star_dec)
fig.savefig(fig_dir + '4_expt1/sdss_train_stars.png', bbox_inches='tight', dpi=300)
# plot quasar map
fig = plt.figure(figsize=(10,5))
ax = plot_hex_map(quasar_ra, quasar_dec)
fig.savefig(fig_dir + '4_expt1/sdss_train_quasars.png', bbox_inches='tight', dpi=300)
vega_url = 'http://www.astro.washington.edu/users/ivezic/DMbook/data/1732526_nic_002.ascii'
ugriz_filter_url = 'http://www.sdss.org/dr7/instruments/imager/filters/%s.dat'
filter_dir = '../data/filters'
spectra_dir = '../data/spectra'
fig = plt.figure(figsize=(10,5))
ax = plot_filters_and_spectrum(ugriz_filter_url, vega_url, filter_dir=filter_dir, spectra_dir=spectra_dir)
fig.savefig(fig_dir + '2_astro/vega_filters_and_spectrum.pdf', bbox_inches='tight')
X_train, X_test, y_train, y_test = balanced_train_test_split(
sdss[sdss_features], sdss[target_col], train_size=200000, test_size=100000, random_state=2)
pca = PCA(n_components=2)
projection = pca.fit_transform(X_train)
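# Quick check (our addition): how much of the variance the two principal
# components capture, individually and in total.
print(pca.explained_variance_ratio_, pca.explained_variance_ratio_.sum())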
classes = ['Galaxy', 'Quasar', 'Star']
fig = plt.figure(figsize=(10, 5))
ax = plot_scatter_with_classes(projection, y_train, classes)
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
fig.savefig(fig_dir + '4_expt1/sdss_pca_all.png', bbox_inches='tight', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
from scipy import stats
import numpy as np
np.random.seed(42)
x = np.random.normal(0, 1, 1000)
y = np.random.normal(0, 1, 1000)
statistic, p_value = stats.ks_2samp(x, y)
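# A minimal sketch of interpreting the result (the 0.05 threshold is our
# assumption, not part of the original snippet):
print("KS statistic: %.4f, p-value: %.4f" % (statistic, p_value))
if p_value < 0.05:
    print("Reject the null: the samples appear to come from different distributions.")
else:
    print("Cannot reject the null: the samples are consistent with one distribution.")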
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pairgrids
Step2: lmplot() for scatter and regression per category
Step3: FacetGrid
Step4: Suppose we want to visualize total_bill by time of day and whether or not the customer was a smoker. You would need to filter the data and then make dist plots; you can do all of that in one step with FacetGrid.
Step5: Customizing grids
Step6: Fig and font size
Step7: Using seaborn context
Step8: Another way to set the size is to access the fig handle directly
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
iris= sns.load_dataset('iris')
iris.head()
grd = sns.PairGrid(data=iris)
#then you can assign what you want plotted for diagonal, above diagonal, below diagonal.
# when mapping, pass just function pointers, dont call the function itself.
grd.map_diag(sns.distplot)
grd.map_upper(plt.scatter)
grd.map_lower(sns.kdeplot)
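# Note: set1 used in the next cell is a dataframe from a different example
# (storm-season data, judging by the column names) that is assumed to be
# loaded elsewhere; the cell illustrates the lmplot keyword arguments rather
# than running on iris.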
pgrid = sns.lmplot(x='min_season', y='min_pressure_merged',
col='any_basin', # the column by which you need to split - needs to be categorical
data=set1,
col_wrap=3, # number of columns per row
sharex=False, sharey=False, # will repeat ticks, coords for each plot
line_kws={'color':'green'} # symbol for regression line
)
#load tips data
tips = sns.load_dataset('tips')
tips.head()
#for each unique value in `time` you get a row and
# each unique value in `smoker` you get a col
fg = sns.FacetGrid(data=tips, row='time', col='smoker')
#now map a plot for each of the grid
fg.map(sns.distplot, 'total_bill')
sns.set_style(style='ticks') #ticks, white, dark, darkgrid, whitegrid
#redraw the facet grid from above
fg = sns.FacetGrid(data=tips, row='time', col='smoker')
#now map a plot for each of the grid
fg.map(sns.distplot, 'total_bill')
plt.figure(figsize=(5,5)) #generate a fig, sns will piggyback this with the plot
sns.distplot(tips['total_bill'])
sns.set_context(context='poster', font_scale=0.8)
# valid contexts = paper, notebook, talk, poster -
# with notebook being 1:1 and paper being smaller and poster being largest
#draw the facet grid
fg = sns.FacetGrid(data=tips, row='smoker', col='time')
fg.map(sns.distplot, 'total_bill')
#draw the facet grid
fg = sns.FacetGrid(data=tips, row='smoker', col='time')
#set the size
fg.fig.set_size_inches(w=10, h=10)
#plot the fig
fg.map(sns.distplot, 'total_bill')
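# map also accepts bivariate plot functions; a small additional example (our
# addition, using the same tips columns) scattering tip against total_bill
# in each facet:
fg2 = sns.FacetGrid(data=tips, row='smoker', col='time')
fg2.map(plt.scatter, 'total_bill', 'tip')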
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explain how the huberized hinge loss relates to the regular hinge loss and to the misclassification error loss.
Step2: 2.1.3 Numerical checks
Step3: 2.1.4 Gradient Descent
Step4: Generate synthetic data for binary classification. Each class is modelled as a Gaussian distribution, with 500 examples for training and 500 for testing. Make sure
Step5: Normalize your data.
Step6: Here you will use a linear SVM with the huberized hinge loss, trained using your gradient descent algorithm. Write a function my-svm for the linear SVM that can be used for training (by calling my-gradient-descent) and testing.
Step7: Run experiments for various values of the fixed step-size η.
Step8: Visualise the linear separation learned by your Linear SVM
Step9: Plot the objective function vs the iterations, as the gradient
Step10: Implement backtracking line search (google it). Profile your
Step11: Add several options to my-svm that allow the user to choose between the different stopping
Step12: 2.1.5 Stochastic Gradient Descent
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC
x = np.linspace(-2.0, 2.0, num=100)
def huberizedHingeLoss(x, h):
if x > 1+h:
return 0
elif abs(1-x) <= h:
return ((1+h-x)**2)/(4*h)
else:
return 1-x
def hingeLoss(x):
return max([0, 1-x])
def misclassLoss(x):
return 1 if x <= 0 else 0
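# Quick numerical sanity check (our addition, using h=0.5 as in the plot
# below): the huberized hinge loss should be continuous where its pieces
# meet, at x = 1-h and x = 1+h.
for x0 in (0.5, 1.5):
    eps = 1e-6
    print(x0, huberizedHingeLoss(x0 - eps, 0.5), huberizedHingeLoss(x0 + eps, 0.5))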
hub, = plt.plot(x, [huberizedHingeLoss(i, 0.5) for i in x], 'b-', label='Huberized Hinge Loss')
hinge, = plt.plot(x, [hingeLoss(i) for i in x], 'k-', label='Hinge Loss')
mis, = plt.plot(x, [misclassLoss(i) for i in x], 'r-', label='Misclassification Error Loss')
plt.legend(handles=[hub,hinge,mis], loc=0)
plt.ylabel('Loss value')
plt.xlabel('Model output')
def compute_obj(x, y, w, C=1.0, h=0.5):
loss = np.vectorize(huberizedHingeLoss, excluded=['h'])
return np.dot(w, w) + (C/float(x.shape[0]))*sum(loss(y*np.dot(x,w), h))
def compute_grad(x, y, w, C=1.0, h=0.5):
p = y*np.dot(x, w)
gradW = np.zeros(w.shape[0], dtype=float)
def gradHuberHinge(i, j):
if p[i] > 1+h:
return 0
elif abs(1-p[i]) <= h:
return ((1+h-p[i])/(2*h))*(-y[i]*x[i][j])
else:
return (-y[i]*x[i][j])
for j in range(w.shape[0]):
sum_over_i = 0.0
for i in range(x.shape[0]):
sum_over_i += gradHuberHinge(i,j)
gradW[j] = 2*w[j] + (C/float(x.shape[0]))*sum_over_i
return gradW
def add_bias_column(x):
return np.append(x, np.ones(x.shape[0]).reshape(x.shape[0],1), axis=1)
n_samples = 1000
n_features = 30
x, y = make_blobs(n_samples, n_features, centers=2)
y[y==0] = -1
x = add_bias_column(x)
w = np.zeros(n_features+1)
compute_obj(x, y, w)
compute_grad(x, y, w)
def grad_checker(x, y, w, C=1.0, h=0.5, epsilon=1e-6):
orig_grad = compute_grad(x, y, w, C, h)
for i in range(w.shape[0]):
wplus = np.copy(w)
wneg = np.copy(w)
wplus[i] += epsilon
wneg[i] -= epsilon
new_grad = (compute_obj(x, y, wplus, C, h) - compute_obj(x, y, wneg, C, h))/(2*epsilon)
if abs(new_grad - orig_grad[i]) > epsilon:
print "Fails at weight ", i
print "gradient from input function ", orig_grad[i]
print "gradient from approximation", new_grad
return
print "compute_grad is correct"
grad_checker(x,y,w)
def my_gradient_descent(x, y, F, dF, eta=0.001, maxiter=1000):
w = np.zeros(x.shape[1])
for i in range(maxiter):
w = w - eta*dF(x,y,w)
return w
%time my_gradient_descent(x, y, compute_obj, compute_grad);
def dataset_fixed_cov(n,dim):
'''Generate 2 Gaussians samples with the same covariance matrix'''
C = np.array([[0., -0.23], [0.83, .23]])
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C) + np.array([1, 1])]
y = np.hstack((-np.ones(n), np.ones(n)))
return X, y
x_train,y_train = dataset_fixed_cov(250,2)
plt.plot(x_train[:250,0],x_train[:250,1], 'o', color='red')
plt.plot(x_train[250:,0],x_train[250:,1], 'o', color='blue')
x_test,y_test = dataset_fixed_cov(250,2)
plt.plot(x_test[:250,0],x_test[:250,1], 'o', color='red')
plt.plot(x_test[250:,0],x_test[250:,1], 'o', color='blue')
scaler = preprocessing.StandardScaler().fit(x_train)
x_train = scaler.transform(x_train)  # transform returns a new array; assign it
x_test = scaler.transform(x_test)
class my_svm(object):
def __init__(self):
self.learnt_w = None
def fit(self, x_train, y_train, eta=0.01, max_iter=1000):
x_copy = add_bias_column(x_train)
self.learnt_w = my_gradient_descent(x_copy, y_train, compute_obj, compute_grad, eta, max_iter)
def predict(self, x_test):
x_copy = add_bias_column(x_test)
y = np.dot(x_copy, self.learnt_w)
y[y<0] = -1
y[y>0] = 1
return y
def score(self, x_test, y_test):
y_predict = self.predict(x_test)
bools = y_predict == y_test
accuracy = bools[bools == True].shape[0]/float(bools.shape[0])
return accuracy
svm = my_svm()
for k in range(0,9):
eta = 0.1**k
svm.fit(x_train, y_train, eta)
print eta, svm.score(x_test, y_test)
svm = my_svm()
svm.fit(x_train, y_train, 0.01)
line = svm.learnt_w
plt.plot(x_test[:250,0],x_test[:250,1], 'o', color='red')
plt.plot(x_test[250:,0],x_test[250:,1], 'o', color='blue')
xx = np.linspace(-3, 3)
yy = ((-line[0]/line[1])*xx)+(-line[2]/line[1]) # y = (-a/b)*x + (-c/b)
plt.plot(xx, yy)
def modified_gradient_descent(x, y, F, dF, eta=0.01, maxiter=1000):
w = np.zeros(x.shape[1])
F_vals = np.zeros(maxiter)
for i in range(maxiter):
w = w - eta*dF(x,y,w)
F_vals[i] = F(x,y,w)
return w, F_vals
x_copy = add_bias_column(x_train)
_, F_vals = modified_gradient_descent(x_copy, y_train, compute_obj, compute_grad)
iterations = np.arange(1000)
plt.plot(iterations, F_vals)
def backtracked_gradient_descent(x, y, F, dF, maxiter=100):
w = np.zeros(x.shape[1])
beta = 0.8
F_vals = np.zeros(maxiter)
for i in range(maxiter):
eta = 1
val = F(x,y,w)
grad = dF(x,y,w)
while F(x, y, (w - eta * grad)) > val - ((eta/2.) * grad.dot(grad)):
eta = beta * eta
#print eta
w = w - eta*grad
F_vals[i] = F(x,y,w)
return w, F_vals
x_copy = add_bias_column(x_train)
_, F_vals = backtracked_gradient_descent(x_copy, y_train, compute_obj, compute_grad)
iterations = np.arange(100)
plt.plot(iterations, F_vals)
class my_svm(object):
def huberizedHingeLoss(self, x, h):
if x > 1+h:
return 0
elif abs(1-x) <= h:
return ((1+h-x)**2)/(4*h)
else:
return 1-x
def add_bias_column(self, x):
return np.append(x, np.ones(x.shape[0]).reshape(x.shape[0],1), axis=1)
def compute_obj(self, x, y, w, C=1.0, h=0.5):
loss = np.vectorize(self.huberizedHingeLoss, excluded=['h'])
return np.dot(w, w) + (C/float(x.shape[0]))*sum(loss(y*np.dot(x,w), h))
def compute_grad(self, x, y, w, C=1.0, h=0.5):
p = y*np.dot(x, w)
gradW = np.zeros(w.shape[0], dtype=float)
def gradHuberHinge(i, j):
if p[i] > 1+h:
return 0
elif abs(1-p[i]) <= h:
return ((1+h-p[i])/(2*h))*(-y[i]*x[i][j])
else:
return (-y[i]*x[i][j])
for j in range(w.shape[0]):
sum_over_i = 0.0
for i in range(x.shape[0]):
sum_over_i += gradHuberHinge(i,j)
gradW[j] = 2*w[j] + (C/float(x.shape[0]))*sum_over_i
return gradW
def __init__(self, stop_criteria="iter", eta=0.01, max_iter=1000, epsilon=1e-3):
self.learnt_w = None
self.stop_criteria = stop_criteria
self.eta = eta
# i) maximum number of iterations
self.max_iter = max_iter
# ii) optimization-based criterion
self.epsilon = epsilon
def fit(self, x_train, y_train):
x = self.add_bias_column(x_train)
y = y_train
w = np.zeros(x.shape[1])
F = self.compute_obj
dF = self.compute_grad
        if self.stop_criteria == "iter":
            beta = 0.8  # backtracking shrink factor (was undefined in the original)
            for i in range(self.max_iter):
                # backtracking line search to choose the step size each iteration
                eta = 1
                val = F(x, y, w)
                grad = dF(x, y, w)
                while F(x, y, (w - eta * grad)) > val - ((eta/2.) * grad.dot(grad)):
                    eta = beta * eta
                w = w - eta * grad
elif self.stop_criteria == "opt":
grad = dF(x,y,w)
while np.sqrt(grad.dot(grad)) > self.epsilon:
#print F(x,y,w)
#print w, self.eta * dF(x,y,w)
w = w - self.eta * grad
grad = dF(x,y,w)
self.learnt_w = w
def predict(self, x_test):
x = self.add_bias_column(x_test)
y = np.dot(x, self.learnt_w)
y[y<0] = -1
y[y>0] = 1
return y
def score(self, x_test, y_test):
y_predict = self.predict(x_test)
bools = y_predict == y_test
accuracy = bools[bools == True].shape[0]/float(bools.shape[0])
return accuracy
def dataset_fixed_cov(n,dim):
'''Generate 2 Gaussians samples with the same covariance matrix'''
C = np.array([[-0.8, 0.2], [0.8, 0.2]])
X = np.r_[np.dot(np.random.randn(n, dim), C) + np.array([1, -1]),
np.dot(np.random.randn(n, dim), C) + np.array([-1, 1])]
y = np.hstack((-np.ones(n), np.ones(n)))
return X, y
x_train, y_train = dataset_fixed_cov(250,2)
x_test, y_test = dataset_fixed_cov(250,2)
scaler = preprocessing.StandardScaler().fit(x_train)
x_train = scaler.transform(x_train)  # transform returns a new array; assign it
x_test = scaler.transform(x_test)
plt.plot(x_train[:250,0], x_train[:250,1], 'o', color='red')
plt.plot(x_train[250:,0], x_train[250:,1], 'o', color='blue')
svm = my_svm(stop_criteria="opt")
svm.fit(x_train, y_train)
line = svm.learnt_w
print line
xx = np.linspace(-5, 5)
yy = ((-line[0]/line[1])*xx)+(-line[2]/line[1]) # y = (-a/b)*x + (-c/b)
plt.plot(xx, yy)
svm = LinearSVC()
svm.fit(x_train, y_train)
line = svm.coef_
xx = np.linspace(-5, 5)
yy = ((-svm.coef_[0][0]/svm.coef_[0][1])*xx)+(-svm.intercept_[0]/svm.coef_[0][1]) # y = (-a/b)*x + (-c/b)
plt.plot(xx, yy)
def my_sgd(x, y, F, dF, eta=0.01, epochs=20):
w = np.zeros(x.shape[1])
for i in range(epochs):
for j in range(x.shape[0]):
grad = dF(x[j:j+1,:],y[j:j+1],w)
w = w - eta*grad
return w
x_train,y_train = dataset_fixed_cov(50000,2)
x_copy = add_bias_column(x_train)
%time sgd_w = my_sgd(x_copy, y_train, compute_obj, compute_grad)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ladyboy
Step2: Ladyboy Big
Step3: Girl
|
<ASSISTANT_TASK:>
Python Code:
import cv2
from PIL import Image
import math
import copy
#the usual data science stuff
import os,sys
import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
ladyboy_big_input = '../data/ladyboy_big/'
ladyboy_big_output = '../data/processed/ladyboy_big/'
ladyboy_input = '../data/ladyboy/'
ladyboy_output = '../data/processed/ladyboy/'
girl_input = '../data/girl/'
girl_output = '../data/processed/girl/'
cascade_file_src = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascade_file_src)
#i=0
for root, dirs, files in os.walk(ladyboy_input):
for name in files:
#print(i)
#i+=1
imagePath = os.path.join(root, name)
# load image on gray scale :
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect faces in the image :
faces = faceCascade.detectMultiScale(gray, 1.2, 5)
#skip if face not detected
if(len(faces)==0):
continue
#open image
im = Image.open(imagePath)
#get box dimensions
(x, y, w, h) = faces[0]
center_x = x+w/2
center_y = y+h/2
b_dim = min(max(w,h)*1.2,im.width, im.height)
box = (int(center_x-b_dim/2), int(center_y-b_dim/2),
int(center_x+b_dim/2), int(center_y+b_dim/2))
# Crop Image
crpim = im.crop(box).resize((224,224))
#plt.imshow(np.asarray(crpim))
#save file
crpim.save(ladyboy_output+name,format='JPEG')
#i=0
for root, dirs, files in os.walk(ladyboy_big_input):
for name in files:
#print(i)
#i+=1
imagePath = os.path.join(root, name)
# load image on gray scale :
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect faces in the image :
faces = faceCascade.detectMultiScale(gray, 1.2, 5)
#skip if face not detected
if(len(faces)==0):
continue
#open image
im = Image.open(imagePath)
#get box dimensions
(x, y, w, h) = faces[0]
center_x = x+w/2
center_y = y+h/2
b_dim = min(max(w,h)*1.2,im.width, im.height)
box = (int(center_x-b_dim/2), int(center_y-b_dim/2),
int(center_x+b_dim/2), int(center_y+b_dim/2))
# Crop Image
crpim = im.crop(box).resize((224,224))
#plt.imshow(np.asarray(crpim))
#save file
crpim.save(ladyboy_big_output+name,format='JPEG')
#i=0
for root, dirs, files in os.walk(girl_input):
for name in files:
#print(i)
#i+=1
imagePath = os.path.join(root, name)
# load image on gray scale :
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect faces in the image :
faces = faceCascade.detectMultiScale(gray, 1.2, 5)
#skip if face not detected
if(len(faces)==0):
continue
#open image
im = Image.open(imagePath)
#get box dimensions
(x, y, w, h) = faces[0]
center_x = x+w/2
center_y = y+h/2
b_dim = min(max(w,h)*1.2,im.width, im.height)
box = (int(center_x-b_dim/2), int(center_y-b_dim/2),
int(center_x+b_dim/2), int(center_y+b_dim/2))
# Crop Image
crpim = im.crop(box).resize((224,224))
#plt.imshow(np.asarray(crpim))
#save file
crpim.save(girl_output+name,format='JPEG')
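# The three loops above are identical apart from their input/output folders.
# A minimal sketch (our addition; crop_faces is a hypothetical name) of a
# shared helper the loops could call instead:
def crop_faces(input_dir, output_dir, scale=1.2, out_size=(224, 224)):
    for root, dirs, files in os.walk(input_dir):
        for name in files:
            image = cv2.imread(os.path.join(root, name))
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = faceCascade.detectMultiScale(gray, 1.2, 5)
            if len(faces) == 0:
                continue  # skip if no face detected
            im = Image.open(os.path.join(root, name))
            x, y, w, h = faces[0]  # keep only the first detected face
            cx, cy = x + w/2, y + h/2
            b_dim = min(max(w, h)*scale, im.width, im.height)
            box = (int(cx - b_dim/2), int(cy - b_dim/2),
                   int(cx + b_dim/2), int(cy + b_dim/2))
            im.crop(box).resize(out_size).save(os.path.join(output_dir, name), format='JPEG')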
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Choose a file to check
Step2: Load and prepare data
Step3: Time Series
Step4: Measured vs Calibrated
Step5: Differences
|
<ASSISTANT_TASK:>
Python Code:
# Package imports
import os
import pandas as pd
import numpy as np
import statistics
from ipywidgets import interact
import ipywidgets as widgets
# Bokeh Plots
from bokeh.io import output_notebook, push_notebook, show
from bokeh.plotting import figure
output_notebook()
# Hide warnings
import warnings
warnings.filterwarnings('ignore')
# Create the data file directory if it doesn't exist
FILE_DIR='data_files'
if not os.path.exists(FILE_DIR):
os.mkdir(FILE_DIR)
available_files = [f for f in os.listdir(FILE_DIR) if f.endswith('ICOS OTC.csv')]
available_files.sort()
chosen_file = None
def load_file(filename):
global chosen_file
chosen_file = filename
dummy = interact(load_file, filename=available_files)
in_data = pd.read_csv(os.path.join(FILE_DIR, chosen_file))
data = in_data[['Date/Time', 'CO2 Mole Fraction [umol mol-1]', 'xCO2 In Water - Calibrated In Dry Air [umol mol-1]']]
data.rename(columns = {'Date/Time':'Timestamp'}, inplace = True)
data.rename(columns = {'CO2 Mole Fraction [umol mol-1]':'Measured'}, inplace = True)
data.rename(columns = {'xCO2 In Water - Calibrated In Dry Air [umol mol-1]':'Calibrated'}, inplace = True)
data['Timestamp'] = data['Timestamp'].apply(pd.to_datetime)
data = data[pd.to_numeric(data['Calibrated'], errors='coerce').notnull()]
timeseries = figure(plot_width=900, plot_height=600, x_axis_type='datetime', x_axis_label='Time', y_axis_label='CO₂')
timeseries.circle(data['Timestamp'], data['Measured'], color='black', size=5, legend_label='Measured')
timeseries.circle(data['Timestamp'], data['Calibrated'], color='blue', size=5, legend_label='Calibrated')
show(timeseries)
vs_plot = figure(plot_width=600, plot_height=600, x_axis_label='Measured', y_axis_label='Calibrated')
vs_plot.circle(data['Measured'], data['Calibrated'], size=5)
show(vs_plot)
data['Difference'] = data['Calibrated'] - data['Measured']
# Time series of differences
diff_timeseries = figure(plot_width=900, plot_height=600, x_axis_type='datetime', x_axis_label='Time', y_axis_label='Calibrated - Measured')
diff_timeseries.circle(data['Timestamp'], data['Difference'], color='black', size=5, legend_label='Calibrated - Measured')
show(diff_timeseries)
hist, edges = np.histogram(data['Difference'], density=True, bins=100)
print(f'Difference range {min(data["Difference"])} to {max(data["Difference"])}')
print(f'Mean difference {statistics.mean(data["Difference"])}')
print(f'Median difference {statistics.median(data["Difference"])}')
p = figure(plot_width=900, plot_height=600, x_axis_label='Difference', y_axis_label='Proprotion')
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], fill_color="navy", line_color="white", alpha=0.5)
show(p)
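# A couple of extra spread measures (our addition, not part of the original
# checks): standard deviation and RMS of the calibrated-minus-measured values.
print(f'Std of difference {statistics.stdev(data["Difference"])}')
print(f'RMS of difference {np.sqrt(np.mean(np.square(data["Difference"])))}')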
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This precision has not been scaled to a specific flux/SNR level.
Step2: Scaling Effects
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
from eniric import config, precision
# Load a spectrum
from astropy.io import fits
test_data = config.paths["phoenix_raw"]
print(test_data)
wav = fits.getdata(os.path.join(test_data, "WAVE_PHOENIX-ACES-AGSS-COND-2011.fits"))
flux = fits.getdata(
os.path.join(
test_data, "Z-0.0", "lte03900-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
)
)
# This is a bit long
print(len(flux))
# Take a section of wavelength
xmin = 2000 # nm
xmax = 3000 # nm
mask = (wav >= xmin) & (wav <= xmax)
wav = wav[mask]
flux = flux[mask]
print(len(flux))
# Calculate precision with no masking and without scaling
rv_precision = precision.rv_precision(wavelength=wav, flux=flux)
print(f"RV precision between {xmin}-{xmax}nm = {rv_precision:1.6f}")
# Calculate spectral quality also with no masking and without scaling
quality = precision.quality(wavelength=wav, flux=flux)
print(f"Spectral quality between {xmin}-{xmax}nm = {quality:7.1f}")
# The precision needs to be scaled to the relative SNR.
# 1/1e6 scales the test data flux to give sensible numbers here.
scales = [0.1, 0.2, 0.5, 2, 5, 10]
prec = [
precision.rv_precision(wavelength=wav, flux=flux / 1e6 * scale).value
for scale in scales
]
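# Sanity check (our addition): photon-noise-limited RV precision scales
# roughly as 1/sqrt(flux), so going from scale 0.1 to scale 10 (100x more
# flux) should shrink the precision by about sqrt(100) = 10.
import numpy as np
print(prec[0] / prec[-1], np.sqrt(scales[-1] / scales[0]))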
plt.plot(scales, prec)
plt.xlabel("Flux scale")
plt.ylabel("Precision (m/s)")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: train_data and val_data are needed when creating a simple RNN learner. Both take a tuple of the example list and the target list. Note that we build the network by adding layers to a Sequential() model, which stacks the layers so data flows through them in order. The SimpleRNN layer is the key layer, playing the recurrent role; the Embedding layer before it and the Dense layer after it map the raw inputs and the RNN outputs into the required forms. The optimizer used in this case is the Adam optimizer.
Step2: Then we build and train the rnn model for 10 epochs
Step3: The accuracies on the training and validation datasets are both over 80%, which is very promising. Now let's try some random examples from the test set
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import os, sys
sys.path = [os.path.abspath("../../")] + sys.path
from deep_learning4e import *
from notebook4e import *
psource(SimpleRNNLearner)
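# A minimal sketch (our assumption, not the library's actual code) of the
# architecture the text describes, written directly in Keras; the layer
# sizes here are illustrative guesses:
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
sketch = Sequential()
sketch.add(Embedding(5000, 32))  # map word indices to dense vectors
sketch.add(SimpleRNN(32))  # the recurrent layer
sketch.add(Dense(1, activation='sigmoid'))  # binary sentiment output
sketch.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])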
from keras.datasets import imdb
data = imdb.load_data(num_words=5000)
train, val, test = keras_dataset_loader(data)
model = SimpleRNNLearner(train, val, epochs=10)
psource(AutoencoderLearner)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You will find the variable df used quite often to store a dataframe
Step2: Understand Data Structure and Types
Step3: Data Structure
Step4: So the quantitative columns are correctly shown as integers and the categorical columns show as objects (strings), which is fine.
Step5: Question 1 - How big is the Bangalore onion market compared to other cities in India?
Step6: Principle
Step7: PRINCIPLE
Step8: Exercise
Step9: Exercise
Step10: PRINCIPLE
Step11: To calculate the range of change, we will create a new price difference variable, which is the difference between priceMax and priceMin
Step12: PRINCIPLE
Step13: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# Import the library we need, which is Pandas
import pandas as pd
# Read the csv file of Monthwise Quantity and Price csv file we have.
df = pd.read_csv('MonthWiseMarketArrivals_clean.csv')
df.shape
df.head()
# Get the typeof each column
df.dtypes
# Changing the date column to a Time Interval columnn
df.date = pd.DatetimeIndex(df.date)
df.shape
# Now checking for type of each column
df.dtypes
# Let us see the dataframe again now
df.head()
# df.city.unique()
# Functional Approach
pd.unique(df.city)
df2010 = df[df.year == 2010]
df2010.head()
# We can also subset on multiple criterias
df2010Bang = df[(df.year == 2010) & (df.city == 'BANGALORE')]
df2010Bang.head()
# Group by using city
df2010City = df2010.groupby(['city']).sum()
df2010City.head()
type(df2010City)
# If we only want to apply the sum function on quantity, then we specify the quantity column
df2010City = df2010.groupby(['city']).quantity.sum()
# Let us see this dataframe
df2010City.head()
type(df2010City)
# To create a dataframe again, it is best to specify index as false
df2010City = df2010.groupby(['city'], as_index=False).quantity.sum()
df2010City.head()
sorted(df2010City.quantity)
# Sort the Dataframe by Quantity to see which one is on top
df2010City = df2010City.sort_values(by = "quantity", ascending = False)
df2010City.head()
%timeit sorted(df2010City.quantity)
%timeit df2010City.quantity.sort_values()
%timeit df2010City.sort_values(by = "quantity", ascending = False)
# Load the visualisation libraries - Matplotlib
import matplotlib.pyplot as plt
# Let us see the output plots in the notebook itself
%matplotlib inline
# Set some parameters to get good visuals - style to ggplot and size to 15,10
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 10)
# Plot the Data
df2010City.plot(kind ="barh", x = 'city', y = 'quantity')
df2015 = df[df.year == 2015]  # completing the exercise: subset the data for 2015
df.head()
dfBang = df[df.city == 'BANGALORE']
dfBang.head()
dfBang.describe()
# Reduce the precision of numbers - so that it is easy to read
pd.set_option('precision', 0)
dfBang.describe()
dfBang.head()
dfBang.index
# Set the index as date
dfBang = dfBang.sort_values(by = "date")
dfBang.head()
# Set the Index for the Dataframe
dfBang.index = pd.PeriodIndex(dfBang.date, freq='M')
dfBang.head()
dfBang.priceMod.plot()
dfBang.plot(kind = "line", y = ['priceMin', 'priceMod', 'priceMax'])
dfBang['priceDiff'] = dfBang['priceMax'] - dfBang['priceMin']
dfBang.head()
dfBang.plot(kind = 'line', y = 'priceDiff')
# Create new variable for Integer Month
dfBang['monthVal'] = pd.DatetimeIndex(dfBang['date']).month
dfBang.head()
dfBangPivot = pd.pivot_table(dfBang, values = "priceDiff",
columns = "year", index = "monthVal")
dfBangPivot
dfBangPivot.plot()
dfBangPivot.plot(subplots = True, figsize=(15, 15), layout=(3, 5), sharey=True)
dfBangPivot.plot(subplots = True, figsize=(15, 15), layout=(3, 5),
sharey=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 4.5.2 $uv$ coverage
Step3: From the list above, you can select different configurations corresponding to real instrumental layouts.
Step4: Let's plot the distribution of the antennas from the selected (or customized) interferometer
Step5: <a id="fig
Step6: 4.5.2.1.2 The snapshot $\boldsymbol{uv}$ coverage
Step7: <a id="vis
Step8: <a id="fig
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import display
from ipywidgets import *
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
config = widgets.Dropdown(
options={'VLAa':'configs/vlaa.enu.txt',
'VLAb':'configs/vlab.enu.txt',
'VLAc':'configs/vlac.enu.txt',
'VLAd':'configs/vlad.enu.txt',
'WSRT':'configs/wsrt.enu.txt',
'kat7':'configs/kat-7.enu.txt',
'meerkat':'configs/meerkat.enu.txt'},
value="configs/vlaa.enu.txt",
Description="Antennas:")
display(config)
# you need to re-evaluate this box if you modify the array.
antennaPosition=np.genfromtxt(config.value)
# custom antenna distribution
custom=0
if (custom):
antennaPosition = np.zeros((10, 2), dtype=float)
antennaPosition[0,:] = [0,0]
antennaPosition[1,:] = [-4, 5]
antennaPosition[2,:] = [4, 5]
antennaPosition[3,:] = [-10,0]
antennaPosition[4,:] = [-8,-3]
antennaPosition[5,:] = [-4,-5]
antennaPosition[6,:] = [0,-6]
antennaPosition[7,:] = [4,-5]
antennaPosition[8,:] = [8,-3]
antennaPosition[9,:] = [10,0]
%matplotlib inline
mxabs = np.max(abs(antennaPosition[:]))*1.1;
# make use of pylab librery to plot
fig=plt.figure(figsize=(6,6))
plt.plot((antennaPosition[:,0]-np.mean(antennaPosition[:,0]))/1e3, \
(antennaPosition[:,1]-np.mean(antennaPosition[:,1]))/1e3, 'o')
plt.axes().set_aspect('equal')
plt.xlim(-mxabs/1e3, mxabs/1e3)
plt.ylim(-mxabs/1e3, (mxabs+5)/1e3)
plt.xlabel("E (km)")
plt.ylabel("N (km)")
plt.title("Antenna positions")
# Observation parameters
c=3e8 # Speed of light
f=1420e6 # Frequency
lam = c/f # Wavelength
time_steps = 1200 # time steps
h = np.linspace(-6,6,num=time_steps)*np.pi/12 # Hour angle window
# declination convert in radian
L = np.radians(34.0790) # Latitude of the VLA
dec = np.radians(34.)
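# An illustrative aside (our addition, standard uv relations assumed): a
# purely east-west baseline of length b traces an ellipse in the uv plane as
# the hour angle sweeps, with u = (b/lam)*cos(h) and v = (b/lam)*sin(dec)*sin(h).
b_east = 1000.0  # hypothetical 1 km east-west baseline, in metres
u_track = (b_east/lam)*np.cos(h)
v_track = (b_east/lam)*np.sin(dec)*np.sin(h)
plt.figure(figsize=(5,5))
plt.plot(u_track, v_track)
plt.xlabel('u (wavelengths)')
plt.ylabel('v (wavelengths)')
plt.title('uv track of a single east-west baseline')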
%matplotlib inline
Ntimes=3
plotBL.plotuv(antennaPosition,L,dec,h,Ntimes,lam)
from ipywidgets import *
from IPython.display import display
def Interactplot(key,Ntimes):
print "Ntimes="+str(Ntimes)
plotBL.plotuv(antennaPosition,L,dec,h,Ntimes,lam)
slider=IntSlider(description="Ntimes",min=2,max=1200,step=100,continuous_update=False)
slider.on_trait_change(Interactplot,'value')
display(slider)
Interactplot("",2)
df=10e6 # frequency step
f0=c/lam # starting frequency
lamb0=lam # starting wavelength
def Interactplot(key,Nfreqs):
print "Nfreqs="+str(Nfreqs)
plotBL.plotuv_freq(antennaPosition,L,dec,h,Nfreqs,lamb0,df)
slider=IntSlider(description="Nfreqs",min=1,max=200,step=1,continuous_update=False)
slider.on_trait_change(Interactplot,'value')
display(slider)
Interactplot("",1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternative
Step2: Checking our results (inference)
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train.shape
import numpy as np
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
x_train.shape
# reduce memory and compute time
NUMBER_OF_SAMPLES = 50000
x_train_samples = x_train[:NUMBER_OF_SAMPLES]
y_train_samples = y_train[:NUMBER_OF_SAMPLES]
import skimage.data
import skimage.transform
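# Note: despite the "_224" suffix used below, the images are resized to 32x32
# (presumably to keep memory use and training time manageable).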
x_train_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_train_samples])
x_train_224.shape
from tensorflow.keras.applications.resnet50 import ResNet50
# https://keras.io/applications/#mobilenet
# https://arxiv.org/pdf/1704.04861.pdf
from tensorflow.keras.applications.mobilenet import MobileNet
# model = ResNet50(classes=10, weights=None, input_shape=(32, 32, 1))
model = MobileNet(classes=10, weights=None, input_shape=(32, 32, 1))
model.summary()
%%time
BATCH_SIZE=10
EPOCHS = 10
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train_224, y_train_samples, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_split=0.2, verbose=1)
import matplotlib.pyplot as plt
plt.xlabel('epochs')
plt.ylabel('loss')
plt.yscale('log')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['Loss', 'Validation Loss'])
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['Accuracy', 'Validation Accuracy'])
x_test_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_test])
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
def plot_predictions(images, predictions):
n = images.shape[0]
nc = int(np.ceil(n / 4))
f, axes = plt.subplots(nc, 4)
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
        if i >= n:  # guard before indexing predictions to avoid an IndexError
            continue
        label = LABEL_NAMES[np.argmax(predictions[i])]
        confidence = np.max(predictions[i])
axes[x, y].imshow(images[i])
axes[x, y].text(0.5, 0.5, label + '\n%.3f' % confidence, fontsize=14)
plt.gcf().set_size_inches(8, 8)
plot_predictions(np.squeeze(x_test_224[:16]),
model.predict(x_test_224[:16]))
train_loss, train_accuracy = model.evaluate(x_train_224, y_train_samples, batch_size=BATCH_SIZE)
train_accuracy
test_loss, test_accuracy = model.evaluate(x_test_224, y_test, batch_size=BATCH_SIZE)
test_accuracy
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphs of population distribution and trajectories for base parameter set
Step2: Given a delta_f of .03 and a K of 1 million, we expect it would take about 450 generations for a new mutation increasing fitness by one step to fix after it invades the population
Step3: Varying the probability of beneficial mutations
Step4: Varying the probability of accurate mutations
Step5: Varying $\Delta f$
Step6: Varying K
|
<ASSISTANT_TASK:>
Python Code:
fbvary_results = []
for file in glob.glob('runs/evolved_mu_f_b_vary?replicate?datetime.datetime(2019, 5, *).hdf5'):
try:
fbvary_results.append(popev.PopulationReader(file))
except OSError:
pass
favary_results = []
for file in glob.glob('runs/evolved_mu_f_a_vary?replicate?datetime.datetime(2019, 5, *).hdf5'):
try:
favary_results.append(popev.PopulationReader(file))
except OSError:
pass
delta_f_vary_results = []
for file in glob.glob('runs/evolved_mu_delta_f_vary?replicate?datetime.datetime(2019, 5, *).hdf5'):
try:
delta_f_vary_results.append(popev.PopulationReader(file))
except OSError:
pass
K_vary_results = []
for file in glob.glob('runs/evolve_mu_K_vary?replicate?datetime.datetime(2019, 6, *).hdf5'):
try:
K_vary_results.append(popev.PopulationReader(file))
except OSError:
pass
len(fbvary_results[0])
cd Thesis_Data_and_Figures/
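# The analysis below relies on helpers defined elsewhere in the thesis code
# (mean_df_dt, mean_and_std_by, and the logN* plotting functions). Minimal
# sketches of the two numeric helpers, reconstructed from how they are used
# here -- these are assumptions, not the original definitions:
def mean_df_dt(pop):
    # average rate of fitness increase per generation over the recording
    f = np.asarray(pop.mean_fitness[:])  # assumed attribute, cf. mode_fitness
    return (f[-1] - f[0]) / (len(f) - 1)

def mean_and_std_by(params, values):
    # group `values` by unique parameter value; return the sorted unique
    # parameters with each group's mean and standard deviation
    params, values = np.asarray(params), np.asarray(values)
    uniq = np.unique(params)
    means = np.array([values[params == u].mean() for u in uniq])
    stds = np.array([values[params == u].std() for u in uniq])
    return uniq, means, stds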
basepop = fbvary_results[50]
print('The parameters of the base population are delta_f: {},'
' M: {}, P_b: {}, P_a: {}, P_mu: {}, K: {}'.format(basepop.delta_fitness, basepop.mu_multiple,
basepop.fraction_beneficial, basepop.fraction_accurate,
basepop.fraction_mu2mu, basepop.pop_cap))
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
logNbarplot(ax, basepop(15000))
plt.savefig('traveling_wave_full_distribution.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
logNvsfplot(ax, basepop(15000))
plt.savefig('traveling_wave_f_distribution.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
logNvsmuplot(ax, basepop(15000))
plt.savefig('traveling_wave_mu_distribution.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.plot(basepop.mode_fitness[:10000], marker='');
ax.set_xlabel('generation', fontsize=32);
ax.set_ylabel('mode of population fitness, $f_{mode}$', fontsize=28);
plt.savefig('traveling_wave_f_evolution.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.semilogy(basepop.mode_mutation_rate[:], marker='');
ax.minorticks_off()
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.set_yticks(np.unique(basepop.mode_mutation_rate[:]))
ax.set_xlabel('generation', fontsize=32);
ax.set_ylabel('mode of population\nmutation rate, $\mu_{mode}$', fontsize=28);
plt.savefig('traveling_wave_mu_evolution.pdf')
fbs = [pop.fraction_beneficial for pop in fbvary_results]
dfdts = [mean_df_dt(pop) for pop in fbvary_results]
fbs, mean_dfdts, std_dfdts = mean_and_std_by(fbs, dfdts)
fig = plt.figure(figsize=(10,10))
plt.errorbar(fbs, mean_dfdts, 2*std_dfdts, linestyle='');
plt.xscale('log')
plt.yscale('log')
plt.xlabel('probability of a beneficial mutation $P_{b}$', fontsize=28);
plt.ylabel(r'rate of fitness increase $\frac{df_{mean}}{dt}$', fontsize=28);
plt.savefig('df_dt_varying_P_b.pdf')
fbs = [pop.fraction_beneficial for pop in fbvary_results]
mu_avs = [np.mean(pop.mean_mutation_rate[:]) for pop in fbvary_results]
fbs, mean_mu_avs, std_mu_avs = mean_and_std_by(fbs, mu_avs)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.errorbar(fbs, mean_mu_avs, 2*std_mu_avs, linestyle='')
ax.set_xscale('log')
ax.set_yscale('log')
ax.minorticks_off()
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.set_yticks([.00125, .0025, .005, .01, .02, .04])
ax.set_xlabel('probability of a beneficial mutation $P_{b}$', fontsize=28);
ax.set_ylabel('mean mutation rate, $\mu$,\n(over population and time)', fontsize=28);
plt.savefig('mean_mu_varying_P_b.pdf')
fas = [pop.fraction_accurate for pop in favary_results]
dfdts = [mean_df_dt(pop) for pop in favary_results]
fas, mean_dfdts, std_dfdts = mean_and_std_by(fas, dfdts)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.errorbar(fas, mean_dfdts, 2*std_dfdts, linestyle='')
ax.set_xscale('log')
ax.set_xlabel('probability of an accuracy increasing mutation $P_{a}$', fontsize=28);
ax.set_ylabel(r'rate of fitness increase $\frac{df_{mean}}{dt}$', fontsize=28);
plt.savefig('df_dt_varying_P_a.pdf')
fas = [pop.fraction_accurate for pop in favary_results]
mu_avs = [np.mean(pop.mean_mutation_rate[:]) for pop in favary_results]
fas, mean_mu_avs, std_mu_avs = mean_and_std_by(fas, mu_avs)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.errorbar(fas, mean_mu_avs, 2*std_mu_avs, linestyle='')
ax.set_xscale('log')
ax.set_xlabel('probability of an accuracy increasing mutation $P_{a}$', fontsize=28);
ax.set_ylabel('mean mutation rate, $\mu$,\n(over population and time)', fontsize=28);
plt.savefig('mean_mu_varying_P_a.pdf')
delta_fs = [pop.delta_fitness for pop in delta_f_vary_results]
dfdts = [mean_df_dt(pop) for pop in delta_f_vary_results]
delta_fs, mean_dfdts, std_dfdts = mean_and_std_by(delta_fs, dfdts)
fig = plt.figure(figsize=(10,10))
plt.errorbar(delta_fs, mean_dfdts, 2*std_dfdts, linestyle='', label='simulations')
def parabola(x, a):
return a*x**2
popt, pcov = spopt.curve_fit(parabola,
np.array([pop.delta_fitness for pop in delta_f_vary_results]),
np.array([mean_df_dt(pop) for pop in delta_f_vary_results]))
effs = np.linspace(.005,.05)
plt.plot(effs, parabola(effs,*popt), 'r', marker='', label='parabolic fit')
plt.xlabel('size of changes in fitness, $\Delta f$', fontsize=28);
plt.ylabel(r'rate of fitness increase $\frac{df_{mean}}{dt}$', fontsize=28);
plt.legend()
plt.savefig('df_dt_varying_delta_f.pdf')
popt
delta_fs = [pop.delta_fitness for pop in delta_f_vary_results]
mu_avs = [np.mean(pop.mean_mutation_rate[:]) for pop in delta_f_vary_results]
delta_fs, mean_mu_avs, std_mu_avs = mean_and_std_by(delta_fs, mu_avs)
fig = plt.figure(figsize=(10,10))
plt.errorbar(delta_fs, mean_mu_avs, 2*std_mu_avs, linestyle='')
plt.xlabel('size of changes in fitness, $\Delta f$', fontsize=28);
plt.ylabel('mean mutation rate, $\mu$,\n(over population and time)', fontsize=28);
plt.savefig('mean_mu_varying_delta_f.pdf')
Ks = [pop.pop_cap for pop in K_vary_results]
dfdts = [mean_df_dt(pop) for pop in K_vary_results]
Ks, mean_dfdts, std_dfdts = mean_and_std_by(Ks, dfdts)
fig = plt.figure(figsize=(10,10))
plt.errorbar(Ks[:-1], mean_dfdts[:-1], 2*std_dfdts[:-1], linestyle='')
plt.xscale('log')
plt.xlabel('population size, $K$', fontsize=28);
plt.ylabel(r'rate of fitness increase $\frac{df_{mean}}{dt}$', fontsize=28);
plt.savefig('df_dt_varying_K.pdf')
Ks = [pop.pop_cap for pop in K_vary_results]
mu_avs = [np.mean(pop.mean_mutation_rate[:]) for pop in K_vary_results]
Ks, mean_mu_avs, std_mu_avs = mean_and_std_by(Ks, mu_avs)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.errorbar(Ks[:-1], mean_mu_avs[:-1], 2*std_mu_avs[:-1], linestyle='')
ax.set_xscale('log')
ax.set_xlabel('population size, $K$', fontsize=28);
ax.set_ylabel('mean mutation rate, $\mu$,\n(over population and time)', fontsize=28);
plt.savefig('mean_mu_varying_K.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.set_ylim(-.1,.06)
ax.set_yticks(np.linspace(-.1,.06,9))
ax.set_xlim(.01/4,.04)
ax.set_xscale('log')
ax.minorticks_off()
ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.set_xticks([.01/4, .01/2, .01, .02, .04, .08])
ax.grid(True)
ax.text(.01, 0, '823', ha='center', va='center', fontsize=32, color='red')
ax.text(.01, -.02, '110', ha='center', va='center', fontsize=32, color='red')
ax.text(.02, 0, '58', ha='center', va='center', fontsize=32, color='red')
ax.text(.02, -.02, '23', ha='center', va='center', fontsize=32, color='red')
ax.set_xlabel('mutation rate $\mu$', fontsize=32)
ax.set_ylabel('fitness $f$', fontsize=32)
plt.savefig('example_fitness_and_mu_layout.pdf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The output is the commit data that I've described above, where each line in the text file represents one row in the DataFrame (without blank lines).
Step2: Extracting metadata
Step3: To assign each commit's metadata to the remaining rows, we forward fill those rows with the metadata by using the fillna method.
Step4: Identifying source code lines
Step5: For our later indentation-based complexity calculation, we have to make sure that each line
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
diff_raw = pd.read_csv(
"../../buschmais-spring-petclinic_fork/git_diff.log",
sep="\n",
names=["raw"])
diff_raw.head(16)
index_row = diff_raw.raw.str.startswith("index ")
ignored_diff_rows = (index_row.shift(1) | index_row.shift(2))
diff_raw = diff_raw[~(index_row | ignored_diff_rows)]
diff_raw.head(10)
diff_raw['commit'] = diff_raw.raw.str.split("^commit ").str[1]
diff_raw['timestamp'] = pd.to_datetime(diff_raw.raw.str.split("^Date: ").str[1])
diff_raw['path'] = diff_raw.raw.str.extract("^diff --git.* b/(.*)", expand=True)[0]
diff_raw.head()
diff_raw = diff_raw.fillna(method='ffill')
diff_raw.head(8)
%%timeit
diff_raw.raw.str.extract(r"^\+( *).*$", expand=True)[0].str.len()
diff_raw["i"] = diff_raw.raw.str[1:].str.len() - diff_raw.raw.str[1:].str.lstrip().str.len()
diff_raw
%%timeit
# alternative, regex-free formulation of the same indentation computation
diff_raw.raw.str[1:].str.len() - diff_raw.raw.str[1:].str.lstrip().str.len()
# build the tab-normalized 'line' column first; the 'added'/'deleted'
# indentation extractions below depend on it
diff_raw['line'] = diff_raw.raw.str.replace("\t", " ")
diff_raw['added'] = diff_raw.line.str.extract(r"^\+( *).*$", expand=True)[0].str.len()
diff_raw['deleted'] = diff_raw.line.str.extract(r"^-( *).*$", expand=True)[0].str.len()
diff_raw.head()
diff = \
diff_raw[
(~diff_raw['added'].isnull()) |
(~diff_raw['deleted'].isnull())].copy()
diff.head()
diff['is_comment'] = diff.line.str[1:].str.match(r' *(//|/\*).*')
diff['is_empty'] = diff.line.str[1:].str.replace(" ","").str.len() == 0
diff['is_source'] = ~(diff['is_empty'] | diff['is_comment'])
diff.head()
diff.raw.str[0].value_counts()
diff['lines_added'] = (~diff.added.isnull()).astype('int')
diff['lines_deleted'] = (~diff.deleted.isnull()).astype('int')
diff.head()
diff = diff.fillna(0)
#diff.to_excel("temp.xlsx")
diff.head()
commits_per_day = diff.set_index('timestamp').resample("D").sum()
commits_per_day.head()
%matplotlib inline
commits_per_day.cumsum().plot()
(commits_per_day.added - commits_per_day.deleted).cumsum().plot()
(commits_per_day.lines_added - commits_per_day.lines_deleted).cumsum().plot()
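# Sketch (not in the original notebook): the indentation-based complexity
# mentioned above, approximated per day as the summed indentation depth of
# added source lines, reusing the columns built earlier.
complexity_per_day = (
    diff[diff.is_source]
    .set_index('timestamp')
    .resample("D")['added']
    .sum())
complexity_per_day.cumsum().plot()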
diff_sum = diff.sum()
diff_sum.lines_added - diff_sum.lines_deleted
# => 3913
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Top-level
Step2: As you can see above, to make DNA, RNA, or Peptide objects you just invoke the corresponding constructor from the sequence module and give it a valid string as an argument. Case does not matter, but precision does - only unambiguous and valid DNA, RNA, or Peptide sequences are allowed. The sequence module also contains special cases of DNA objects (Primer, RestrictionSite, Feature), which are covered in detail later. You can treat DNA, RNA, and Peptide objects much like strings or lists in python, so addition, multiplication, slicing, and container logic are all defined.
|
<ASSISTANT_TASK:>
Python Code:
import coral as cor # alternative you can import each module by itself e.g. from coral import design
dir(cor) # dir lists everything in a module/object. Ignore the double underscore items.
dna = cor.DNA("ATGC")
print "DNA: {}".format(dna)
# You can also run methods on the object - in this case, check if the DNA is palindromic
print "Palindrome?: {}".format(dna.is_palindrome())
print
rna = cor.RNA("AUGC")
print "RNA: {}".format(rna)
print
pep = cor.Peptide("mlnp")
print "Peptide: {}".format(pep)
# Example: finding the Tm of ATGCATGCATGCATGC according to the SantaLucia98 method.
cor.analysis.tm(dna * 4, parameters="santalucia98")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data
Step2: Use tf.data to read the CSV files
Step3: Build a simple keras DNN model
Step4: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Step5: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments
Step6: Train the model
Step7: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
Step8: High-level model evaluation
Step9: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
Step10: Making predictions with our model
Step11: Export and deploy our model
Step12: Deploy our model to Vertex AI
Step13: <img src='assets/taxi_fare_keras_seq_model.png' width='80%'>
Step14: Cleanup
Step15: ...then delete the endpoint.
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
!ls -l ../data/*.csv
!head ../data/taxi*.csv
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch one batch so input preprocessing overlaps model execution
    # (tf.data.AUTOTUNE would pick the buffer size automatically)
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
# TODO 1
feature_columns = {
colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS
}
# Build a keras DNN model using Sequential API
# TODO 2a
model = Sequential(
[
DenseFeatures(feature_columns=feature_columns.values()),
Dense(units=32, activation="relu", name="h1"),
Dense(units=8, activation="relu", name="h2"),
Dense(units=1, activation="linear", name="output"),
]
)
# TODO 2b
# Create a custom evalution metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
# TODO 3
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)],
)
model.summary()
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = f"taxifare-keras-sequential-{TIMESTAMP}"
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
   echo -e "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
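# Initialize the Vertex AI SDK explicitly with our project and region so the
# upload and deploy calls below do not depend on environment defaults.
aiplatform.init(project=PROJECT, location=REGION)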
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri=f"gs://{BUCKET}/{MODEL_DISPLAYNAME}",
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
endpoint.predict([instance])
endpoint.undeploy_all()
endpoint.delete()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Everything in a single tool - tellurium
Step2: Antimony is a language that is analogous to SBML (Systems Biology Markup Language) but human-readable. Therefore the concepts present in SBML can be found in Antimony too. The definition of a compartment is one such SBML concept!
Step 1
Step3: Ok... But that's just boring. Let's define something truly special and add a second compartment which is located in your previous one
Step4: Awesome! But be aware... circular definitions are not allowed.
Step5: To specify units you can write the value and the unit combined in the same line
Step6: Or simply change the units
Step7: Step 2
Step8: Smart! You already initialized your species. But you forgot to specify the location of your species! Add these lines
Step9: Step 3
Step10: Syntax in antimony is subtle
Step11: Or set a time dependend event. You may notice that time is special and the already predefined model time
Step12: Ok, now you may want to test your model and simulate it. You should definitely do this before we continue. Make sure that your model is correctly defined at the top of this jupyter notebook (you may execute the cell again!).
Step13: Import/Export Models
Step14: Load a model... You can choose from
Step15: ii) download 'curated BioModels translated to Antimony' from here http
Step16: iii) load your own model
Step17: You can also convert models between modeling-languages
Step18: Then save the SBML model in a file
Step19: An read it again...
Step20: You can also draw your model
Step21: Parameterscan
Step22: Stochastic vs. deterministic
|
<ASSISTANT_TASK:>
Python Code:
import tellurium as te; te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
antimony_model = '''
J0: -> y; -x;
J1: -> x; y;
x = 1.0; y = 0.2;
'''
r = te.loada(antimony_model)
r.simulate(0,100,1000)
r.plot()
import tellurium as te
model = ''  # start from an empty Antimony string; the %%aa magic below appends to it
model_backup = '''
model example
# UNITS:
#
#unit alienliters = 0.123 liters
#unit fL = 1.e-15 liters
# COMPARTMENTS:
#
compartment cell = 100;
compartment mitochondria = 10;
mitochondria in cell;
#mitochondria has fL;
#cell has fL;
# INITIAL VALUES
#
TOM1 = 10;
A_c = 100;
A_m = 1;
I = 1;
TOM1 in cell;
A_c in cell;
A_m in mitochondria;
I in cell;
# REACTIONS
#
T0: A_c + TOM1 -> 2 A_m + TOM1; kineticLaw;
kineticLaw := k1 * TOM1 * (A_c - A_m)/I
k1 = 0.01
# EVENTS:
#E1: at (A_c < 50): k1 = 0;
#E2: at (time>300): TOM1 = 20, A_c = 120;
end
'''
r = te.loada(model)
def aa(line, cell):
global model
#print line,cell
model = model + str(cell) + "\n"
te.loada(str(model))
get_ipython().register_magic_function(aa, "cell")
%%aa # %%aa: A-ppend to A-ntimony model, only for educational purpose!
compartment cell = 100;
%%aa
compartment mitochondria;
mitochondria = 10;
mitochondria in cell;
%%aa
unit alienliters = 0.123 liters
unit fL = 1.e-15 liters
%%aa
mitochondria = 10 fL;
%%aa
mitochondria has fL;
%%aa
TOM1 = 10;
A_c = 100;
A_m = 1;
%%aa
TOM1 in cell;
A_c in cell;
A_m in mitochondria;
%%aa
T0: A_c + TOM1 -> A_m + TOM1; kineticLaw;
kineticLaw := k1 * TOM1 * (A_c - (A_m+0.01))
k1 = 0.01
%%aa
E1: at (A_c < 50): k1 = 0;
%%aa
E2: at (time>300): TOM1 = 20, A_c = 120;
import tellurium as te
r = te.loada(model)
r.simulate(0,1000,1000)
r.plot()
import tellurium as te
print te.listTestModels()
#r_feedback = te.loadTestModel('feedback.xml')
#result = r_feedback.simulate()
#r_feedback.plot()
import urllib2
bio1 = urllib2.urlopen('http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000001.txt').read()
r_bio1 = te.loadAntimonyModel(bio1)
r_bio1.simulate(0,1000,10000)
r_bio1.plot()
import tellurium as te
r = te.loada(model)
r.simulate()
r.plot()
sbml_model = te.antimonyToSBML(model)
with open('model.xml','wb') as f:
f.write(sbml_model)
# directly for antimony:
with open('example.antimony','wb') as f:
f.write(model)
#f.write(model_backup)
sbml_model = te.loadSBMLModel('model.xml')
example = te.loada('example.antimony')
def draw(model):
diagram = te.visualization.SBMLDiagram(model.getSBML())
diagram.draw()
draw(r)
import numpy as np
import matplotlib.pyplot as plt
r = te.loada(model)
fig = plt.figure(figsize=(10,10))
for i in np.linspace(0.001,0.005,5):
r.reset()
r.k1 = i
#r.reset() # reset species to initial values
#r.resetAll() # reset species + paramters
#r.resetToOrigin() # similar to load the model
r.integrator='cvode'
result = r.simulate(0,1000)
plt.plot(result[:,0],result[:,1:],label='k1=%s'%i)
plt.legend()
plt.show()
r = te.loada(model_backup)
#plt.close()
fig = plt.figure(figsize=(10,10))
r.resetToOrigin()
r.timeCourseSelections=['time','[A_c]']
for i in range(10):
r.reset()
r.integrator = 'gillespie'
result = r.simulate(0,100,1000)
plt.plot(result[:,0],result[:,1:],color = 'lightblue',label='gillespie sim')
r.reset()
r.integrator='cvode'
result = r.simulate(0,100,1000)
plt.plot(result[:,0],result[:,1:],color = 'red',label='cvode sim')
plt.legend()
plt.show()
r = te.loada(model)
#print r.getAntimony()
model = '''
// Created by libAntimony v2.9.4
// Compartments and Species:
compartment cell, mitochondria;
species TOM1 in cell, A_c in cell, A_m in mitochondria;
// Assignment Rules:
kineticLaw := k1*TOM1*(A_c - (A_m + 0.01));
// Reactions:
T0: A_c + TOM1 -> A_m + TOM1; kineticLaw;
// Events:
E1: at A_c < 50: k1 = 0;
E2: at time > 300: TOM1 = 20, A_c = 120;
// Species initializations:
TOM1 = 10;
A_c = 100;
A_m = 1;
// Compartment initializations:
cell = 100;
mitochondria = 10;
mitochondria has fL;
// Variable initializations:
k1 = 0.01;
// Other declarations:
var k1, kineticLaw;
const cell, mitochondria;
// Unit definitions:
unit alienliter = 1.23e-1 litre;
unit fL = 1e-15 litre;
const A_c;
'''
r = te.loada(model)
r.timeCourseSelections= ['time'] + ['['+ r.getBoundarySpeciesIds()[0]+ ']'] + r.getFloatingSpeciesIds()
r.simulate(0,1000)
r.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
from IPython.display import HTML
from IPython.display import IFrame
assert True # leave this to grade the import statements
Image(url = 'http://newsroom.unl.edu/releases/downloadables/photo/20090923solenoid.jpg', width = 600, height = 600)
assert True # leave this to grade the image display
%%html
<table>
<th>Name </th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass(MeV/$c^2$)</th>
</tr>
<tr>
<td> up </td>
<td> u </td>
<td> $\bar{u}$ </td>
<td> +$\frac{2}{3}$ </td>
<td> 0.511 </td>
</tr>
<tr>
<td> down </td>
<td> d </td>
<td> $\bar{d}$ </td>
<td> -$\frac{1}{3}$ </td>
<td> 3.5-6.0 </td>
</tr>
<tr>
<td> charm </td>
<td> c </td>
<td> $\bar{c}$ </td>
<td> +$\frac{2}{3}$ </td>
<td> 1,160-1,340 </td>
</tr>
<tr>
<td> strange </td>
<td> s </td>
<td> $\bar{s}$ </td>
<td> -$\frac{1}{3}$ </td>
<td> 70-130 </td>
</tr>
<tr>
<td> top </td>
<td> t </td>
<td> $\bar{t}$ </td>
<td> +$\frac{2}{3}$ </td>
<td> 169,100-173,300 </td>
</tr>
<tr>
<td> bottom </td>
<td> b </td>
<td> $\bar{b}$ </td>
<td> -$\frac{1}{3}$ </td>
<td> 4,130-4,370 </td>
assert True # leave this here to grade the quark table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: tr(vec)
Step2: dev(vec)
Step3: Mises_stress(vec)
Step4: Mises_strain(vec)
Step5: eta_stress(vec)
Step6: eta_strain(vec)
Step7: v2t_stress(vec)
Step8: t2v_stress(vec)
Step9: v2t_strain(vec)
Step10: t2v_strain(vec)
Step11: J2_stress(vec)
Step12: J2_strain(vec)
Step13: J3_stress(vec)
Step14: J3_strain(vec)
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from simmit import smartplus as sim
import os
v = np.random.rand(6)
trace = sim.tr(v)
print v
print trace
v = np.random.rand(6)
v_dev = sim.dev(v)
print v
print v_dev
v = np.random.rand(6)
Mises_sig = sim.Mises_stress(v)
print v
print Mises_sig
v = np.random.rand(6)
Mises_eps = sim.Mises_strain(v)
print v
print Mises_eps
v = np.random.rand(6)
sigma_f = sim.eta_stress(v)
print v
print sigma_f
v = np.random.rand(6)
eps_f = sim.eta_strain(v)
print v
print eps_f
v = np.random.rand(6)
m = sim.v2t_stress(v);
print v
print m
m = np.random.rand(3,3)
m_symm = (m + m.T)/2
v = sim.t2v_stress(m_symm);
print m_symm
print v
v = np.random.rand(6)
m = sim.v2t_strain(v);
print v
print m
m = np.random.rand(3,3)
m_symm = (m + m.T)/2
v = sim.t2v_strain(m_symm);
print m_symm
print v
v = np.random.rand(6)
J2 = sim.J2_stress(v)
print v
print J2
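# Cross-check with the standard continuum-mechanics identity (assumed to
# hold for simmit's Voigt conventions): Mises_stress == sqrt(3*J2).
v = np.random.rand(6)
print sim.Mises_stress(v)
print np.sqrt(3.*sim.J2_stress(v))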
v = np.random.rand(6)
J2 = sim.J2_strain(v)
print v
print J2
v = np.random.rand(6)
J3 = sim.J3_stress(v)
print v
print J3
v = np.random.rand(6)
J3 = sim.J3_strain(v)
print v
print J3
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: randomwalk each point for 1 day equivalent
Step2: make a grid from a scatter of many points
Step3: Find maximum time step without leaking mosquitos from the 3x3 grid
Step4: matrix generator for getting all possible combinations of matrices
|
<ASSISTANT_TASK:>
Python Code:
from math import sin, cos, pi
from random import uniform
from itertools import product, repeat

import numpy as np
import matplotlib.pyplot as plt

def findquadrant(point,size):
y,x = point
halfsize = size/2
if x < -halfsize:
if y > halfsize: return [0,0]
if y < -halfsize: return [2,0]
return [1,0]
if x > halfsize:
if y > halfsize: return [0,2]
if y < -halfsize: return [2,2]
return [1,2]
if y > halfsize: return [0,1]
if y < -halfsize: return [2,1]
return [1,1]
def findStep(points, box):
tempo = 0
while check_howmany(points, box) < 0.05:
for i in points:
a = uniform(0, 2*pi)
vvar, hvar = V*dt*sin(a), V*dt*cos(a)
i[0] += vvar; i[1] += hvar
tempo += dt
return tempo
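# `check_howmany` is not defined in this notebook. A plausible sketch,
# assuming it returns the fraction of points that have left the central box:
def check_howmany(points, box):
    half = box / 2
    outside = sum(1 for y, x in points if abs(x) > half or abs(y) > half)
    return outside / len(points)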
def randomWalk(points, nonacessquadrants, time):
dt = 1
for atime in range(time):
for i in points:
a = uniform(0, 2*pi)
vvar, hvar = V*dt*sin(a), V*dt*cos(a)
i[0] += vvar; i[1] += hvar
#if findquadrant(i, size) in nonacessquadrants: i[0] -= 2*vvar
#if findquadrant(i, size) in nonacessquadrants: i[0] += 2*vvar; i[1] -= 2*hvar
#if findquadrant(i, size) in nonacessquadrants: i[0] -= 2*vvar
return points
def gridify(somelist, size):
shape = (3,3)
grid = np.zeros(shape)
for point in somelist:
quadrant = findquadrant(point,size)
grid[quadrant[0]][quadrant[1]] += 1
grid = grid/grid.sum()
return np.array(grid)
V = 300/60 #meters per minute
dt = 1 #min
npoints = 40000
size = 68
def newpoints(n):
return np.array([[uniform(-size/2,size/2),uniform(-size/2,size/2)] for i in range(n)])
%%time
def MaxStep(box):
a = 0
for i in range(7):
a += findStep(newpoints(npoints), box)
    a = a/7  # average the escape time over the 7 runs above
for i in range(int(a),0, -1):
if 24*60 % i == 0: return i
    return "step search failed"  # translated from the original "deu ruim"
b= MaxStep(68.66)
a = 24*60/b
print(a)
%%time
allmatrices = list(product(*(repeat((0, 1), 8))))
print(len(allmatrices))
dictionary_matrix_to_num = {}
dict_num_to_weights = {}
nowalls = gridify(randomWalk(newpoints(npoints), [], MaxStep(68.66)), size)
avgcorner = (nowalls[0,0]+nowalls[2,2]+nowalls[2,0]+nowalls[0,2])/4
avgwall = (nowalls[1,0]+nowalls[0,1]+nowalls[2,1]+nowalls[1,2])/4
nowalls[0,0], nowalls[2,2], nowalls[2,0], nowalls[0,2] = [avgcorner for i in range(4)]
nowalls[1,0], nowalls[0,1], nowalls[2,1], nowalls[1,2] = [avgwall for i in range(4)]
print(nowalls)
for index, case in enumerate(allmatrices):
dictionary_matrix_to_num[case] = index
multiplier = np.ones((3,3))
if case[0] == 1: multiplier[0,0] = 0
if case[1] == 1: multiplier[1,0] = 0
if case[2] == 1: multiplier[2,0] = 0
if case[3] == 1: multiplier[0,1] = 0
if case[4] == 1: multiplier[2,1] = 0
if case[5] == 1: multiplier[0,2] = 0
if case[6] == 1: multiplier[1,2] = 0
if case[7] == 1: multiplier[2,2] = 0
if index%25 == 0: print(index, case)
dict_num_to_weights[index] = nowalls*multiplier/(nowalls*multiplier).sum()
a = dict_num_to_weights[145]
print(a)
plt.imshow(dict_num_to_weights[145])
plt.show()
import pickle as pkl
MyDicts = [dictionary_matrix_to_num, dict_num_to_weights]
pkl.dump( MyDicts, open( "myDicts.p", "wb" ) )
#to read the pickled dicts use:
# dictionary_matrix_to_num, dict_num_to_weights = pkl.load( open ("myDicts.p", "rb") )
a = [(1,2), (3,4)]
a, b = zip(*a)
a
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Goal
Step2: The anomalies are the minority.
Step3: In unsupervised approaches, the label is not used
Step4: All the methods we will use, except iForests, performs best if the dataset is scaled
Step5: K-means clustering
Step6: The array clusters contains the cluster id of each sample
Step7: Check how many elements per cluster
Step8: If you try to compute the silhouette score with ordinary sklearn functions, it is extremely slow.
Step9: We will thus used an alternative implementation from Alexandre Abraham.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
# We resort to a third party library to plot silhouette diagrams
! pip install yellowbrick
from yellowbrick.cluster import SilhouetteVisualizer
! wget https://datahub.io/machine-learning/creditcard/r/creditcard.csv
df = pd.read_csv('creditcard.csv')
df.head()
df.info(verbose=True)
df['Class'].value_counts()
df = df.drop('Time', axis=1)
X = df.drop('Class', axis=1)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
K =3
model = KMeans(n_clusters=K, random_state=3)
clusters = model.fit_predict(X_scaled)
clusters[0:5]
plt.hist(clusters)
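# Sanity check (the labels are not used by the clustering itself): how do
# the known fraud labels distribute over the clusters?
pd.crosstab(clusters, df['Class'], rownames=['cluster'], colnames=['Class'])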
# Scaffolding adapted from
# https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html:
# it only prepares the axes of the silhouette diagram; filling it requires
# the per-sample silhouette values computed further below.
fig, (ax1) = plt.subplots()
# The silhouette coefficient can range from -1 to 1
ax1.set_xlim([-1, 1])
# The (n_clusters+1)*10 inserts blank space between the silhouette plots of
# individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (K + 1) * 10])
print("Distances to be computed: ", "{:e}".format( X_scaled.shape[0]**2) )
! wget https://gist.githubusercontent.com/AlexandreAbraham/5544803/raw/221aa797cdbfa9e9f75fc0aabb2322dcc11c8991/unsupervised_alt.py
import unsupervised_alt
# The silhouette_score gives the average value for all the samples and a
# perspective into the density and separation of the formed clusters.
# Computing it exactly needs all pairwise distances (see the count above),
# so we let sklearn estimate it on a random subsample; sample_size is a
# standard sklearn argument. The block-wise implementation imported above
# (unsupervised_alt) is the alternative for exact full-data scores.
silhouette_avg = silhouette_score(X_scaled, clusters, sample_size=10000, random_state=3)
# Computing the silhouette score for each sample is the expensive step that
# the alternative implementation is meant to address:
sample_silhouette_values = silhouette_samples(X_scaled, clusters)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: X_train and y_train sets are built
Step2: We set an hypothesis and call the Gradient Boosting cross validation
Step3: Same but this time we call the scikit-learn cross validation
Step4: Finally, the result
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import os
import os.path as osp
import numpy as np
import pysptools.ml as ml
import pysptools.skl as skl
from sklearn.model_selection import train_test_split
home_path = os.environ['HOME']
source_path = osp.join(home_path, 'dev-data/CZ_hsdb')
result_path = None
def print_step_header(step_id, title):
print('================================================================')
print('{}: {}'.format(step_id, title))
print('================================================================')
print()
# img1
img1_scaled, img1_cmap = ml.get_scaled_img_and_class_map(source_path, result_path, 'img1',
[['Snow',{'rec':(41,79,49,100)}]],
skl.HyperGaussianNB, None,
display=False)
# img2
img2_scaled, img2_cmap = ml.get_scaled_img_and_class_map(source_path, result_path, 'img2',
[['Snow',{'rec':(83,50,100,79)},{'rec':(107,151,111,164)}]],
skl.HyperLogisticRegression, {'class_weight':{0:1.0,1:5}},
display=False)
def step_GradientBoostingCV(tune, update, cv_params, verbose):
print_step_header('Step', 'GradientBoosting cross validation')
tune.print_params('input')
tune.step_GradientBoostingCV(update, cv_params, verbose)
def step_GridSearchCV(tune, params, title, verbose):
print_step_header('Step', 'scikit-learn cross-validation')
tune.print_params('input')
tune.step_GridSearchCV(params, title, verbose)
tune.print_params('output')
verbose = False
n_shrink = 3
snow_fname = ['img1','img2']
nosnow_fname = ['imga1','imgb1','imgb6','imga7']
all_fname = snow_fname + nosnow_fname
snow_img = [img1_scaled,img2_scaled]
nosnow_img = ml.batch_load(source_path, nosnow_fname, n_shrink)
snow_cmap = [img1_cmap,img2_cmap]
M = snow_img[0]
bkg_cmap = np.zeros((M.shape[0],M.shape[1]))
X,y = skl.shape_to_XY(snow_img+nosnow_img,
snow_cmap+[bkg_cmap,bkg_cmap,bkg_cmap,bkg_cmap])
seed = 5
train_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=train_size,
random_state=seed)
start_param = {'max_depth':10,
'min_child_weight':1,
'gamma':0,
'subsample':0.8,
'colsample_bytree':0.5,
'scale_pos_weight':1.5}
# Tune can be call with HyperXGBClassifier or HyperLGBMClassifier,
# but hyperparameters and cv parameters are differents
t = ml.Tune(ml.HyperXGBClassifier, start_param, X_train, y_train)
# Step 1: Fix learning rate and number of estimators for tuning tree-based parameters
step_GradientBoostingCV(t, {'learning_rate':0.2,'n_estimators':500,'silent':1},
{'verbose_eval':False},
True)
# After reading the cross validation results we manually set n_estimator
t.p_update({'n_estimators':9})
t.print_params('output')
# Step 2: Tune max_depth and min_child_weight
step_GridSearchCV(t, {'max_depth':[24,25, 26], 'min_child_weight':[1]}, 'Step 2', True)
print(t.get_p_current())
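# Possible follow-up (a sketch, not part of the original tuning script):
# train with the tuned parameters and score the held-out split, assuming
# HyperXGBClassifier follows the usual sklearn fit/score convention.
clf = ml.HyperXGBClassifier(**t.get_p_current())
clf.fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))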
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The $2$-norm
Step2: The $1$-norm
Step3: The $\infty$-norm
Step4: ```{admonition} Remark
Step5: in this case $D=\left[\begin{array}{cc} \frac{1}{25} & 0 \\ 0 & \frac{1}{9} \end{array}\right] = \left[\begin{array}{cc} \frac{1}{d_1} & 0 \\ 0 & \frac{1}{d_2} \end{array}\right]$
Step6: ```{admonition} Exercise
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
f=lambda x: np.sqrt(x[:,0]**2 + x[:,1]**2) #definición de norma2
density=1e-5
density_p=int(2.5*10**3)
x=np.arange(-1,1,density)
y1=np.sqrt(1-x**2)
y2=-np.sqrt(1-x**2)
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<1
x_p_subset=x_p[ind]
plt.plot(x,y1,'b',x,y2,'b')  # boundary of the unit circle computed above
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.title('Points in the plane satisfying $||x||_2 < 1$')
plt.grid()
plt.show()
f=lambda x:np.abs(x[:,0]) + np.abs(x[:,1]) #definición de norma1
density=1e-5
density_p=int(2.5*10**3)
x1=np.arange(0,1,density)
x2=np.arange(-1,0,density)
y1=1-x1
y2=1+x2
y3=x1-1
y4=-1-x2
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.plot(x1,y1,'b',x2,y2,'b',x1,y3,'b',x2,y4,'b')
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.title('Points in the plane satisfying $||x||_1 \leq 1$')
plt.grid()
plt.show()
f=lambda x:np.max(np.abs(x),axis=1)  # definition of the infinity norm
point1 = (-1, -1)
point2 = (-1, 1)
point3 = (1, 1)
point4 = (1, -1)
point5 = point1
arr = np.row_stack((point1, point2,
point3, point4,
point5))
density_p=int(2.5*10**3)
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.plot(arr[:,0], arr[:,1])
plt.title('Points in the plane satisfying $||x||_{\infty} \leq 1$')
plt.grid()
plt.show()
d1_inv=1/5
d2_inv=1/3
f=lambda x: np.sqrt((d1_inv*x[:,0])**2 + (d2_inv*x[:,1])**2)  # weighted 2-norm induced by the diagonal matrix D
density=1e-5
density_p=int(2.5*10**3)
x=np.arange(-1/d1_inv,1/d1_inv,density)
y1=1.0/d2_inv*np.sqrt(1-(d1_inv*x)**2)
y2=-1.0/d2_inv*np.sqrt(1-(d1_inv*x)**2)
x_p=np.random.uniform(-1/d1_inv,1/d1_inv,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.plot(x,y1,'b',x,y2,'b')
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.title('Points in the plane satisfying $||x||_D \leq 1$')
plt.grid()
plt.show()
A=np.array([[1,2],[0,2]])
density=1e-5
x1=np.arange(0,1,density)
x2=np.arange(-1,0,density)
x1_y1 = np.column_stack((x1,1-x1))
x2_y2 = np.column_stack((x2,1+x2))
x1_y3 = np.column_stack((x1,x1-1))
x2_y4 = np.column_stack((x2,-1-x2))
apply_A = lambda vec : np.transpose(A@np.transpose(vec))
A_to_vector_1 = apply_A(x1_y1)
A_to_vector_2 = apply_A(x2_y2)
A_to_vector_3 = apply_A(x1_y3)
A_to_vector_4 = apply_A(x2_y4)
plt.subplot(1,2,1)
plt.plot(x1_y1[:,0],x1_y1[:,1],'b',
x2_y2[:,0],x2_y2[:,1],'b',
x1_y3[:,0],x1_y3[:,1],'b',
x2_y4[:,0],x2_y4[:,1],'b')
e1 = np.array([[0,0],
[1, 0]])
e2 = np.array([[0, 0],
[0, 1]])
plt.plot(e2[:,0], e2[:,1],'g',
e1[:,0], e1[:,1],'b')
plt.xlabel('Vectors with 1-norm at most 1')
plt.grid()
plt.subplot(1,2,2)
plt.plot(A_to_vector_1[:,0],A_to_vector_1[:,1],'b',
A_to_vector_2[:,0],A_to_vector_2[:,1],'b',
A_to_vector_3[:,0],A_to_vector_3[:,1],'b',
A_to_vector_4[:,0],A_to_vector_4[:,1],'b')
A_to_vector_e2 = apply_A(e2)
plt.plot(A_to_vector_e2[:,0],A_to_vector_e2[:,1],'g')
plt.grid()
plt.title('Effect of the matrix A on the vectors with 1-norm at most 1')
plt.show()
print(np.linalg.norm(A,1))
print(np.linalg.norm(A,2))
_,s,_ = np.linalg.svd(A)
print(np.max(s))
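# Further checks with the standard identities: the induced 1-norm is the
# maximum absolute column sum, the induced infinity-norm the maximum
# absolute row sum.
print(np.max(np.sum(np.abs(A), axis=0)))  # matches np.linalg.norm(A, 1)
print(np.linalg.norm(A, np.inf))
print(np.max(np.sum(np.abs(A), axis=1)))  # matches the infinity-norm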
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
Step2: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
Step3: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order)
Step4: Problem set #2
Step5: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this
Step7: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Step8: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above
Step9: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
|
<ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
h3_tag = document.find_all('h3')
print(type(h3_tag))
[tag.string for tag in h3_tag]
print(len(document.find_all('h3')))
tel_tag = document.find(class_="tel")
print(tel_tag.contents[0])
widget_list = document.find_all(class_="wname")
widget_names = [tag.text for tag in widget_list]
for name in widget_names:
    print(name)
widgets = []
# your code here
# each part-number cell identifies one widget row; the remaining fields live
# in sibling <td> cells of the same <tr>
for partno_cell in document.find_all('td', class_="partno"):
    row = partno_cell.parent
    widgets.append({
        'partno': partno_cell.string,
        'wname': row.find('td', class_="wname").string,
        'price': row.find('td', class_="price").string,
        'quantity': row.find('td', class_="quantity").string,
    })
# end your code
widgets
widgets = []
# your code here
# end your code
widgets
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
category_counts = {}
# your code here
# end your code
category_counts
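# One possible approach for the "Hallowed Widgets" exercise described above
# (a sketch; the header text and the sibling-<table> layout are assumptions
# based on the problem statement):
for h3_tag in document.find_all('h3'):
    if h3_tag.string and h3_tag.string.strip().lower() == 'hallowed widgets':
        table = h3_tag.find_next_sibling('table')
        for partno_cell in table.find_all('td', class_="partno"):
            print(partno_cell.string)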
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is it?
Step2: Define a Function to Read and Preprocess Text
Step3: Let's take a look at the training corpus
Step4: And the testing corpus looks like this
Step5: Notice that the testing corpus is just a list of lists and does not contain any tags.
Step6: Build a Vocabulary
Step7: Essentially, the vocabulary is a dictionary (accessible via model.wv.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.wv.vocab['penalty'].count for counts for the word penalty).
Step8: Inferring a Vector
Step9: Assessing Model
Step10: Let's count how each document ranks with respect to the training corpus
Step11: Basically, greater than 95% of the inferred documents are found to be most similar to themselves, and about 5% of the time they are mistakenly most similar to another document. Checking an inferred vector against a training vector is a sort of 'sanity check' as to whether the model is behaving in a usefully consistent manner, though not a real 'accuracy' value.
Step12: Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second-ranked document should be significantly lower (assuming the documents are in fact different), and the reasoning becomes obvious when we examine the text itself
Step13: Testing the Model
|
<ASSISTANT_TASK:>
Python Code:
import gensim
import os
import collections
import smart_open
import random
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
lee_test_file = test_data_dir + os.sep + 'lee.cor'
def read_corpus(fname, tokens_only=False):
with smart_open.smart_open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
if tokens_only:
yield gensim.utils.simple_preprocess(line)
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
train_corpus[:2]
print(test_corpus[:2])
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=55)
model.build_vocab(train_corpus)
%time model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)
model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
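# Inference is stochastic, so repeated calls on the same words return
# slightly different vectors of length vector_size:
import numpy as np
vec1 = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
vec2 = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
print(len(vec1), float(np.linalg.norm(vec1 - vec2)))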
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
collections.Counter(ranks) # Results vary due to random seeding and very small corpus
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(train_corpus) - 1)
# Compare and print the most/median/least similar documents from the train corpus
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ENDF
Step2: We can access the parameters contained within File 32 in a similar manner to the File 2 parameters from before.
Step3: The newly created object will contain multiple resonance regions within gd157_endf.resonance_covariance.ranges. We can access the full covariance matrix from File 32 for a given range by
Step4: This covariance matrix currently only stores the upper triangular portion as covariance matrices are symmetric. Plotting the covariance matrix
Step5: The correlation matrix can be constructed using the covariance matrix and also give some insight into the relations among the parameters.
Step6: Sampling and Reconstruction
Step7: The sampling routine requires the incorporation of the openmc.data.ResonanceRange for the same resonance range object. This allows each sample itself to be its own openmc.data.ResonanceRange with a new set of parameters. Looking at some of the sampled parameters below
Step8: We can reconstruct the cross section from the sampled parameters using the reconstruct method of openmc.data.ResonanceRange. For more on reconstruction see the Nuclear Data example notebook.
Step9: Subset Selection
Step10: The subset method will also store the corresponding subset of the covariance matrix
Step11: Checking the size of the new covariance matrix to be sure it was sampled properly
Step12: And finally, we can sample from the subset as well
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
from pprint import pprint
import shutil
import subprocess
import urllib.request
import h5py
import numpy as np
import matplotlib.pyplot as plt
import openmc.data
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'
filename, headers = urllib.request.urlretrieve(url, 'gd157.endf')
# Load into memory
gd157_endf = openmc.data.IncidentNeutron.from_endf(filename, covariance=True)
gd157_endf
gd157_endf.resonance_covariance.ranges[0].parameters[:5]
covariance = gd157_endf.resonance_covariance.ranges[0].covariance
plt.imshow(covariance,cmap='seismic',vmin=-0.08, vmax=0.08)
plt.colorbar()
corr = np.zeros([len(covariance),len(covariance)])
for i in range(len(covariance)):
for j in range(len(covariance)):
corr[i, j]=covariance[i, j]/covariance[i, i]**(0.5)/covariance[j, j]**(0.5)
plt.imshow(corr, cmap='seismic',vmin=-1.0, vmax=1.0)
plt.colorbar()
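# Equivalent vectorized construction of the correlation matrix built in the
# loop above (corr_ij = cov_ij / (sigma_i * sigma_j)):
std = np.sqrt(np.diag(covariance))
corr_vectorized = np.asarray(covariance) / np.outer(std, std)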
rm_resonance = gd157_endf.resonances.ranges[0]
n_samples = 5
samples = gd157_endf.resonance_covariance.ranges[0].sample(n_samples)
type(samples[0])
print('Sample 1')
samples[0].parameters[:5]
print('Sample 2')
samples[1].parameters[:5]
gd157_endf.resonances.ranges
energy_range = [rm_resonance.energy_min, rm_resonance.energy_max]
energies = np.logspace(np.log10(energy_range[0]),
np.log10(energy_range[1]), 10000)
for sample in samples:
xs = sample.reconstruct(energies)
elastic_xs = xs[2]
plt.loglog(energies, elastic_xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
lower_bound = 2; # inclusive
upper_bound = 2; # inclusive
rm_res_cov_sub = gd157_endf.resonance_covariance.ranges[0].subset('J',[lower_bound,upper_bound])
rm_res_cov_sub.file2res.parameters[:5]
rm_res_cov_sub.covariance
gd157_endf.resonance_covariance.ranges[0].covariance.shape
old_n_parameters = gd157_endf.resonance_covariance.ranges[0].parameters.shape[0]
old_shape = gd157_endf.resonance_covariance.ranges[0].covariance.shape
new_n_parameters = rm_res_cov_sub.file2res.parameters.shape[0]
new_shape = rm_res_cov_sub.covariance.shape
print('Number of parameters\nOriginal: '+str(old_n_parameters)+'\nSubet: '+str(new_n_parameters)+'\nCovariance Size\nOriginal: '+str(old_shape)+'\nSubset: '+str(new_shape))
samples_sub = rm_res_cov_sub.sample(n_samples)
samples_sub[0].parameters[:5]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function requires four parameters
Step2: The nice thing about Quantities is that once the unit is specified you don't need to worry about rescaling the values to a common unit 'cause Quantities takes care of this for you
Step3: For a complete set of operations with quantities refer to its documentation.
Step4: Both spiketrains are instances of neo.core.spiketrain.SpikeTrain class
Step5: The important properties of a SpikeTrain are
Step6: Before exploring the statistics of spiketrains, let's look at the rasterplot. In the next section we'll compare numerically the difference between two.
Step7: 2. Rate estimation
Step8: The mean firing rate of spiketrain1 is higher than of spiketrain2 as expected from the Figure 1.
Step9: Additionally, the period within the spike train during which to estimate the firing rate can be further limited using the t_start and t_stop keyword arguments. Here, we limit the firing rate estimation to the first second of the spiketrain.
Step10: In some (rare) cases multiple spiketrains can be represented in multidimensional arrays when they contain the same number of spikes. In such cases, the mean firing rate can be calculated for multiple spiketrains at once by specifying the axis the along which to calculate the firing rate. By default, if no axis is specified, all spiketrains are pooled together before estimating the firing rate.
Step11: 2.2. Time histogram
Step12: AnalogSignal is a container for analog signals of any type, sampled at a fixed sampling rate.
Step13: Additionally, time_histogram can be limited to a shorter time period by using the keyword arguments t_start and t_stop, as described for mean_firing_rate.
Step14: The resulting rate estimate is again an AnalogSignal with the sampling rate of 1 / (50 ms).
Step15: Additionally, the convolution kernel type can be specified via the kernel keyword argument. E.g. to use an gaussian kernel, we do as follows
Step16: To compare all three methods of firing rate estimation, we visualize the results of all methods in a common plot.
Step17: Coefficient of Variation (CV)
Step18: Let's look at the rasterplot of the first second of spiketrains.
Step19: From the plot you can see the random nature of each Poisson spike train. Let us verify it numerically by calculating the distribution of the 100 CVs obtained from inter-spike intervals (ISIs) of these spike trains.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from quantities import ms, s, Hz
from elephant.spike_train_generation import homogeneous_poisson_process, homogeneous_gamma_process
help(homogeneous_poisson_process)
t_start = 275.5 * ms
print(t_start)
t_start2 = 3. * s
t_start_sum = t_start + t_start2
print(t_start_sum)
np.random.seed(28) # to make the results reproducible
spiketrain1 = homogeneous_poisson_process(rate=10*Hz, t_start=0.*ms, t_stop=10000.*ms)
spiketrain2 = homogeneous_gamma_process(a=3, b=10*Hz, t_start=0.*ms, t_stop=10000.*ms)
print("spiketrain1 type is", type(spiketrain1))
print("spiketrain2 type is", type(spiketrain2))
print(f"spiketrain2 has {len(spiketrain2)} spikes:")
print(" t_start:", spiketrain2.t_start)
print(" t_stop:", spiketrain2.t_stop)
print(" spike times:", spiketrain2.times)
plt.figure(figsize=(8, 3))
plt.eventplot([spiketrain1.magnitude, spiketrain2.magnitude], linelengths=0.75, color='black')
plt.xlabel('Time (ms)', fontsize=16)
plt.yticks([0,1], labels=["spiketrain1", "spiketrain2"], fontsize=16)
plt.title("Figure 1");
from elephant.statistics import mean_firing_rate
print("The mean firing rate of spiketrain1 is", mean_firing_rate(spiketrain1))
print("The mean firing rate of spiketrain2 is", mean_firing_rate(spiketrain2))
fr1 = len(spiketrain1) / (spiketrain1.t_stop - spiketrain1.t_start)
fr2 = len(spiketrain2) / (spiketrain2.t_stop - spiketrain2.t_start)
print("The mean firing rate of spiketrain1 is", fr1)
print("The mean firing rate of spiketrain2 is", fr2)
mean_firing_rate(spiketrain1, t_start=0*ms, t_stop=1000*ms)
multi_spiketrains = np.array([[1,2,3],[4,5,6],[7,8,9]])*ms
mean_firing_rate(multi_spiketrains, axis=0, t_start=0*ms, t_stop=5*ms)
from elephant.statistics import time_histogram, instantaneous_rate
histogram_count = time_histogram([spiketrain1], 500*ms)
print(type(histogram_count), f"of shape {histogram_count.shape}: {histogram_count.shape[0]} samples, {histogram_count.shape[1]} channel")
print('sampling rate:', histogram_count.sampling_rate)
print('times:', histogram_count.times)
print('counts:', histogram_count.T[0])
histogram_rate = time_histogram([spiketrain1], 500*ms, output='rate')
print('times:', histogram_rate.times)
print('rate:', histogram_rate.T[0])
inst_rate = instantaneous_rate(spiketrain1, sampling_period=50*ms)
print(type(inst_rate), f"of shape {inst_rate.shape}: {inst_rate.shape[0]} samples, {inst_rate.shape[1]} channel")
print('sampling rate:', inst_rate.sampling_rate)
print('times (first 10 samples): ', inst_rate.times[:10])
print('instantaneous rate (first 10 samples):', inst_rate.T[0, :10])
from elephant.kernels import GaussianKernel
instantaneous_rate(spiketrain1, sampling_period=20*ms, kernel=GaussianKernel(200*ms))
plt.figure(dpi=150)
# plotting the original spiketrain
plt.plot(spiketrain1, [0]*len(spiketrain1), 'r', marker=2, ms=25, markeredgewidth=2, lw=0, label='poisson spike times')
# mean firing rate
plt.hlines(mean_firing_rate(spiketrain1), xmin=spiketrain1.t_start, xmax=spiketrain1.t_stop, linestyle='--', label='mean firing rate')
# time histogram
plt.bar(histogram_rate.times, histogram_rate.magnitude.flatten(), width=histogram_rate.sampling_period, align='edge', alpha=0.3, label='time histogram (rate)')
# instantaneous rate
plt.plot(inst_rate.times.rescale(ms), inst_rate.rescale(histogram_rate.dimensionality).magnitude.flatten(), label='instantaneous rate')
# axis labels and legend
plt.xlabel('time [{}]'.format(spiketrain1.times.dimensionality.latex))
plt.ylabel('firing rate [{}]'.format(histogram_rate.dimensionality.latex))
plt.xlim(spiketrain1.t_start, spiketrain1.t_stop)
plt.legend()
plt.show()
spiketrain_list = [
homogeneous_poisson_process(rate=10.0*Hz, t_start=0.0*s, t_stop=100.0*s)
for i in range(100)]
plt.figure(dpi=150)
plt.eventplot([st.magnitude for st in spiketrain_list], linelengths=0.75, linewidths=0.75, color='black')
plt.xlabel("Time, s")
plt.ylabel("Neuron id")
plt.xlim([0, 1]);
from elephant.statistics import isi, cv
cv_list = [cv(isi(spiketrain)) for spiketrain in spiketrain_list]
# let's plot the histogram of CVs
plt.figure(dpi=100)
plt.hist(cv_list)
plt.xlabel('CV')
plt.ylabel('count')
plt.title("Coefficient of Variation of homogeneous Poisson process");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perform basic feature extraction
Step2: Compress x1 and x2 into a single principal component
Step3: Principal component analysis finds vectors that represent the direction(s) of most variance in a data set. These are called eigenvectors.
Step4: Principal components are the projection of the data onto these eigenvectors. Principal components are usually centered around zero and each principal component is uncorrelated with all the others, i.e. principal components are orthogonal to one another. Because principal components represent the highest-variance dimensions in the data and are not correlated with one another, they do an excellent job summarizing a data set with only a few dimensions (e.g. columns), and PCA is probably the most popular feature extraction technique.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd # pandas for handling mixed data sets
import numpy as np # numpy for basic math and matrix operations
import matplotlib.pyplot as plt # pyplot for plotting
# scikit-learn for machine learning and data preprocessing
from sklearn.decomposition import PCA
scratch_df = pd.DataFrame({'x1': [1, 2.5, 3, 4.5],
'x2': [1.5, 2, 3.5, 4]})
scratch_df
pca = PCA(n_components=1)
pca.fit(scratch_df)
print('First eigenvector = ', pca.components_)
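# Optional check (sketch): explained_variance_ratio_ reports the share of
# total variance captured by the retained component(s).
print('Variance explained by PC1 =', pca.explained_variance_ratio_)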
scratch_df['Centered_PC1'] = pca.transform(scratch_df[['x1', 'x2']])
scratch_df['Non_centered_PC1'] = pca.transform(scratch_df[['x1', 'x2']] + pca.mean_)
scratch_df['PC1_x1_back_projection'] = pd.Series(np.arange(1,8,2)) * pca.components_[0][0]
scratch_df['PC1_x2_back_projection'] = pd.Series(np.arange(1,8,2)) * pca.components_[0][1]
scratch_df
x = plt.scatter(scratch_df.x1, scratch_df.x2, color='b')
pc, = plt.plot(scratch_df.PC1_x1_back_projection, scratch_df.PC1_x2_back_projection, color='r')
plt.legend([x, pc], ['Observed data (x)', 'First principal component projection'], loc=4)
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
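# Sketch (not part of the assignment code): the fully vectorized version
# typically expands ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y; a toy check
# on random arrays:
A = np.random.randn(3, 4)
B = np.random.randn(5, 4)
d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A.dot(B.T)
d_direct = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))
print(np.allclose(np.sqrt(np.maximum(d2, 0)), d_direct))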
# Let's compare how fast the implementations are
def time_function(f, *args):
# Call a function f with args and return the time (in seconds) that it took to execute.
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
k_to_accuracies[k] = []
for i in range(num_folds):
train_indices = [j for j in range(num_folds) if j != i]
validate_index = i
X_for_train = np.vstack([X_train_folds[j] for j in train_indices])
y_for_train = np.hstack([y_train_folds[j] for j in train_indices])
X_for_validate = X_train_folds[validate_index]
y_for_validate = y_train_folds[validate_index]
classifier = KNearestNeighbor()
classifier.train(X_for_train, y_for_train)
validate_dists = classifier.compute_distances_no_loops(X_for_validate)
y_validate_pred = classifier.predict_labels(validate_dists, k=k)
num_correct = np.sum(y_validate_pred == y_for_validate)
accuracy = float(num_correct) / len(y_for_validate)
k_to_accuracies[k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
mean_accuracies = [np.mean(k_to_accuracies[k]) for k in sorted(k_to_accuracies)]
best_k = k_choices[np.argmax(mean_accuracies)]
print('Best k is: %d' % best_k)
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Power law & Maoz
Step2: The Maoz rate and a power law with exponent -1 give the same result, as visible below.
Step3: Gaussian
Step4: gauss_dtd=[4e9,3.2e9] (as mentioned in Wiersma09)
Step5: Difference in rate
Step6: Exponential
Step7: exp_dtd (as used in Wiersma09) 10e9
|
<ASSISTANT_TASK:>
Python Code:
%pylab nbagg
import sygma as s
reload(s)
s.__file__
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import numpy as np
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='power_law',beta_pow=-1,
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='power_law',beta_pow=-2,
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s3_maoz=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='maoz',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s1.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t{^-1}$',marker1='o')
s2.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-2}$',marker1='x',color1='b')
s3_maoz.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-1}$, maoz',marker1='x',color1='b',shape1='--')
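# Illustrative sketch (not part of SYGMA): a t^beta delay-time distribution
# normalized to integrate to 1 over [tmin, tmax]; the bounds below are
# assumptions chosen to match the simulation end time above.
def dtd_power_law(t, beta=-1.0, tmin=4e7, tmax=1.3e10):
    norm = 1.0 / quad(lambda tt: tt**beta, tmin, tmax)[0]
    return norm * t**beta
print(dtd_power_law(1e9))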
gauss_dtd=[1e9,6.6e8]
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',gauss_dtd=gauss_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
g_dt1=s2
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
# [1e9, 6.6e8]  (leftover note of the default gauss_dtd values)
tau= gauss_dtd[0] #1e9 #3.3e9 #characteristic delay time
sigma=gauss_dtd[1] #0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=6,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.show()
gauss_dtd=[4e9,2e9]
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',gauss_dtd=gauss_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
g_dt2=s2
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
# [1e9, 6.6e8]  (leftover note of the default gauss_dtd values)
tau= gauss_dtd[0] #1e9 #3.3e9 #characteristic delay time
sigma=gauss_dtd[1] #0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=7,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.show()
g_dt1.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='gauss, 1',marker1='o',shape1='--')
g_dt2.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='gauss, 2',marker1='x',markevery=1)
print g_dt1.gauss_dtd
print g_dt2.gauss_dtd
exp_dtd=2e9
#import read_yields as ry
import sygma as s
reload(s)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',exp_dtd=exp_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
e_dt1=s1
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#following is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle, since the normalization nb_1a_per_m is already set, the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be : ', Yield_tot_sim/Yield_tot
s1.plot_mass(fig=8,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
exp_dtd=10e9
#import read_yields as ry
import sygma as s
reload(s)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',exp_dtd=exp_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
e_dt2=s1
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= exp_dtd
mmin=0
mmax=0
inte=0
#following is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle, since the normalization nb_1a_per_m is already set, the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be : ', Yield_tot_sim/Yield_tot
s1.plot_mass(fig=9,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
e_dt1.plot_sn_distr(fig=77,rate=True,rate_only='sn1a',label1='exp, 1',marker1='o')
e_dt2.plot_sn_distr(fig=77,rate=True,rate_only='sn1a',label1='exp, 2',marker1='x',markevery=1)
print e_dt1.exp_dtd,
print e_dt2.exp_dtd
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example of model training
Step4: Set path to data
Step5: Set path to a model with config
Step6: Model training
Step7: Run model evaluation
|
<ASSISTANT_TASK:>
Python Code:
!git clone https://github.com/google-research/google-research.git
import sys
import os
import tarfile
import urllib
import zipfile
sys.path.append('./google-research')
# TF streaming
from kws_streaming.models import models
from kws_streaming.models import utils
from kws_streaming.layers.modes import Modes
import tensorflow as tf
import numpy as np
import tensorflow.compat.v1 as tf1
import logging
from kws_streaming.models import model_flags
from kws_streaming.models import model_params
from kws_streaming.train import test
from kws_streaming.train import train
from kws_streaming import data
tf1.disable_eager_execution()
config = tf1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf1.Session(config=config)
# general imports
import matplotlib.pyplot as plt
import os
import json
import numpy as np
import scipy as scipy
import scipy.io.wavfile as wav
import scipy.signal
tf.__version__
tf1.reset_default_graph()
sess = tf1.Session()
tf1.keras.backend.set_session(sess)
tf1.keras.backend.set_learning_phase(0)
# set PATH to data sets (for example to speech commands V2):
# it can be downloaded from
# https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz
# if you already run "00_check-data.ipynb" then folder "data2" should be located in the current dir
current_dir = os.getcwd()
DATA_PATH = os.path.join(current_dir, "data2/")
def waveread_as_pcm16(filename):
# Read in audio data from a wav file. Return d, sr.
samplerate, wave_data = wav.read(filename)
# Read in wav file.
return wave_data, samplerate
def wavread_as_float(filename, target_sample_rate=16000):
# Read in audio data from a wav file. Return d, sr.
wave_data, samplerate = waveread_as_pcm16(filename)
desired_length = int(
round(float(len(wave_data)) / samplerate * target_sample_rate))
wave_data = scipy.signal.resample(wave_data, desired_length)
# Normalize short ints to floats in range [-1..1).
data = np.array(wave_data, np.float32) / 32768.0
return data, target_sample_rate
# Set path to wav file to visualize it
wav_file = os.path.join(DATA_PATH, "left/012187a4_nohash_0.wav")
# read audio file
wav_data, samplerate = wavread_as_float(wav_file)
assert samplerate == 16000
plt.plot(wav_data)
# select model name should be one of
model_params.HOTWORD_MODEL_PARAMS.keys()
# This notebook is configured to work with 'ds_tc_resnet' and 'svdf'.
MODEL_NAME = 'ds_tc_resnet'
# MODEL_NAME = 'svdf'
MODELS_PATH = os.path.join(current_dir, "models")
MODEL_PATH = os.path.join(MODELS_PATH, MODEL_NAME + "/")
MODEL_PATH
# delete previously trained model with its folder and create a new one:
os.makedirs(MODEL_PATH)
# get toy model settings
FLAGS = model_params.HOTWORD_MODEL_PARAMS[MODEL_NAME]
# set path to data and model (where model will be stored)
FLAGS.data_dir = DATA_PATH
FLAGS.train_dir = MODEL_PATH
# set speech feature extractor properties
FLAGS.mel_upper_edge_hertz = 7600
FLAGS.window_size_ms = 30.0
FLAGS.window_stride_ms = 10.0
FLAGS.mel_num_bins = 80
FLAGS.dct_num_features = 40
FLAGS.feature_type = 'mfcc_tf'
FLAGS.preprocess = 'raw'
# for numerical correctness of streaming and non streaming models set it to 1
# but for real use case streaming set it to 0
FLAGS.causal_data_frame_padding = 0
FLAGS.use_tf_fft = True
FLAGS.mel_non_zero_only = not FLAGS.use_tf_fft
# set training settings
FLAGS.train = 1
# reduced number of training steps for test only
# so model accuracy will be low,
# to improve accuracy set how_many_training_steps = '40000,40000,20000,20000'
FLAGS.how_many_training_steps = '400,400,400,400'
FLAGS.learning_rate = '0.001,0.0005,0.0001,0.00002'
FLAGS.lr_schedule = 'linear'
FLAGS.verbosity = logging.INFO
# data augmentation parameters
FLAGS.resample = 0.15
FLAGS.time_shift_ms = 100
FLAGS.use_spec_augment = 1
FLAGS.time_masks_number = 2
FLAGS.time_mask_max_size = 25
FLAGS.frequency_masks_number = 2
FLAGS.frequency_mask_max_size = 7
FLAGS.pick_deterministically = 1
FLAGS.model_name = MODEL_NAME
# model parameters are different for every model
if MODEL_NAME == 'svdf':
FLAGS.model_name = MODEL_NAME
FLAGS.svdf_memory_size = "4,10,10,10,10,10"
FLAGS.svdf_units1 = "16,32,32,32,64,128"
FLAGS.svdf_act = "'relu','relu','relu','relu','relu','relu'"
FLAGS.svdf_units2 = "40,40,64,64,64,-1"
FLAGS.svdf_dropout = "0.0,0.0,0.0,0.0,0.0,0.0"
FLAGS.svdf_pad = 0
FLAGS.dropout1 = 0.0
FLAGS.units2 = ''
FLAGS.act2 = ''
elif MODEL_NAME == 'ds_tc_resnet':
# it is an example of model streaming with strided convolution, strided pooling and dilated convolution
FLAGS.activation = 'relu'
FLAGS.dropout = 0.0
FLAGS.ds_filters = '128, 64, 64, 64, 128, 128'
FLAGS.ds_filter_separable = '1, 1, 1, 1, 1, 1'
FLAGS.ds_repeat = '1, 1, 1, 1, 1, 1'
FLAGS.ds_residual = '0, 1, 1, 1, 0, 0' # residual can not be applied with stride
# FLAGS.ds_kernel_size = '11, 5, 15, 7, 29, 1'
FLAGS.ds_kernel_size = '11, 5, 15, 17, 15, 1'
FLAGS.ds_dilation = '1, 1, 1, 1, 2, 1'
FLAGS.ds_stride = '1, 1, 1, 1, 1, 1'
FLAGS.ds_pool = '1, 2, 1, 1, 1, 1'
# model should be causal, so that we can convert it to streaming mode
# if model is non causal then all non causal components should use Delay layer
FLAGS.ds_padding = "'causal', 'causal', 'causal', 'causal', 'causal', 'causal'"
else:
raise ValueError("set parameters for other models")
FLAGS.clip_duration_ms = 1000 # standard audio file in this data set has 1 sec length
FLAGS.batch_size = 100
flags = model_flags.update_flags(FLAGS)
with open(os.path.join(flags.train_dir, 'flags.json'), 'wt') as f:
json.dump(flags.__dict__, f)
# visualize a model
model_non_stream_batch = models.MODELS[flags.model_name](flags)
tf.keras.utils.plot_model(
model_non_stream_batch,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
model_non_stream_batch.summary()
# Model training
train.train(flags)
folder_name = 'tf'
test.tf_non_stream_model_accuracy(flags, folder_name)
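# Sketch (assumption): the imported utils/Modes helpers can convert the
# trained non-streaming model to streaming inference mode, mirroring the
# kws_streaming examples; this conversion is not exercised above.
model_stream = utils.to_streaming_inference(
    model_non_stream_batch, flags, Modes.STREAM_INTERNAL_STATE_INFERENCE)
model_stream.summary()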
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Set up variables
Step8: Vertex constants
Step9: AutoML constants
Step10: Hardware Accelerators
Step11: Container (Docker) image
Step12: Tutorial
Step13: Dataset
Step14: Quick peek at your data
Step15: Dataset
Step16: Now save the unique dataset identifier for the Dataset resource instance you created.
Step17: Train the model
Step18: Construct the task requirements
Step19: Now save the unique identifier of the training pipeline you created.
Step20: Get information on a training pipeline
Step21: Deployment
Step22: Model information
Step23: Deploy the Model resource
Step24: Now get the unique identifier for the Endpoint resource you created.
Step25: Compute instance scaling
Step26: Deploy Model resource to the Endpoint resource
Step27: Make a online prediction request
Step28: Make a prediction
Step29: Undeploy the Model resource
Step30: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
!bq head -n 10 $IMPORT_FILE
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("gsod-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
TRANSFORMATIONS = [
{"auto": {"column_name": "year"}},
{"auto": {"column_name": "month"}},
{"auto": {"column_name": "day"}},
]
label_column = "mean_temp"
PIPE_NAME = "gsod_pipe-" + TIMESTAMP
MODEL_NAME = "gsod_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="regression"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("rootMeanSquaredError", metrics["rootMeanSquaredError"])
print("meanAbsoluteError", metrics["meanAbsoluteError"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
ENDPOINT_NAME = "gsod_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
MIN_NODES = 1
MAX_NODES = 1
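# Note on scaling: MIN_NODES == MAX_NODES pins a fixed replica count;
# setting MAX_NODES higher would let the endpoint autoscale with traffic.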
DEPLOYED_NAME = "gsod_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
INSTANCE = {"year": "1932", "month": "11", "day": "6"}
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [data]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(INSTANCE, endpoint_id, None)
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
from sklearn.metrics import mean_absolute_error  # assumes scikit-learn's metrics module
mean_absolute_error(y_test, predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up training data
Step2: Set up a Hadamard multitask model
Step3: Training the model
Step4: Make predictions with the model
|
<ASSISTANT_TASK:>
Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
train_x1 = torch.rand(50)
train_x2 = torch.rand(50)
train_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2
train_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2
class MultitaskGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.RBFKernel()
# We learn an IndexKernel for 2 tasks
# (so we'll actually learn 2x2=4 tasks with correlations)
self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=2, rank=1)
def forward(self,x,i):
mean_x = self.mean_module(x)
# Get input-input covariance
covar_x = self.covar_module(x)
# Get task-task covariance
covar_i = self.task_covar_module(i)
# Multiply the two together to get the covariance we want
covar = covar_x.mul(covar_i)
return gpytorch.distributions.MultivariateNormal(mean_x, covar)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
train_i_task1 = torch.full_like(train_x1, dtype=torch.long, fill_value=0)
train_i_task2 = torch.full_like(train_x2, dtype=torch.long, fill_value=1)
full_train_x = torch.cat([train_x1, train_x2])
full_train_i = torch.cat([train_i_task1, train_i_task2])
full_train_y = torch.cat([train_y1, train_y2])
# Here we have two items that we're passing in as train_inputs
model = MultitaskGPModel((full_train_x, full_train_i), full_train_y, likelihood)
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iterations = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iterations):
optimizer.zero_grad()
output = model(full_train_x, full_train_i)
loss = -mll(output, full_train_y)
loss.backward()
print('Iter %d/50 - Loss: %.3f' % (i + 1, loss.item()))
optimizer.step()
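# Optional peek (sketch, assuming GPyTorch's IndexKernel exposes the
# covar_matrix property): the learned 2x2 inter-task covariance B B^T + diag(v).
print('Task covariance:\n', model.task_covar_module.covar_matrix.evaluate())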
# Set into eval mode
model.eval()
likelihood.eval()
# Initialize plots
f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3))
# Test points every 0.02 in [0,1]
test_x = torch.linspace(0, 1, 51)
test_i_task1 = torch.full_like(test_x, dtype=torch.long, fill_value=0)
test_i_task2 = torch.full_like(test_x, dtype=torch.long, fill_value=1)
# Make predictions - one task at a time
# We control the task we care about using the indices
# The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances)
# See https://arxiv.org/abs/1803.06058
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    observed_pred_y1 = likelihood(model(test_x, test_i_task1))
observed_pred_y2 = likelihood(model(test_x, test_i_task2))
# Define plotting function
def ax_plot(ax, train_y, train_x, rand_var, title):
# Get lower and upper confidence bounds
lower, upper = rand_var.confidence_region()
# Plot training data as black stars
ax.plot(train_x.detach().numpy(), train_y.detach().numpy(), 'k*')
# Predictive mean as blue line
ax.plot(test_x.detach().numpy(), rand_var.mean.detach().numpy(), 'b')
# Shade in confidence
ax.fill_between(test_x.detach().numpy(), lower.detach().numpy(), upper.detach().numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
ax.set_title(title)
# Plot both tasks
ax_plot(y1_ax, train_y1, train_x1, observed_pred_y1, 'Observed Values (Likelihood)')
ax_plot(y2_ax, train_y2, train_x2, observed_pred_y2, 'Observed Values (Likelihood)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Date And Time Data
Step2: Show Days Of The Week
|
<ASSISTANT_TASK:>
Python Code:
# Load library
import pandas as pd
# Create dates
dates = pd.Series(pd.date_range('2/2/2002', periods=3, freq='M'))
# View data
dates
# Show days of the week
dates.dt.weekday_name
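# Note: weekday_name was removed in newer pandas releases; day_name() is the
# replacement (assumption: pandas >= 0.23).
dates.dt.day_name()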
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll get a perfect R2=1 if actual and predicted values are the same
Step2: Now if we're a bit off on our predictions R2 will be still pretty high but not a perfect 1.
Step3: Intuitively it makes sense that we get a worse R2 if the predictions are farther off.
Step4: Now here's the interesting situation
Step5: This is because R2 is calculated relative to a hypothetical model that always predicts the mean of the actual values. Obviously we don't have access to the mean of actual values at the time of prediction but we do at evaluation time.
Step6: Special cases
Step7: Predicting exactly the same as mean everytime gives us an R2=0
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn
seaborn.set_style('whitegrid')
def r2(actual, predicted):
if isinstance(actual, list):
actual = np.array(actual)
if isinstance(predicted, list):
predicted = np.array(predicted)
plt.scatter(actual, predicted);
plt.plot(actual, actual, 'r', alpha=0.5)
plt.scatter(actual, actual, facecolors='none', edgecolor='r', linestyle='-')
plt.axhline(y=actual.mean(), ls='dashed')
mean = actual.mean()
ss_total = ((actual - mean) ** 2).sum()
ss_residual = ((actual - predicted) ** 2).sum()
return 1 - ss_residual / ss_total
actual = list(range(10))
actual
predicted = list(range(10))
r2(actual, predicted)
predicted = list(range(8)) + [9, 10]
predicted
r2(actual, predicted)
predicted = list(range(8)) + [13, 15]
predicted
r2(actual, predicted)
predicted = list(range(8)) + [100, 150]
predicted
r2(actual, predicted)
actual = [1] * 10
r2(actual, [1] * 10)
r2(actual, [-100] * 10)
actual = list(range(10))
r2(actual, [1] * 10)
r2(actual, [4.5] * 10)
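# Cross-check (sketch): scikit-learn's r2_score should agree with our r2 on
# the same inputs.
from sklearn.metrics import r2_score
print(r2_score(list(range(10)), list(range(8)) + [13, 15]))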
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we assume this random variable's probability distribution follows a uniform distribution over the interval from 0 (inclusive) to 360 (exclusive), the answer is 0 (zero).
Step2: The cumulative distribution function, i.e. the cdf, has the following properties.
Step3: The mathematical tool available here is differentiation. Differentiation splits a function's domain into infinitely many intervals and computes the degree of change on each interval, i.e. the slope.
Step4: Discrete probability distributions
Step5: The probability mass function above describes an unfair die, rigged so that a 1 never comes up and a 6 comes up abnormally often.
|
<ASSISTANT_TASK:>
Python Code:
%install_ext https://raw.githubusercontent.com/meduz/ipython_magics/master/tikzmagic.py
%load_ext tikzmagic
%%tikz
\filldraw [fill=white] (0,0) circle [radius=1cm];
\foreach \angle in {60,30,...,-270} {
\draw[line width=1pt] (\angle:0.9cm) -- (\angle:1cm);
}
\draw (0,0) -- (90:0.8cm);
x = np.linspace(-4, 4)
y = sp.stats.norm.cdf(x)
plt.plot(x, y)
x = np.linspace(-4, 4, 20)
y = sp.stats.norm.cdf(x)
z = np.insert(np.diff(y), 0, None)
w = (4-(-4)) / 20
plt.bar(x-w, z/w, width=w)
x = np.linspace(-4, 4, 20)
y = sp.stats.norm.cdf(x)
z = np.insert(np.diff(y), 0, None)
w = (4-(-4))/20
plt.bar(x-w, z/w, width=w)
x1 = np.linspace(-4, 4, 200)
y1 = sp.stats.norm.pdf(x1)
plt.plot(x1, y1, 'lightsalmon', lw=4)
x = np.arange(1, 7)
y = np.array([0.0, 0.1, 0.1, 0.2, 0.2, 0.4])
plt.stem(x, y)
plt.xlim(0, 7)
plt.ylim(-0.01, 0.5)
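# Quick follow-up (sketch): the expected value of this unfair die is the
# pmf-weighted sum of the faces.
print((x * y).sum())  # 4.7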
x = np.arange(1, 7)
y = np.array([0.0, 0.1, 0.1, 0.2, 0.2, 0.4])
z = np.cumsum(y)
plt.step(x, z)
plt.xlim(0, 7)
plt.ylim(-0.01, 1.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Not all the entries have an infant mortality rate element. So we need to make sure the loop only looks at countries that contain an element named 'infant_mortality'.
Step2: Thus we have the countries with the ten lowest reported infant mortality rate element values (in order).
Step3: Top ten cities in the world by population as reported by the database.
Step4: Largest ethnic groups by population, based on the latest estimates from each country. Finally, we look for the longest river, largest lake, and highest airport. We can take advantage of the attributes already included in the database. Playing around with the river elements, we see that while long rivers may have multiple 'located' subelements (one per country), the river element itself has a country attribute that lists all the country codes together. This simplifies the problem. We assume there are no ties, simply because it's quicker and because such a coincidence seems unlikely.
|
<ASSISTANT_TASK:>
Python Code:
import xml.etree.ElementTree as ET
import pandas as pd

document = ET.parse('./data/mondial_database.xml')
root = document.getroot()
# Get the infant mortality rate of each country, keyed by country name.
# Not every country element reports an infant_mortality child, so only the
# countries that do are recorded.
inf_mort = dict()
for element in document.iterfind('country'):
for subelement in element.iterfind('infant_mortality'):
inf_mort[element.find('name').text] = float(subelement.text)
infmort_df = pd.DataFrame.from_dict(inf_mort, orient ='index')
infmort_df.columns = ['infant_mortality']
infmort_df.index.names = ['country']
infmort_df.sort_values(by = 'infant_mortality', ascending = True).head(10)
current_pop = 0
current_pop_year = 0
citypop = dict()
for country in document.iterfind('country'):
    for city in country.iter('city'):  # getiterator() was removed in Python 3.9
for subelement in city.iterfind('population'):
#compare attributes of identically named subelements. Use this to hold onto the latest pop estimate.
if int(subelement.attrib['year']) > current_pop_year:
current_pop = int(subelement.text)
current_pop_year = int(subelement.attrib['year'])
citypop[city.findtext('name')] = current_pop
current_pop = 0
current_pop_year = 0
citypop_df = pd.DataFrame.from_dict(citypop, orient ='index')
citypop_df.columns = ['population']
citypop_df.index.names = ['city']
citypop_df.sort_values(by = 'population', ascending = False).head(10)
ethn = dict()
current_pop = 0
current_pop_year = 0
for country in document.iterfind('country'):
    for population in country.iter('population'):  # getiterator() was removed in Python 3.9
#compare attributes of identically named subelements. Use this to hold onto the latest pop estimate.
#Probably faster way to do this if sure of tree structure (i.e. last element is always latest)
if int(population.attrib['year']) > current_pop_year:
current_pop = int(population.text)
current_pop_year = int(population.attrib['year'])
for ethn_gp in country.iterfind('ethnicgroup'):
if ethn_gp.text in ethn:
ethn[ethn_gp.text] += current_pop*float(ethn_gp.attrib['percentage'])/100
else:
ethn[ethn_gp.text] = current_pop*float(ethn_gp.attrib['percentage'])/100
current_pop = 0
current_pop_year = 0
ethnic_df = pd.DataFrame.from_dict(ethn, orient ='index')
ethnic_df.columns = ['population']
ethnic_df.index.names = ['ethnic_group']
ethnic_df.groupby(ethnic_df.index).sum().sort_values(by = 'population', ascending = False).head(10)
river_ctry=None
river_name= None
lake_ctry= None
lake_name= None
airport_ctry= None
airport_name = None
river_length= 0
lake_area = 0
airport_elv = 0
for river in document.iterfind('river'):
for length in river.iterfind('length'):
if river_length < float(length.text):
river_length=float(length.text)
river_ctry= river.attrib['country']
river_name = river.findtext('name')
for lake in document.iterfind('lake'):
for area in lake.iterfind('area'):
if lake_area < float(area.text):
lake_area=float(area.text)
lake_ctry= lake.attrib['country']
lake_name = lake.findtext('name')
for airport in document.iterfind('airport'):
for elv in airport.iterfind('elevation'):
        # Apparently there is an airport in the database with an elevation tag
        # and no text entry, so guard against None before converting to float.
if (elv.text is not None) and (airport_elv < float(elv.text)):
airport_elv=float(elv.text)
airport_ctry= airport.attrib['country']
airport_name = airport.findtext('name')
data = [[lake_name,river_name,airport_name],[lake_ctry,river_ctry,airport_ctry],[lake_area,river_length,airport_elv]]
df = pd.DataFrame(data, columns = ['Largest Lake','Longest River','Highest Airport'],index=['Name','Location (Country Code)','Metric Value'])
df
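# (Added check, assuming `document` is still in scope) The "no ties" assumption
# above is cheap to verify: count how many rivers share the maximum length.
ties = sum(1 for river in document.iterfind('river')
           for length in river.iterfind('length')
           if float(length.text) == river_length)
print('rivers at the maximum length:', ties)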
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting a decaying oscillation
Step2: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
def modl(t, A, o, l, d):
    # Decaying oscillation: A * exp(-l*t) * cos(o*t) + d
    return A * np.exp(-l * t) * np.cos(o * t) + d
thetabest, thetacov = opt.curve_fit(modl, tdata, ydata, p0=np.array((6, 1, 1, 0)), sigma=dy, absolute_sigma=True)
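# (Added sketch, assuming the fit above has run and tdata/ydata/dy came from
# the graded data-import cell) 1-sigma parameter uncertainties are the square
# roots of the diagonal of the covariance matrix returned by curve_fit.
theta_err = np.sqrt(np.diag(thetacov))
for name, best, err in zip(('A', 'omega', 'lambda', 'delta'), thetabest, theta_err):
    print('{} = {:.3f} +/- {:.3f}'.format(name, best, err))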
assert True # leave this to grade the data import and raw data plot
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It returned empty because there is no zoidberg in our list.
Step2: Next
Step3: Next
Step4: There is a problem with the above
|
<ASSISTANT_TASK:>
Python Code:
# import sqlite3 here
#open connection to database
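# (Added sketch -- the database filename, table, and column names below are
# illustrative assumptions, not the assignment's actual schema.)
import sqlite3
conn = sqlite3.connect('practices.db')
c = conn.cursor()
# Parameterized queries avoid quoting bugs and SQL injection, e.g.:
#   c.execute("SELECT * FROM practices WHERE last_name = ?", ('zoidberg',))
#   result = c.fetchall()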
# 1st challenge: Write an SQL query to search for the name: zoidberg
# Note: It will return 0 rows
# Add the zoidberg data to the database below. Remember to commit().
# Search for zoidberg again. This time, you should get the results below:
# Count number of practices in New York vs Texas
# 1st, get number in New York. You should get the values below
print("Number in NY: ", len(result))
# Now get Texas:
print("Number in TX: ", len(result))
# Find number of Johns. Remember, this uses the % symbol
print(len(result))
#This time, printing the 1st 6 results as well, to check.
print(result[:6])
# Now find all Johns in the state 'AL'
print(len(result))
print(result[:6])
# Finally, Johns in AL and city of Mobile.
print(len(result))
print(result[:6])
#Always close the database!
conn.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 5
sample_id = 29
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
return (x - x.min()) / (x.max()-x.min())
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
from sklearn import preprocessing
rrange = np.arange(10)
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
lb = preprocessing.LabelBinarizer()
lb.fit(rrange)
return lb.transform(x)
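# (Added example) The binarizer maps class indices to indicator vectors, e.g.
# one_hot_encode([0, 2]) -> [[1,0,0,0,0,0,0,0,0,0],
#                            [0,0,1,0,0,0,0,0,0,0]]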
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
return tf.placeholder(tf.float32, (None, image_shape[0], image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
return tf.placeholder(tf.float32, (None, n_classes), name="y")
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
return tf.placeholder(tf.float32, name="keep_prob")
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
input_feature_map = x_tensor.get_shape()[3].value
weight = tf.Variable(
tf.truncated_normal(
[conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[3], conv_num_outputs],
mean=0,
stddev=0.01
),
name="conv2d_weight"
)
#bias = tf.Variable(tf.truncated_normal([conv_num_outputs], dtype=tf.float32))
bias = tf.Variable(tf.zeros([conv_num_outputs], dtype=tf.float32), name="conv2d_bias")
cstrides = [1, conv_strides[0], conv_strides[1], 1]
pstrides = [1, pool_strides[0], pool_strides[1], 1]
output = tf.nn.conv2d(
x_tensor,
weight,
strides=cstrides,
padding="SAME"
)
output = tf.nn.bias_add(output, bias)
output = tf.nn.relu(output)
output = tf.nn.max_pool(
output,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=pstrides,
padding="SAME"
)
return output
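    # (Added note) Shape bookkeeping with SAME padding: a (1, 1) conv stride
    # keeps the spatial dims, so 32x32xC_in -> 32x32xconv_num_outputs, and a
    # (2, 2) max-pool ksize/stride then halves them -> 16x16xconv_num_outputs.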
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
tensor_dims = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, [-1, tensor_dims[1]*tensor_dims[2]*tensor_dims[3]])
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0.0, stddev=0.03), name="weight_fc")
bias = tf.Variable(tf.zeros([num_outputs]), name="weight_bias")
output = tf.add(tf.matmul(x_tensor, weights), bias)
output = tf.nn.relu(output)
return output
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0, stddev=0.01), name="output_weight")
bias = tf.Variable(tf.zeros([num_outputs]), name="output_bias")
output = tf.add(tf.matmul(x_tensor, weights), bias)
return output
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
num_outputs = 10
network = conv2d_maxpool(x, 16, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = conv2d_maxpool(network, 32, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = conv2d_maxpool(network, 64, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = flatten(network)
network = fully_conn(network, 512)
network = tf.nn.dropout(network, keep_prob=keep_prob)
network = fully_conn(network, 1024)
network = output(network, num_outputs)
return network
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0
})
accuracies = np.zeros(5)
for i in [0, 1000, 2000, 3000, 4000]:
valid_acc = session.run(accuracy, feed_dict={
x: valid_features[i:i+1000],
y: valid_labels[i:i+1000],
keep_prob: 1.0
})
index = int(i/1000)
accuracies[index] = valid_acc
    # Avoid shadowing the `accuracy` tensor argument with the computed mean.
    mean_valid_acc = np.mean(accuracies)
    print("Loss: {loss} - Validation Accuracy: {valid_acc}".format(loss=loss, valid_acc=mean_valid_acc))
# TODO: Tune Parameters
epochs = 50
batch_size = 1024
keep_probability = .5
"""DON'T MODIFY ANYTHING IN THIS CELL"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The current objective function is the biomass reaction (bio1)--to change this, just set the objective to another reaction. Let's change the objective to CO2 exchange, then change it back to biomass production
Step2: Similarly, you can manipulate the environmental conditions as in cobrapy. The base model for this example ensemble is from ModelSEED, so exchange reactions are specified with the 'EX_' prefix, followed by the metabolite id. Let's take a look at the exchange reactions that are currently open
Step3: That's a lot of open exchange reactions! Let's make them a bit more realistic for an in vitro situation. We'll load a file specifying the base composition of the media in biolog single C/N growth conditions, and set the media conditions to reflect that. The base composition is missing a carbon source, so we'll enable uptake of glucose. In the medium dictionary, the numbers for each exchange reaction are uptake rates. If you inspect the actual exchange reactions, you will find that the equivalent to an uptake rate of 1000 units is a lower bound of -1000, because our exchange reactions are defined with the boundary metabolite as the reactant, e.g. cpd00182_e -->.
Step4: With the medium set, we can now simulate growth in these conditions
Step5: You may want to perform simulations with only a subset of ensemble members. There are two options for this; either identifying the desired members for simulation with the specific_models parameter, or passing a number of random members to perform simulations with the num_models parameter.
|
<ASSISTANT_TASK:>
Python Code:
import medusa
from medusa.test import create_test_ensemble
ensemble = create_test_ensemble("Staphylococcus aureus")
ensemble.base_model.objective.expression
ensemble.base_model.objective = 'EX_cpd00011_e'
print(ensemble.base_model.objective.expression)
ensemble.base_model.objective = 'bio1'
print(ensemble.base_model.objective.expression)
medium = ensemble.base_model.medium
medium
import pandas as pd
biolog_base = pd.read_csv("../medusa/test/data/biolog_base_composition.csv", sep=",")
biolog_base
# convert the biolog base to a dictionary, which we can use to set ensemble.base_model.medium directly.
biolog_base = {'EX_'+component:1000 for component in biolog_base['ID']}
# add glucose uptake to the new medium dictionary
biolog_base['EX_cpd00182_e'] = 10
# Set the medium on the base model
ensemble.base_model.medium = biolog_base
ensemble.base_model.medium
from medusa.flux_analysis import flux_balance
fluxes = flux_balance.optimize_ensemble(ensemble,return_flux='bio1')
# get fluxes for the first 10 members
fluxes.head(10)
import matplotlib.pylab as plt
fig, ax = plt.subplots()
plt.hist(fluxes['bio1'])
ax.set_ylabel('# ensemble members')
ax.set_xlabel('Flux through biomass reaction')
plt.show()
# perform FBA with a random set of 10 members:
subflux = flux_balance.optimize_ensemble(ensemble, num_models = 10, return_flux = "bio1")
subflux
submembers = [member.id for member in ensemble.members[0:10]]
print(submembers)
subflux = flux_balance.optimize_ensemble(ensemble, specific_models = submembers, return_flux = "bio1")
subflux
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <h2> Create ML dataset using Dataflow </h2>
Step3: Let's pull a sample of our data into a dataframe to see what it looks like.
Step4: Let's check our files to make sure everything went as expected
Step6: <h2> Create vocabularies using Dataflow </h2>
Step9: Also get vocab counts from the length of the vocabularies
Step10: Let's check our files to make sure everything went as expected
|
<ASSISTANT_TASK:>
Python Code:
# Import helpful libraries and setup our project, bucket, and region
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "1.13"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
query_hybrid_dataset = """
WITH CTE_site_history AS (
SELECT
fullVisitorId as visitor_id,
(SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
(SELECT MAX(IF(index = 7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
(SELECT MAX(IF(index = 6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
(SELECT MAX(IF(index = 2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
SPLIT(RPAD((SELECT MAX(IF(index = 4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') AS year_month_array,
LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) AS nextCustomDimensions
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND
fullVisitorId IS NOT NULL
AND
hits.time != 0
AND
hits.time IS NOT NULL
AND
(SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
),
CTE_training_dataset AS (
SELECT
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) AS next_content_id,
visitor_id,
content_id,
category,
REGEXP_REPLACE(title, r",", "") AS title,
REGEXP_EXTRACT(author_list, r"^[^,]+") AS author,
DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970, 1, 1), MONTH) AS months_since_epoch
FROM
CTE_site_history
WHERE
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL)
SELECT
CAST(next_content_id AS STRING) AS next_content_id,
CAST(training_dataset.visitor_id AS STRING) AS visitor_id,
CAST(training_dataset.content_id AS STRING) AS content_id,
CAST(IFNULL(category, 'None') AS STRING) AS category,
CONCAT("\\"", REPLACE(TRIM(CAST(IFNULL(title, 'None') AS STRING)), "\\"",""), "\\"") AS title,
CAST(IFNULL(author, 'None') AS STRING) AS author,
CAST(months_since_epoch AS STRING) AS months_since_epoch,
IFNULL(user_factors._0, 0.0) AS user_factor_0,
IFNULL(user_factors._1, 0.0) AS user_factor_1,
IFNULL(user_factors._2, 0.0) AS user_factor_2,
IFNULL(user_factors._3, 0.0) AS user_factor_3,
IFNULL(user_factors._4, 0.0) AS user_factor_4,
IFNULL(user_factors._5, 0.0) AS user_factor_5,
IFNULL(user_factors._6, 0.0) AS user_factor_6,
IFNULL(user_factors._7, 0.0) AS user_factor_7,
IFNULL(user_factors._8, 0.0) AS user_factor_8,
IFNULL(user_factors._9, 0.0) AS user_factor_9,
IFNULL(item_factors._0, 0.0) AS item_factor_0,
IFNULL(item_factors._1, 0.0) AS item_factor_1,
IFNULL(item_factors._2, 0.0) AS item_factor_2,
IFNULL(item_factors._3, 0.0) AS item_factor_3,
IFNULL(item_factors._4, 0.0) AS item_factor_4,
IFNULL(item_factors._5, 0.0) AS item_factor_5,
IFNULL(item_factors._6, 0.0) AS item_factor_6,
IFNULL(item_factors._7, 0.0) AS item_factor_7,
IFNULL(item_factors._8, 0.0) AS item_factor_8,
IFNULL(item_factors._9, 0.0) AS item_factor_9,
FARM_FINGERPRINT(CONCAT(CAST(visitor_id AS STRING), CAST(content_id AS STRING))) AS hash_id
FROM
CTE_training_dataset AS training_dataset
LEFT JOIN
`cloud-training-demos.GA360_test.user_factors` AS user_factors
ON CAST(training_dataset.visitor_id AS FLOAT64) = CAST(user_factors.user_id AS FLOAT64)
LEFT JOIN
`cloud-training-demos.GA360_test.item_factors` AS item_factors
ON CAST(training_dataset.content_id AS STRING) = CAST(item_factors.item_id AS STRING)
"""
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
df_hybrid_dataset = bq.query(query_hybrid_dataset + "LIMIT 100").to_dataframe()
df_hybrid_dataset.head()
df_hybrid_dataset.describe()
import apache_beam as beam
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = "next_content_id,visitor_id,content_id,category,title,author,months_since_epoch".split(",")
FACTOR_COLUMNS = ["user_factor_{}".format(i) for i in range(10)] + ["item_factor_{}".format(i) for i in range(10)]
# Write out rows for each input row for each column in rowdict
data = ",".join(["None" if k not in rowdict else (rowdict[k].encode("utf-8") if rowdict[k] is not None else "None") for k in CSV_COLUMNS])
data += ","
data += ",".join([str(rowdict[k]) if k in rowdict else "None" for k in FACTOR_COLUMNS])
yield ("{}".format(data))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-hybrid-recommendation-features" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc/features"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/hybrid_recommendation/preproc/features/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
query = query_hybrid_dataset
if in_test_mode:
query = query + " LIMIT 100"
for step in ["train", "eval"]:
if step == "train":
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hash_id, 10)) < 9".format(query)
else:
selquery = "SELECT * FROM ({}) WHERE ABS(MOD(hash_id, 10)) = 9".format(query)
(p
| "{}_read".format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| "{}_csv".format(step) >> beam.FlatMap(to_csv)
| "{}_out".format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{}.csv".format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
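# (Added note) The FARM_FINGERPRINT(visitor_id + content_id) hash above makes
# the split deterministic: ABS(MOD(hash_id, 10)) < 9 routes ~90% of rows to
# train and bucket 9 to eval, and a row always lands in the same set because
# the assignment depends only on its own fields, not on run order.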
%%bash
rm -rf features
mkdir features
!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/features/*.csv* features/
!head -3 features/*
query_vocabularies = """
SELECT
CAST((SELECT MAX(IF(index = index_value, value, NULL)) FROM UNNEST(hits.customDimensions)) AS STRING) AS grouped_by
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index = index_value, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
grouped_by
"""
import apache_beam as beam
import datetime, os
def to_txt(rowdict):
# Pull columns from BQ and create a line
# Write out rows for each input row for grouped by column in rowdict
return "{}".format(rowdict["grouped_by"].encode("utf-8"))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-hybrid-recommendation-vocab-lists" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc/vocabs"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/hybrid_recommendation/preproc/vocabs/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
def vocab_list(index, name):
query = query_vocabularies.replace("index_value", "{}".format(index))
(p
| "{}_read".format(name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "{}_txt".format(name) >> beam.Map(to_txt)
| "{}_out".format(name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{0}_vocab.txt".format(name))))
)
# Call vocab_list function for each
vocab_list(10, "content_id") # content_id
vocab_list(7, "category") # category
vocab_list(2, "author") # author
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
import apache_beam as beam
import datetime, os
def count_to_txt(rowdict):
# Pull columns from BQ and create a line
# Write out count
return "{}".format(rowdict["count_number"])
def mean_to_txt(rowdict):
# Pull columns from BQ and create a line
# Write out mean
return "{}".format(rowdict["mean_value"])
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = "preprocess-hybrid-recommendation-vocab-counts" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc/vocab_counts"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/hybrid_recommendation/preproc/vocab_counts/".format(BUCKET)
try:
subprocess.check_call("gsutil -m rm -r {}".format(OUTPUT_DIR).split())
except:
pass
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
p = beam.Pipeline(RUNNER, options = opts)
def vocab_count(index, column_name):
        query = """
SELECT
COUNT(*) AS count_number
FROM ({})
        """.format(query_vocabularies.replace("index_value", "{}".format(index)))
(p
| "{}_read".format(column_name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "{}_txt".format(column_name) >> beam.Map(count_to_txt)
| "{}_out".format(column_name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{0}_vocab_count.txt".format(column_name))))
)
def global_column_mean(column_name):
        query = """
SELECT
AVG(CAST({1} AS FLOAT64)) AS mean_value
FROM ({0})
        """.format(query_hybrid_dataset, column_name)
(p
| "{}_read".format(column_name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "{}_txt".format(column_name) >> beam.Map(mean_to_txt)
| "{}_out".format(column_name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, "{0}_mean.txt".format(column_name))))
)
# Call vocab_count function for each column we want the vocabulary count for
vocab_count(10, "content_id") # content_id
vocab_count(7, "category") # category
vocab_count(2, "author") # author
# Call global_column_mean function for each column we want the mean for
global_column_mean("months_since_epoch") # months_since_epoch
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
%%bash
rm -rf vocabs
mkdir vocabs
!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/vocabs/*.txt* vocabs/
!head -3 vocabs/*
%%bash
rm -rf vocab_counts
mkdir vocab_counts
!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/vocab_counts/*.txt* vocab_counts/
!head -3 vocab_counts/*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Setup the operators, Hamiltonian, and initial state
Step3: Calculate the emission flux
Step4: Visualize the emission flux
Step5: Calculate the correlators involved in two-photon interference
Step6: Interpolate these functions, in preparation for time delays
Step7: Calculate measured degrees of HOM cross-correlation
Step8: Visualize the two-photon interference visibilities
Step9: Versions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp2d
from qutip import *
# shared parameters
gamma = 1 # decay rate
tlist = np.linspace(0, 13, 300)
taulist = tlist
# parameters for TLS with exponential shape wavepacket (short pulse)
tp_e = 0.060 # Gaussian pulse parameter
Om_e = 19.40 # driving strength
t_offset_e = 0.405
pulse_shape_e = Om_e/2 * np.exp(-(tlist - t_offset_e) ** 2 /
(2 * tp_e ** 2))
# parameters for TLS with Gaussian shape wavepacket (long pulse)
tp_G = 2.000 # Gaussian pulse parameter
Om_G = 0.702 # driving strength
t_offset_G = 5
pulse_shape_G = Om_G/2 * np.exp(-(tlist - t_offset_G) ** 2 /
(2 * tp_G ** 2))
# initial state
psi0 = fock(2, 0) # ground state
# operators
sm = destroy(2) # atomic lowering operator
n = [sm.dag()*sm] # number operator
# Hamiltonian
H_I = sm + sm.dag()
H_e = [[H_I, pulse_shape_e]]
H_G = [[H_I, pulse_shape_G]]
# collapse operator that describes dissipation
c_ops = [np.sqrt(gamma) * sm] # represents spontaneous emission
n_e = mesolve(H_e, psi0, tlist, c_ops, n).expect[0]
n_G = mesolve(H_G, psi0, tlist, c_ops, n).expect[0]
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(tlist, n_e, 'r', label="exponential wavepacket")
ax.plot(tlist, n_G, 'b', label="Gaussian wavepacket")
ax.legend()
ax.set_xlim(0, 13)
ax.set_ylim(0, 1)
ax.set_xlabel('Time, $t$ [$1/\gamma$]')
ax.set_ylabel('Emission flux [$\gamma$]')
ax.set_title('TLS emission shapes');
# specify relevant operators to calculate the correlation
# <A(t+tau)B(t)>
a_op = sm.dag()
b_op = sm
# calculate two-time correlations
G1_t_tau_e = correlation_2op_2t(H_e, psi0, tlist, taulist, c_ops,
a_op, b_op, reverse=True)
G1_t_tau_e_r = correlation_2op_2t(H_e, psi0, tlist, taulist, c_ops,
a_op, b_op)
G1_t_tau_G = correlation_2op_2t(H_G, psi0, tlist, taulist, c_ops,
a_op, b_op, reverse=True)
G1_t_tau_G_r = correlation_2op_2t(H_G, psi0, tlist, taulist, c_ops,
a_op, b_op)
# g^(2)[0] values calculated for the sources in question in the
# notebook 'Pulse-wise second-order optical coherences of emission
# from a two-level system'
g20_e = 0.03
g20_G = 0.44
t_delay_list = np.linspace(-5, 0, 50)
TAULIST, TLIST = np.meshgrid(taulist, tlist)
c1_e = interp2d(taulist, tlist, np.real(G1_t_tau_e))
c1_e_f = lambda tau, t: c1_e(tau, t)
c2_e = interp2d(taulist, tlist, np.real(G1_t_tau_e_r))
c2_e_f = lambda tau, t: c2_e(tau, t)
c1_G = interp2d(taulist, tlist, np.real(G1_t_tau_G))
c1_G_f = lambda tau, t: c1_G(tau, t)
c2_G = interp2d(taulist, tlist, np.real(G1_t_tau_G_r))
c2_G_f = lambda tau, t: c2_G(tau, t)
# two delayed exponential wavepackets interfere
def g2HOM_exponential(t_delay):
corr_e = np.array(
[[c1_e_f(tau, t)[0] * c2_e_f(tau, t - t_delay)[0]
for tau in taulist]
for t in tlist]
)
return g20_e/2 + 1/2*abs(1 -
2 * np.trapz(np.trapz(corr_e, TLIST, axis=0), taulist)
)
g2HOM_e = parallel_map(g2HOM_exponential, t_delay_list)
# two delayed Gaussian wavepackets interfere
def g2HOM_Gaussian(t_delay):
corr_G = np.array(
[[c1_G_f(tau, t)[0] * c2_G_f(tau, t - t_delay)[0]
for tau in taulist]
for t in tlist]
)
return g20_G/2 + 1/2*abs(1 -
2 * np.trapz(np.trapz(corr_G, TLIST, axis=0), taulist)
)
g2HOM_G = parallel_map(g2HOM_Gaussian, t_delay_list)
# a delayed Gaussian wavepacket interferes with an exponential
# wavepacket
def g2HOM_Gaussian_exp(t_delay):
corr_Ge = np.array(
[[c1_e_f(tau, t)[0] * c2_G_f(tau, t - t_delay)[0]
for tau in taulist]
for t in tlist]
)
return (g20_e + g20_G)/4 + 1/2*abs(1 -
2 * np.trapz(np.trapz(corr_Ge, TLIST, axis=0), taulist)
)
g2HOM_Ge = parallel_map(g2HOM_Gaussian_exp, t_delay_list + 5.45)
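# (Added note) Each helper above evaluates the pulse-wise HOM cross-correlation
#   g2_HOM[0] = g2[0]/2 + (1/2) |1 - 2 * Int dt Int dtau G1(t,tau) G1(t - delay, tau)|,
# with the nested np.trapz calls computing the double integral; the mixed
# Gaussian/exponential case uses the average (g20_e + g20_G)/4 for the first term.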
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(np.concatenate([t_delay_list, -t_delay_list[-2::-1]]),
np.concatenate([g2HOM_e, g2HOM_e[-2::-1]]), 'r',
label="Two exponential")
ax.plot(np.concatenate([t_delay_list, -t_delay_list[-2::-1]]),
np.concatenate([g2HOM_G, g2HOM_G[-2::-1]]), 'b',
label="Two Gaussian")
ax.plot(np.concatenate([t_delay_list, -t_delay_list[-2::-1]]),
np.concatenate([g2HOM_Ge, g2HOM_Ge[-2::-1]]), 'purple',
label="One exponential &\n one Gaussian")
ax.legend(loc=3)
ax.set_xlim(-5, 5)
ax.axhline(y=0.5, color='k', linestyle='dashed')
ax.set_xlabel('Wavepacket delay [$1/\gamma$]')
ax.set_ylabel('$g^{(2)}_{HOM}[0]$')
ax.set_title('Two-photon interference visibilities for '+
'delayed wavepackets');
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load frames from CSV file. proxy_time is set by the client when sending the frame (using groundstation PC clock). server_time is set by the server when the frame is received (using the server clock).
Step2: To choose duplicated frames (some of them have errors), we assign points to each of the groundstations according as to how many frames they have received. We choose the duplicate frame instance from the station with more points.
Step3: DSLWP-B frames are TM data link frames. We classify them according to Spacecraft ID and virtual channel.
Step4: Spacecraft ID's 147 and 403 are used by DSLWP-B0 (435.4MHz) and DSLWP-B1 (436.4MHz). 146 and 402 are used by DSLWP-A0 and -A1.
Step5: We perform KISS stream recovery on each of the virtual channels. To discard some invalid packets, we check that the packet length in the Space Packet header matches the length of the packet.
Step6: Helper function to extract a telemetry channel with its timestamps. See usage examples below.
|
<ASSISTANT_TASK:>
Python Code:
TMPrimaryHeader = BitStruct('transfer_frame_version_number' / BitsInteger(2),
'spacecraft_id' / BitsInteger(10),
'virtual_channel_id' / BitsInteger(3),
'ocf_flag' / Flag,
'master_channel_frame_count' / BitsInteger(8),
'virtual_channel_frame_count' / BitsInteger(8),
'first_header_pointer' / BitsInteger(8))
SpacePacketPrimaryHeader = BitStruct('ccsds_version' / BitsInteger(3),
'packet_type' / BitsInteger(1),
'secondary_header_flag' / Flag,
'AP_ID' / BitsInteger(11),
'sequence_flags' / BitsInteger(2),
'packet_sequence_count_or_name' / BitsInteger(14),
'data_length' / BitsInteger(16))
class AffineAdapter(Adapter):
def __init__(self, c, a, *args, **kwargs):
self.c = c
self.a = a
return Adapter.__init__(self, *args, **kwargs)
def _encode(self, obj, context, path = None):
return int(round(obj * self.c + self.a))
def _decode(self, obj, context, path = None):
return (float(obj) - self.a)/ self.c
class LinearAdapter(AffineAdapter):
def __init__(self, c, *args, **kwargs):
return AffineAdapter.__init__(self, c, 0, *args, **kwargs)
Current = LinearAdapter(1/3.2, Int8ub)
Voltage = LinearAdapter(1/0.16, Int8ub)
class RSSIAdapter(Adapter):
def _encode(self, obj, context, path = None):
obj.rssi_asm = int(round((obj.rssi_asm + 174 - obj.gain_agc)*10))
obj.rssi_channel = int(round((obj.rssi_channel + 174 - obj.gain_agc)*10))
obj.rssi_7021 = int(round((obj.rssi_channel + 174 - obj.gain_agc)*2))
return obj
def _decode(self, obj, context, path = None):
obj.rssi_asm = -174 + obj.rssi_asm/10 + obj.gain_agc
obj.rssi_channel = -174 + obj.rssi_channel/10 + obj.gain_agc
obj.rssi_7021 = -174 + obj.rssi_7021 * 0.5 + obj.gain_agc
return obj
HKUV = RSSIAdapter(Struct(
'config' / Int8ub,
'flag_rx' / Int8ub,
'tx_gain' / Int8ub,
'tx_modulation' / Int8ub,
'flag_tx' / Int8ub,
'flag_7021' / Int8ub,
'n_cmd_buf' / Int8ub,
'n_cmd_dropped' / Int8ub,
'i_bus_rx' / Current,
'u_bus_rx' / Voltage,
'i_bus_tx' / Current,
'u_bus_tx' / Voltage,
't_pa' / Int8sb,
't_tx7021' / Int8sb,
'n_jt4_tx' / Int8ub,
'n_ham_rx' / Int8ub,
'n_422_tx' / Int8ub,
'n_422_rx' / Int8ub,
'n_422_rx_pkg_err' / Int8ub,
'n_422_rx_exe_err' / Int8ub,
'cmd_422_last_rx' / Int8ub,
'n_rf_tx' / Int8ub,
'n_rf_tx_dropped' / Int8ub,
'n_rf_rx' / Int8ub,
'n_rf_rx_pkg_err' / Int8ub,
'n_rf_rx_exe_err' / Int8ub,
'n_rf_rx_fec_err' / Int8ub,
'cmd_rf_last_rx' / Int8ub,
'rsvd0' / Int8ub,
'rsvd1' / Int8ub,
'byte_corr' / Int8sb,
'n_cmd' / Int8ub,
'fc_asm' / LinearAdapter(32768/3.1416, Int16sb),
'snr_asm' / LinearAdapter(256, Int16ub),
'rssi_asm' / Int16ub,
'rssi_channel' / Int16ub,
'rssi_7021' / Int8ub,
'gain_agc' / Mapping(Int8ub, {43.0: 0, 33.0: 1, 26.0: 2, 29.0: 4, 19.0: 5, 12.0: 6, 17.0: 8, 7.0: 9, 0.0: 10}),
'rsvd15' / Int16sb,
'seconds_since_epoch' / Int32ub,
'cam_mode' / Int8ub,
'cam_task_flag' / Int8ub,
'cam_err_flag' / Int8ub,
'cam_pic_len' / Int24ub,
'cam_memory_id' / Int8ub,
'jt4_task_flag' / Int8ub,
'n_reset' / Int8ub,
'flag_reset' / Int8ub,
'flag_sys' / Int8ub,
'n_dma_overflow' / Int8ub,
'runtime' / LinearAdapter(1/0.004, Int32ub),
'message' / Bytes(8)
))
StQ = LinearAdapter(2147483647, Int32sb)
FW = LinearAdapter(2, Int16sb)
Gyro = LinearAdapter(2147483647.0/400.0, Int32sb)
class QuadraticAdapter(Adapter):
def _encode(self, obj, context, path = None):
return np.sign(obj) * np.sqrt(np.abs(obj))
def _decode(self, obj, context, path = None):
return obj * np.abs(obj)
class WODTempAdapter(Adapter):
def _encode(self, obj, context, path = None):
raise Exception('Not implemented')
def _decode(self, obj, context, path = None):
return 1222415/(298.15*np.log(0.0244*obj/(25-0.0122*obj))+4100)-273.1
WODTemp = WODTempAdapter(Int16sb)
class WODTempThrustAdapter(Adapter):
def _encode(self, obj, context, path = None):
raise Exception('Not implemented')
def _decode(self, obj, context, path = None):
return -292525.18393*2/(-5289.94338+np.sqrt(5289.94338*5289.94338+4*292525.18393*(-4.77701-np.log(24.4*obj/(5-0.00244*obj)))))-273.15
WODTempThrust = WODTempThrustAdapter(Int16sb)
HKWOD = Struct(
'seconds_since_epoch' / Int32ub,
'n_cmd_exe' / Int8ub,
'n_cmd_delay' / Int8ub,
'this_wdt_timeout_count' / Int8ub,
'that_wdt_timeout_count' / Int8ub,
'sta_reset_count' / Int8ub,
'stb_reset_count' / Int8ub,
'ss_reset_count' / Int8ub,
'is_reset_count' / Int8ub,
'pl_task_err_flag' / Int8ub,
'hsd_task_err_flag' / Int8ub,
'tc_wdt_timeout_period' / LinearAdapter(12.0, Int8ub),
'v_bus' / AffineAdapter(1/(0.00244*6.3894), 0.005/(0.00244*6.3894), Int16sb),
'v_battery' / AffineAdapter(1/(0.00244*6.3617), -0.0318/(0.00244*6.3617), Int16sb),
'i_solar_panel' / AffineAdapter(1/(0.00244*0.7171), -0.0768/(0.00244*0.7171), Int16sb),
'i_load' / AffineAdapter(1/(0.00244*1.1442), 0.5254/(0.00244*1.1442), Int16sb),
'i_bus' / AffineAdapter(1/(0.00244*0.8814), 9.4347/(0.00244*0.8814), Int16sb),
'sw_flag' / Int8ub[4],
'sta_q' / StQ[4],
'sta_flag' / Int8ub,
'stb_q' / StQ[4],
'stb_flag' / Int8ub,
'stc_q' / StQ[4],
'stc_flag' / Int8ub,
'ss_x' / Int32ub,
'ss_y' / Int32ub,
'ss_flag' / Int8ub,
'fwx_rate' / FW,
'fwx_cmd' / FW,
'fwy_rate' / FW,
'fwy_cmd' / FW,
'fwz_rate' / FW,
'fwz_cmd' / FW,
'fws_rate' / FW,
'fws_cmd' / FW,
'gyro' / Gyro[3],
'tank_pressure' / AffineAdapter(1/(0.00244*0.6528), 0.0330/(0.00244*0.6528), Int16sb),
'aocs_period' / Int8ub,
'error_q' / QuadraticAdapter(LinearAdapter(32767, Int16sb))[3],
'error_w' / LinearAdapter(3.1415926/180, QuadraticAdapter(LinearAdapter(32767, Int16sb)))[3],
'usb_agc' / LinearAdapter(256.0/5.0, Int8ub),
'usb_rf_power' / LinearAdapter(256.0/5.0, Int8ub),
'usb_temp2' / LinearAdapter(256.0/5.0, Int8ub),
'usb_flag1' / Int8ub,
'usb_flag2' / Int8ub,
'usb_n_cmd' / Int8ub,
'usb_n_direct_cmd' / Int8ub,
'usb_n_inject_cmd' / Int8ub,
'usb_n_inject_cmd_err' / Int8ub,
'usb_n_sync' / Int8ub,
't_pl' / WODTemp,
't_hsd' / WODTemp,
't_obc' / WODTemp,
't_stb' / WODTemp,
't_ss' / WODTemp,
't_battery' / WODTemp,
't_thrustor1a' / WODTempThrust,
't_thrustor5a' / WODTempThrust,
't_value1' / WODTemp,
't_value5' / WODTemp,
't_tube1' / WODTemp,
't_tank' / WODTemp,
'heater_flag' / Int8ub[5],
'uva_flag_rx' / Int8ub,
'uva_tx_gain' / Int8ub,
'uva_tx_modulation' / Int8ub,
'uva_flag_tx' / Int8ub,
'uva_fc_asm' / LinearAdapter(32768/3.1416, Int16sb),
'uva_snr_asm' / LinearAdapter(256, Int16ub),
'uva_rssi_asm' / AffineAdapter(10, 10*(174-12), Int16ub),
'uva_rssi_7021' / AffineAdapter(2, 2*(140-12), Int8ub),
'uvb_flag_rx' / Int8ub,
'uvb_tx_gain' / Int8ub,
'uvb_tx_modulation' / Int8ub,
'uvb_flag_tx' / Int8ub,
'uvb_fc_asm' / LinearAdapter(32768/3.1416, Int16sb),
'uvb_snr_asm' / LinearAdapter(256, Int16ub),
'uvb_rssi_asm' / AffineAdapter(10, 10*(174-12), Int16ub),
'uvb_rssi_7021' / AffineAdapter(2, 2*(140-12), Int8ub),
)
CfgUV = Struct(
'dem_clk_divide' / Int8ub,
'tx_frequency_deviation' / Int8ub,
'tx_gain' / Int8ub,
'turbo_rate' / Int8ub,
'precoder_en' / Int8ub,
'preamble_len' / Int8ub,
'trailer_len' / Int8ub,
'rx_freq' / Int8ub,
'snr_threshold' / Float32b,
'gmsk_beacon_en' / Int8ub,
'jt4_beacon_en' / Int8ub,
'interval_beacon' / Int8ub,
'interval_vc0_timeout' / Int8ub,
'message_hk' / Bytes(8),
'callsign' / Bytes(5),
'open_camera_en' / Int8ub,
'repeater_en' / Int8ub,
'take_picture_at_power_on' / Int8ub,
'rx7021_r9' / Int32ub,
'crc' / Int32ub
)
CfgCam = Struct(
'size' / Int8ub,
'brightness' / Int8ub,
'contrast' / Int8ub,
'sharpness' / Int8ub,
'exposure' / Int8ub,
'compressing' / Int8ub,
'colour' / Int8ub,
'config' / Int8ub,
'id' / Int8ub
)
Packet = Struct(
'header' / SpacePacketPrimaryHeader,
'protocol' / Int8ub,
'payload' / Switch(lambda x: (x.header.AP_ID, x.protocol),\
{(0xE,0) : HKUV, (0xF,0) : HKUV, (0xE,1) : CfgCam, (0xF,1) : CfgCam,\
(0xAC,0) : HKWOD, (0xE,4) : CfgUV, (0xF,4) : CfgUV})
)
csv_frames = pd.read_csv('https://raw.githubusercontent.com/tammojan/dslwp-data/master/raw_frame.csv')
correct_frames = csv_frames['remark'] != 'replay'
csv_frames = csv_frames[correct_frames]
station = [s for s in csv_frames['proxy_nickname']]
proxy_time = np.array([np.datetime64(t) for t in csv_frames['proxy_receive_time']])
server_time = np.array([np.datetime64(t) for t in csv_frames['server_receive_time']])
frames = [bytes().fromhex(f) for f in csv_frames['raw_data']]
stations = set(station)
station_points = {s : station.count(s) for s in stations}
def get_channel(frame):
h = TMPrimaryHeader.parse(frame)
return (h.spacecraft_id, h.virtual_channel_id)
channels = set([get_channel(f) for f in frames])
frames_by_channel = {chan : sorted([(t,f,s) for t,f,s in zip(server_time, frames, station) if get_channel(f) == chan], key = itemgetter(0)) for chan in channels}
spacecrafts = {147 : 'DSLWP-B0 435.400MHz', 403 : 'DSLWP-B1 436.400MHz',\
146 : 'DSLWP-A0 435.425MHz', 402 : 'DSLWP-A1 436.425MHz'}
sorted([((spacecrafts[k[0]], k[1]),len(v)) for k,v in frames_by_channel.items()], key = itemgetter(1), reverse = True)
def join_kiss_stream(frames):
jumps = 0
repeated_distinct = 0
repeated_same = 0
continuation = 0
stream = list()
last_frame = frames[0]
frame_count = [TMPrimaryHeader.parse(f[1]).virtual_channel_frame_count for f in frames]
for j in range(1,len(frames)):
near_time = frames[j][0] - frames[j-1][0] < np.timedelta64(2*3600, 's')
if frame_count[j] == frame_count[j-1] and near_time:
# repeated frame
if station_points[frames[j][2]] > station_points[last_frame[2]]:
last_frame = frames[j]
if frames[j][1] != frames[j-1][1]:
repeated_distinct += 1
else:
repeated_same += 1
elif frame_count[j] == (frame_count[j-1] + 1) % 256 and near_time:
# continuation
stream.append((last_frame[0], last_frame[1][TMPrimaryHeader.sizeof():]))
last_frame = frames[j]
continuation += 1
else:
# broken KISS stream
stream.append((last_frame[0], last_frame[1][TMPrimaryHeader.sizeof():]))
last_frame = frames[j]
jumps += 1
stream.append((last_frame[0], last_frame[1][TMPrimaryHeader.sizeof():]))
print('jumps', jumps, 'repeated_distinct', repeated_distinct, 'repeated_same', repeated_same, 'continuation', continuation)
return stream
def parse_kiss(stream):
frames = list()
current = bytearray()
escape = False
for t,kiss in stream:
for b in kiss:
if b == 0xC0:
if len(current):
frames.append((t,bytes(current)))
current = bytearray()
elif b == 0xDB:
escape = True
elif escape and b == 0xDC:
current.append(0xC0)
escape = False
elif escape and b == 0xDD:
current.append(0xDB)
escape = False
else:
current.append(b)
escape = False
return frames
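# (Added note) KISS framing constants handled above: 0xC0 is FEND (the frame
# delimiter) and 0xDB is FESC (escape); inside a frame, FESC 0xDC decodes to a
# literal 0xC0 and FESC 0xDD to a literal 0xDB -- exactly the state machine
# parse_kiss() implements.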
def filter_by_data_length(packets):
return [p for p in packets if len(p[1]) >= SpacePacketPrimaryHeader.sizeof() and\
SpacePacketPrimaryHeader.parse(p[1]).data_length + 1 + SpacePacketPrimaryHeader.sizeof() == len(p[1])]
def parse_packets_channel(channel):
packets = parse_kiss(join_kiss_stream(frames_by_channel[channel]))
parsed_packets = list()
for p in filter_by_data_length(packets):
try:
parsed = Packet.parse(p[1])
except:
pass
else:
parsed_packets.append((p[0], parsed))
return parsed_packets
tlm_channels = [(403,0), (403,2), (147,0), (147,2), (402,0)] # (146,0) only appears in replayed frames + [(146,0)]
tlm_packets = {chan : parse_packets_channel(chan) for chan in tlm_channels}
def get_tlm_variable(chan, var):
x = [(p[0], getattr(p[1].payload, var)) for p in tlm_packets[chan]\
if getattr(p[1].payload, var, None) is not None]
return [a[0] for a in x], [a[1] for a in x]
plt.figure(figsize = (14, 6), facecolor = 'w')
for chan in tlm_channels:
t, x = get_tlm_variable(chan, 'runtime')
plt.plot(t, x, '.', label = f'{spacecrafts[chan[0]]} channel {chan[1]}')
plt.ylim([-500,18000])
plt.legend()
plt.title('HKUV frames by Spacecraft ID and Virtual Channel')
plt.xlabel('UTC time')
plt.ylabel('Payload runtime (s)');
plt.figure(figsize = (14, 6), facecolor = 'w')
for chan in tlm_channels:
t, x = get_tlm_variable(chan, 'tx_modulation')
plt.plot(t, x, '.', label = f'{spacecrafts[chan[0]]} channel {chan[1]}')
plt.legend()
plt.title('HKUV TX modulation')
plt.ylabel('tx_modulation')
plt.xlabel('UTC time');
plt.figure(figsize = (14, 6), facecolor = 'w')
for chan in tlm_channels:
t, x = get_tlm_variable(chan, 'tx_modulation')
plt.plot(t, x, '.', label = f'{spacecrafts[chan[0]]} channel {chan[1]}')
plt.legend()
plt.xlim([np.datetime64('2018-10-15'), np.datetime64('2018-11-05')])
plt.title('HKUV TX modulation')
plt.ylabel('tx_modulation')
plt.xlabel('UTC time');
plt.figure(figsize = (14, 6), facecolor = 'w')
for chan in tlm_channels:
t, x = get_tlm_variable(chan, 'uva_tx_modulation')
plt.plot(t, x, '.', label = f'UVA {spacecrafts[chan[0]]} channel {chan[1]}', color = 'C0')
t, x = get_tlm_variable(chan, 'uvb_tx_modulation')
plt.plot(t, np.array(x) + 20, '.', label = f'UVB {spacecrafts[chan[0]]} channel {chan[1]}', color = 'C1')
plt.legend(['UVA', 'UVB'])
plt.xlim([np.datetime64('2018-10-15'), np.datetime64('2018-11-05')])
plt.title('HKWOD TX modulation')
plt.ylabel('tx_modulation')
plt.xlabel('UTC time');
plt.figure(figsize = (14, 6), facecolor = 'w')
for chan in tlm_channels:
t, x = get_tlm_variable(chan, 't_pa')
plt.plot(t, x, '.', color = 'C0')
t, x = get_tlm_variable(chan, 't_battery')
plt.plot(t, x, '.', color = 'C1')
plt.ylim([10,60])
plt.title('Temperature')
plt.ylabel('Temperature (ºC)')
plt.xlabel('UTC time')
plt.legend(['HKUV PA temperature', 'HKWOD Battery temperature']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stack examples
|
<ASSISTANT_TASK:>
Python Code:
class Stack():
def __init__(self):
self.items = []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items) - 1]
def isEmpty(self):
return self.items == []
def size(self):
return len(self.items)
# Testing client
s = Stack()
print(s.isEmpty())
s.push(4)
s.push('dog')
print(s.peek())
s.push(True)
print(s.size())
print(s.isEmpty())
s.push(8.4)
print(s.pop())
print(s.pop())
print(s.size())
class Queue():
def __init__(self):
self.items = []
def enqueue(self, item):
self.items.insert(0, item)
def dequeue(self):
return self.items.pop()
def isEmpty(self):
return self.items == []
def size(self):
return len(self.items)
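    # (Added design note) insert(0, item) makes enqueue O(n); for larger
    # queues, collections.deque gives O(1) at both ends, e.g.:
    #     from collections import deque
    #     q = deque()
    #     q.appendleft('hello')   # enqueue
    #     q.pop()                 # dequeue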
#simple testing
q = Queue()
q.enqueue('hello')
q.enqueue('dog')
q.enqueue(3)
q.dequeue()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read data created in the previous chapter.
Step2: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
Step3: Our input function for predictions is the same except we don't provide a label
Step4: Create feature columns for estimator
Step5: <h3> Linear Regression with tf.Estimator framework </h3>
Step6: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
Step7: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
Step8: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
Step11: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
# TODO: Create an appropriate input_fn to read the training data
def make_train_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
        # Possible completion of the TODO (hedged): features plus the fare label,
        # shuffled while training
        x = df,
        y = df[LABEL],
        batch_size = 128,
        num_epochs = num_epochs,
        shuffle = True,
        queue_capacity = 1000
)
# TODO: Create an appropriate input_fn to read the validation data
def make_eval_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
        # Possible completion (hedged): a single pass over the data, no shuffling
        x = df,
        y = df[LABEL],
        batch_size = 128,
        shuffle = False,
        queue_capacity = 1000
)
# TODO: Create an appropriate prediction_input_fn
def make_prediction_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
        # Possible completion (hedged): no label (y = None) since we are predicting
        x = df,
        y = None,
        batch_size = 128,
        shuffle = False,
        queue_capacity = 1000
)
# TODO: Create feature columns
# Possible completion (hedged): one numeric column per input feature
featcols = [tf.feature_column.numeric_column(k) for k in FEATURES]
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
# TODO: Train a linear regression model
# Possible completion of the TODO (a hedged sketch): a LinearRegressor over the
# numeric feature columns, trained for 10 epochs
model = tf.estimator.LinearRegressor(feature_columns = featcols, model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10))
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
# TODO: Predict from the estimator model we trained using test dataset
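# Possible completion of the TODO above (hedged sketch): predict() returns a
# generator of dicts; print the first few predicted fares.
predictions = model.predict(input_fn = make_prediction_input_fn(df_test))
for _ in range(5):
  print(next(predictions))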
# TODO: Copy your LinearRegressor estimator and replace with DNNRegressor. Remember to add a list of hidden units i.e. [32, 8, 2]
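# Possible completion of the TODO above (hedged sketch): same pipeline with a
# small DNN; the hidden_units follow the hint in the comment above.
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
                                  feature_columns = featcols, model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10))
print_rmse(model, df_valid)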
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
  """phase: 1 = train 2 = valid"""
  base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
    AND passenger_count > 0
  """
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase)
else:
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, df)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: a=pd.pivot_table(df,index=["article_uni"],values=["article_rating"],aggfunc=[len,np.mean], columns='year')
Step2: b=df[df.article_pub_date>=data_first_ranking].groupby(['article_uni', 'year']).article_rating.agg(lambda x
Step3: column_name = '{year}_{country}_{header}_gapminder_grid'.format(
Step4: browser = webdriver.Edge(executable_path="..\MicrosoftWebDriver.exe")
|
<ASSISTANT_TASK:>
Python Code:
df_ranking=pd.read_csv('article_uni.csv', index_col=0)
print(df_ranking.shape)
df_ranking.head()
df.article_uni.replace('The London School of Economics and Political Science (United-Kingdom)',
'London School of Economics and Political Science', inplace=True)
from sklearn.preprocessing import MinMaxScaler, StandardScaler
scaler=StandardScaler()
uni_cluster_1=df[df.article_rating==df.article_rating.max()].article_uni.unique()
uni_cluster_2=list(set(df.article_uni.unique())-set(uni_cluster_1))
(df[df.article_uni.isin(uni_cluster_1)].article_rating.values*2/10).max()
# .loc avoids pandas' chained-assignment warning
df.loc[df.article_uni.isin(uni_cluster_1), 'article_rating'] = df[df.article_uni.isin(uni_cluster_1)].article_rating.values*2/10 #scaler.fit_transform(df[df.article_uni.isin(uni_cluster_1)].article_rating.values)
df.loc[df.article_uni.isin(uni_cluster_2), 'article_rating'] = df[df.article_uni.isin(uni_cluster_2)].article_rating.values*2.5/10 #scaler.fit_transform(df[df.article_uni.isin(uni_cluster_2)].article_rating.values)
df.article_rating.hist()
df_ranking.sort_values(['article_uni'],inplace=True)
df_ranking.head()
b=df[df.article_pub_date>=data_first_ranking].groupby(['article_uni', 'year']).article_rating.agg({'article_rating_mean':'mean',
'article_rating_count':'count',
'article_rating_moda':lambda x:x.value_counts().index[0]}).reset_index()
b['ranking']=np.zeros((1, b.shape[0]))[0]
b.head()
for name in b.article_uni.unique():
for year in b[b.article_uni==name].year:
#print(year)
        b.loc[(b.article_uni==name)&(b.year==year), 'ranking'] = df_ranking[(df_ranking.article_uni==name)][str(year)].values[0]
b=b[(~b.ranking.isnull())]
for year in np.sort(b.year.unique()):
print(year, round(np.corrcoef(b[(b.year==year)].article_rating_mean, b[(b.year==year)].ranking)[1,0],2))
for year in np.sort(b.year.unique()):
print(year, round(np.corrcoef(b[(b.year==year)].article_rating_moda, b[(b.year==year)].ranking)[1,0],2))
df_all=pd.merge(b, df_ranking[['article_uni','country']], on=['article_uni'])
df_all.head()
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
from plotly.tools import FigureFactory as FF
import pandas as pd
import time
years_from_col = set(df_all['year'])
years_ints = sorted(list(years_from_col))
years = [str(year) for year in years_ints]
# make list of continents
countries = []
for country in df_all['country']:
if country not in countries:
countries.append(country)
columns = []
# make grid
for year in years:
for country in countries:
df_by_year = df_all[df_all['year'] == int(year)]
df_by_year_and_cont = df_by_year[df_by_year['country'] == country]
for col_name in df_by_year_and_cont:
#print(col_name)
# each column name is unique
column_name = '{year}_{country}_{header}_gapminder_grid'.format(
year=year, country=country, header=col_name
)
a_column = Column(list(df_by_year_and_cont[col_name]), column_name)
columns.append(a_column)
# upload grid
grid = Grid(columns)
url = py.grid_ops.upload(grid, 'gapminder_grid'+str(time.time()), auto_open=False)
url
figure = {
'data': [],
'layout': {},
'frames': [],
'config': {'scrollzoom': True}
}
# fill in most of layout
figure['layout']['yaxis'] = {'range': [-50, 200], 'title': 'Ranking un THE', 'gridcolor': '#FFFFFF'}
figure['layout']['xaxis'] = {'range': [-0.1, 1.1], 'title': 'Ranking mean', 'gridcolor': '#FFFFFF'}
figure['layout']['hovermode'] = 'closest'
figure['layout']['plot_bgcolor'] = 'rgb(223, 232, 243)'
figure['layout']['sliders'] = {
'active': 0,
'yanchor': 'top',
'xanchor': 'left',
'currentvalue': {
'font': {'size': 20},
'prefix': 'text-before-value-on-display',
'visible': True,
'xanchor': 'right'
},
'transition': {'duration': 1000, 'easing': 'cubic-in-out'},
'pad': {'b': 10, 't': 50},
'len': 0.9,
'x': 0.1,
'y': 0,
    'steps': [...] # placeholder; the real steps are built below in sliders_dict
}
# Reference template (from the plotly gapminder tutorial) for a single slider step:
{
'method': 'animate',
'label': 'label-for-frame',
'value': 'value-for-frame(defaults to label)',
'args': [{'frame': {'duration': 300, 'redraw': False},
'mode': 'immediate'}
],
}
sliders_dict = {
'active': 0,
'yanchor': 'top',
'xanchor': 'left',
'currentvalue': {
'font': {'size': 20},
'prefix': 'Year:',
'visible': True,
'xanchor': 'right'
},
'transition': {'duration': 300, 'easing': 'cubic-in-out'},
'pad': {'b': 10, 't': 50},
'len': 0.9,
'x': 0.1,
'y': 0,
'steps': []
}
figure['layout']['updatemenus'] = [
{
'buttons': [
{
'args': [None, {'frame': {'duration': 500, 'redraw': False},
'fromcurrent': True, 'transition': {'duration': 300, 'easing': 'quadratic-in-out'}}],
'label': 'Play',
'method': 'animate'
},
{
'args': [[None], {'frame': {'duration': 0, 'redraw': False}, 'mode': 'immediate',
'transition': {'duration': 0}}],
'label': 'Pause',
'method': 'animate'
}
],
'direction': 'left',
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons',
'x': 0.1,
'xanchor': 'right',
'y': 0,
'yanchor': 'top'
}
]
custom_colors = {
'UK': 'rgb(171, 99, 250)',
'USA': 'rgb(230, 99, 250)',
'Canada': 'rgb(99, 110, 250)',
}
df_all.country.unique()
col_name_template = '{year}_{country}_{header}_gapminder_grid'
year = df_all.year.min()
for country in countries:
data_dict = {
'xsrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='article_rating_mean'
)),
'ysrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='ranking'
)),
'mode': 'markers',
'textsrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='article_uni'
)),
'marker': {
'sizemode': 'area',
'sizeref': 0.05,
'sizesrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='article_rating_count'
)),
'color': custom_colors[country]
},
'name': country
}
figure['data'].append(data_dict)
for year in years:
frame = {'data': [], 'name': str(year)}
for country in countries:
data_dict = {
'xsrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='article_rating_mean'
)),
'ysrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='ranking'
)),
'mode': 'markers',
'textsrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='article_uni',
)),
'marker': {
'sizemode': 'area',
'sizeref': 0.05,
'sizesrc': grid.get_column_reference(col_name_template.format(
year=year, country=country, header='article_rating_count'
)),
'color': custom_colors[country]
},
'name': country
}
frame['data'].append(data_dict)
figure['frames'].append(frame)
slider_step = {'args': [
[year],
{'frame': {'duration': 300, 'redraw': False},
'mode': 'immediate',
'transition': {'duration': 300}}
],
'label': year,
'method': 'animate'}
sliders_dict['steps'].append(slider_step)
figure['layout']['sliders'] = [sliders_dict]
py.icreate_animations(figure, 'gapminder_example'+str(time.time()))
import seaborn as sns
df_all.head()
for i in np.sort(df_all.year.unique()):
plt.scatter(df_all[df_all.year==i].article_rating_mean,
df_all[df_all.year==i].ranking)
plt.title('Corr between ranking in THE and ranking review in {:0.0f}'.format(i))
plt.show()
df_all.article_rating_count
for i in np.sort(df_all.year.unique()):
plt.scatter(df_all[df_all.year==i].article_rating_count,
df_all[df_all.year==i].ranking)
plt.title('Corr between ranking in THE and ranking review count in {:0.0f}'.format(i))
plt.show()
plt.hist(df_all.article_rating_count)
import selenium
from selenium import webdriver
browser = webdriver.Chrome(executable_path="..\\chromedriver.exe")
url = "https://www.niche.com/colleges/stanford-university/reviews/"
browser.get(url)
browser.find_element_by_css_selector('.icon-arrowright-thin--pagination').click()
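# A hedged sketch of how the pagination arrow could be used to walk through a
# few review pages (the selector is taken from the single click above; the
# site's actual layout and page count are assumptions).
pages = [browser.page_source]
for _ in range(4):
    try:
        browser.find_element_by_css_selector('.icon-arrowright-thin--pagination').click()
        pages.append(browser.page_source)
    except Exception:
        break # no further pages (or selector not found)
print(len(pages))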
import math as ma
ma.sqrt(2)
browser.page_source
element.get_attribute('innerHTML')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function tokenize receives a string s as argument and returns a list of tokens.
Step2: Implementing the Recursive Descend Parser
Step3: The function parseExpr(TL) takes a list of tokens TL and tries to parse an expression according to the following
Step4: The function parseProduct(TL) takes a list of tokens TL and tries to parse a product according to the following
Step5: The function parseFactor implements the following grammar rules
Step6: Testing
|
<ASSISTANT_TASK:>
Python Code:
import re
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
# print(tokenList)
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ number ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('12 * 13 + 14 * 4 / 6 - 7')
def parse(s):
TL = tokenize(s)
result, Rest = parseExpr(TL)
assert Rest == [], f'Parse Error: could not parse {TL}, Rest = {Rest}'
return result
def parseExpr(TL):
result, Rest = parseProduct(TL)
while len(Rest) > 1 and Rest[0] in {'+', '-'}:
operator = Rest[0]
arg, Rest = parseProduct(Rest[1:])
if operator == '+':
result += arg
else: # operator == '-':
result -= arg
return result, Rest
def parseProduct(TL):
result, Rest = parseFactor(TL)
while len(Rest) > 1 and Rest[0] in {'*', '/'}:
operator = Rest[0]
arg, Rest = parseFactor(Rest[1:])
if operator == '*':
result *= arg
else: # operator == '/':
result /= arg
return result, Rest
def parseFactor(TL):
if TL[0] == '(':
expr, Rest = parseExpr(TL[1:])
        assert Rest[0] == ')', f"ERROR: ')' expected, got {Rest[0]}"
return expr, Rest[1:]
else:
return float(TL[0]), TL[1:]
def test(s: str) -> float:
r1 = parse(s)
r2 = eval(s)
assert r1 == r2
return r1
parse('12 * 13 + 14 * 4 / 6 - 7')
test('11+22*(33-44)/(5-10*5/(4-3))')
test('0*11+22*(33-44)/(5-10*5/(4-3))')
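# A couple of extra checks (not part of the original): parentheses override the
# usual precedence, and the parser agrees with Python's eval either way.
test('(1 + 2) * (3 + 4)')
test('2 * (3 + 4) / 7')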
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Speed
Step2: Views of the data (are free)
Step3: Exercise
Step4: Broadcasting is a way of looping on arrays which have "compatible" but unequal sizes.
Step5: Arrays are compatible as long as each of their dimensions (shape) is either equal to the other or 1.
Step6: In multiple dimensions, the rule applies but, perhaps, is less immediately intuitive
Step7: Note that this also works for
Step8: But not for
Step9: Vector Operations
Step10: Exercise
Step11: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
## This is a list of everything in the module
np.__all__
an_array = np.array([0,1,2,3,4,5,6])
print an_array
print
print type(an_array)
print
help(an_array)
A = np.zeros((4,4))
print A
print
print A.shape
print
print A.diagonal()
print
A[0,0] = 2.0
print A
np.fill_diagonal(A, 1.0)
print A
# NOTE: since NumPy 1.9 diagonal() returns a *read-only* view, so the write
# below raises an error on modern NumPy; np.fill_diagonal (used above) or
# direct indexing is the writable alternative.
B = A.diagonal()
B[0] = 2.0
for i in range(0,A.shape[0]):
A[i,i] = 1.0
print A
print
A[:,2] = 2.0
print A
print
A[2,:] = 4.0
print A
print
print A.T
print
A[...] = 0.0
print A
print
for i in range(0,A.shape[0]):
A[i,:] = float(i)
print A
print
for i in range(0,A.shape[0]):
A[i,:] = i
print A
print
print A[::2,::2]
print
print A[::-1,::-1]
%%timeit
B = np.zeros((1000,1000))
for i in range(0,1000):
for j in range(0,1000):
B[i,j] = 2.0
%%timeit
B = np.zeros((1000,1000))
B[:,:] = 2.0
%%timeit
B = np.zeros((1000,1000))
B[...] = 2.0
print A.reshape((2,8))
print
print A.reshape((-1))
print A.ravel()
print
print A.reshape((1,-1))
print
%%timeit
A.reshape((1,-1))
%%timeit
elements = A.shape[0]*A.shape[1]
B = np.empty(elements)
B[...] = A[:,:].ravel()
%%timeit
elements = A.shape[0]*A.shape[1]
B = np.empty(elements)
for i in range(0,A.shape[0]):
for j in range(0,A.shape[1]):
        B[i*A.shape[1] + j] = A[i,j] # row-major index, matching ravel()
AA = np.zeros((100,100))
AA[10,11] = 1.0
AA[99,1] = 2.0
cond = np.where(AA >= 1.0)
print cond
print AA[cond]
print AA[ AA >= 1]
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])
print a * b
b = np.array([2.0])
print a * b
print a * 2.0
print a.shape
print b.shape
print (a*b).shape
print (a+b).shape
aa = a.reshape(1,3)
bb = b.reshape(1,1)
print aa.shape
print bb.shape
print (aa*bb).shape
print (aa+bb).shape
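# A further illustration (a sketch) of the compatibility rule: a (3,1) array
# against a (1,3) array broadcasts to (3,3), giving every pairwise sum.
outer = a.reshape(3,1) + a.reshape(1,3)
print outer
print outer.shape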
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([[1.0,2.0,3.0]])
print a + b
print
print a.shape
print b.shape
print (a+b).shape
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([1.0,2.0,3.0])
print a + b
print
print a.shape
print b.shape
print (a+b).shape
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([[1.0],[2.0],[3.0]])
print a.shape
print b.shape
print (a+b).shape
X = np.arange(0.0, 2.0*np.pi, 0.0001)
print X[0:100]
import math
math.sin(X)
np.sin(X)
S = np.sin(X)
C = np.cos(X)
S2 = S**2 + C**2
print S2
print S2 - 1.0
test = np.isclose(S2,1.0)
print test
print np.where(test == False)
print np.where(S2 == 0.0)
X = np.linspace(0.0, 2.0*np.pi, 10000000)
print X.shape
# ...
%%timeit
S = np.sin(X)
%%timeit
S = np.empty_like(X)
for i, x in enumerate(X):
S[i] = math.sin(x)
X = np.linspace(0.0, 2.0*np.pi, 10000000)
Xj = X + 1.0j
print Xj.shape, Xj.dtype
%%timeit
Sj = np.sin(Xj)
import cmath
%%timeit
Sj = np.empty_like(Xj)
for i, x in enumerate(Xj):
Sj[i] = cmath.sin(x)
# Test the results here
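# One way to complete the exercise (an assumption about what "test the results"
# means): recompute both versions and confirm they agree element-wise.
Sj_vec = np.sin(Xj)
Sj_loop = np.empty_like(Xj)
for i, x in enumerate(Xj):
    Sj_loop[i] = cmath.sin(x)
print np.allclose(Sj_vec, Sj_loop)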
A = np.array(([1.0,1.0,1.0,1.0],[2.0,2.0,2.0,2.0]))
B = np.array(([3.0,3.0,3.0,3.0],[4.0,4.0,4.0,4.0]))
C = np.array(([5.0,5.0,5.0,5.0],[6.0,6.0,6.0,6.0]))
R = np.concatenate((A,B,C))
print R
print
R = np.concatenate((A,B,C), axis=1)
print R
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear regression in 1d <a class="anchor" id="linreg-1d"></a>
Step2: Linear regression for boston housing <a class="anchor" id="linreg-boston"></a>
Step3: Ridge regression <a class="anchor" id="ridge"></a>
|
<ASSISTANT_TASK:>
Python Code:
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
import seaborn as sns
sns.set(style="ticks", color_codes=True)
import pandas as pd
pd.set_option("precision", 2) # 2 decimal places
pd.set_option("display.max_rows", 20)
pd.set_option("display.max_columns", 30)
pd.set_option("display.width", 100) # wide windows
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler
import sklearn.metrics
from sklearn.metrics import mean_squared_error as mse
def make_1dregression_data(n=21):
np.random.seed(0)
xtrain = np.linspace(0.0, 20, n)
xtest = np.arange(0.0, 20, 0.1)
sigma2 = 4
w = np.array([-1.5, 1 / 9.0])
fun = lambda x: w[0] * x + w[1] * np.square(x)
ytrain = fun(xtrain) + np.random.normal(0, 1, xtrain.shape) * np.sqrt(sigma2)
ytest = fun(xtest) + np.random.normal(0, 1, xtest.shape) * np.sqrt(sigma2)
return xtrain, ytrain, xtest, ytest
xtrain, ytrain, xtest, ytest = make_1dregression_data(n=21)
# Rescaling data
scaler = MinMaxScaler(feature_range=(-1, 1))
Xtrain = scaler.fit_transform(xtrain.reshape(-1, 1))
Xtest = scaler.transform(xtest.reshape(-1, 1))
degs = np.arange(1, 21, 1)
ndegs = np.max(degs)
mse_train = np.empty(ndegs)
mse_test = np.empty(ndegs)
ytest_pred_stored = np.empty(ndegs, dtype=np.ndarray)
ytrain_pred_stored = np.empty(ndegs, dtype=np.ndarray)
for deg in degs:
model = LinearRegression()
poly_features = PolynomialFeatures(degree=deg, include_bias=False)
Xtrain_poly = poly_features.fit_transform(Xtrain)
model.fit(Xtrain_poly, ytrain)
ytrain_pred = model.predict(Xtrain_poly)
ytrain_pred_stored[deg - 1] = ytrain_pred
Xtest_poly = poly_features.transform(Xtest)
ytest_pred = model.predict(Xtest_poly)
mse_train[deg - 1] = mse(ytrain_pred, ytrain)
mse_test[deg - 1] = mse(ytest_pred, ytest)
ytest_pred_stored[deg - 1] = ytest_pred
# Plot MSE vs degree
fig, ax = plt.subplots()
mask = degs <= 15
ax.plot(degs[mask], mse_test[mask], color="r", marker="x", label="test")
ax.plot(degs[mask], mse_train[mask], color="b", marker="s", label="train")
ax.legend(loc="upper right", shadow=True)
plt.xlabel("degree")
plt.ylabel("mse")
# save_fig('polyfitVsDegree.pdf')
plt.show()
# Plot fitted functions
chosen_degs = [1, 2, 14, 20]
fig, axes = plt.subplots(2, 2, figsize=(10, 7))
axes = axes.reshape(-1)
for i, deg in enumerate(chosen_degs):
# fig, ax = plt.subplots()
ax = axes[i]
ax.scatter(xtrain, ytrain)
ax.plot(xtest, ytest_pred_stored[deg - 1])
ax.set_ylim((-10, 15))
ax.set_title("degree {}".format(deg))
# save_fig('polyfitDegree{}.pdf'.format(deg))
plt.show()
# Plot residuals
chosen_degs = [1, 2, 14, 20]
fig, axes = plt.subplots(2, 2, figsize=(10, 7))
axes = axes.reshape(-1)
for i, deg in enumerate(chosen_degs):
# fig, ax = plt.subplots(figsize=(3,2))
ax = axes[i]
ypred = ytrain_pred_stored[deg - 1]
residuals = ytrain - ypred
ax.plot(ypred, residuals, "o")
ax.set_title("degree {}".format(deg))
# save_fig('polyfitDegree{}Residuals.pdf'.format(deg))
plt.show()
# Plot fit vs actual
chosen_degs = [1, 2, 14, 20]
fig, axes = plt.subplots(2, 2, figsize=(10, 7))
axes = axes.reshape(-1)
for i, deg in enumerate(chosen_degs):
for train in [True, False]:
if train:
ytrue = ytrain
ypred = ytrain_pred_stored[deg - 1]
dataset = "Train"
else:
ytrue = ytest
ypred = ytest_pred_stored[deg - 1]
dataset = "Test"
# fig, ax = plt.subplots()
ax = axes[i]
ax.scatter(ytrue, ypred)
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
ax.set_xlabel("true y")
ax.set_ylabel("predicted y")
r2 = sklearn.metrics.r2_score(ytrue, ypred)
ax.set_title("degree {}. R2 on {} = {:0.3f}".format(deg, dataset, r2))
# save_fig('polyfitDegree{}FitVsActual{}.pdf'.format(deg, dataset))
plt.show()
import sklearn.datasets
import sklearn.linear_model as lm
from sklearn.model_selection import train_test_split
boston = sklearn.datasets.load_boston()
X = boston.data
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
scaler = sklearn.preprocessing.StandardScaler()
scaler = scaler.fit(X_train)
Xscaled = scaler.transform(X_train)
# equivalent to Xscaled = scaler.fit_transform(X_train)
# Fit model
linreg = lm.LinearRegression()
linreg.fit(Xscaled, y_train)
# Extract parameters
coef = np.append(linreg.coef_, linreg.intercept_)
names = np.append(boston.feature_names, "intercept")
print(names)
print(coef)
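# A quick, hedged look at which (standardized) features matter most: pair each
# coefficient with its name and sort by absolute magnitude.
order = np.argsort(-np.abs(coef))
for n, c in zip(names[order], coef[order]):
    print("{:>10s} {:8.3f}".format(n, c))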
# Assess fit on test set
Xtest_scaled = scaler.transform(X_test)
ypred = linreg.predict(Xtest_scaled)
plt.figure()
plt.scatter(y_test, ypred)
plt.xlabel("true price")
plt.ylabel("predicted price")
mse = sklearn.metrics.mean_squared_error(y_test, ypred)
plt.title("Boston housing, rmse {:.2f}".format(np.sqrt(mse)))
xs = np.linspace(min(y), max(y), 100)
plt.plot(xs, xs, "-")
# save_fig("boston-housing-predict.pdf")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error as mse
xtrain, ytrain, xtest, ytest = make_1dregression_data(n=21)
# Rescaling data
scaler = MinMaxScaler(feature_range=(-1, 1))
Xtrain = scaler.fit_transform(xtrain.reshape(-1, 1))
Xtest = scaler.transform(xtest.reshape(-1, 1))
deg = 14
alphas = np.logspace(-10, 1.3, 10)
nalphas = len(alphas)
mse_train = np.empty(nalphas)
mse_test = np.empty(nalphas)
ytest_pred_stored = dict()
for i, alpha in enumerate(alphas):
model = Ridge(alpha=alpha, fit_intercept=False)
poly_features = PolynomialFeatures(degree=deg, include_bias=False)
Xtrain_poly = poly_features.fit_transform(Xtrain)
model.fit(Xtrain_poly, ytrain)
ytrain_pred = model.predict(Xtrain_poly)
Xtest_poly = poly_features.transform(Xtest)
ytest_pred = model.predict(Xtest_poly)
mse_train[i] = mse(ytrain_pred, ytrain)
mse_test[i] = mse(ytest_pred, ytest)
ytest_pred_stored[alpha] = ytest_pred
# Plot MSE vs degree
fig, ax = plt.subplots()
mask = [True] * nalphas
ax.plot(alphas[mask], mse_test[mask], color="r", marker="x", label="test")
ax.plot(alphas[mask], mse_train[mask], color="b", marker="s", label="train")
ax.set_xscale("log")
ax.legend(loc="upper right", shadow=True)
plt.xlabel("L2 regularizer")
plt.ylabel("mse")
# save_fig('polyfitVsRidge.pdf')
plt.show()
# Plot fitted functions
chosen_alphas = alphas[[0, 5, 8]]
for i, alpha in enumerate(chosen_alphas):
fig, ax = plt.subplots()
ax.scatter(xtrain, ytrain)
ax.plot(xtest, ytest_pred_stored[alpha])
plt.title("L2 regularizer {:0.5f}".format(alpha))
# save_fig('polyfitRidge{}.pdf'.format(i))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The child class will use its own initialization method, if not specified otherwise.
Step2: If we want to use initialization from the parent class, we can do that using
Step6: Operation
Step7: Example Operations
Step8: Multiplication
Step9: Matrix Multiplication
Step11: Placeholders
Step13: Variables
Step15: Graph
Step16: A Basic Graph
Step17: Session
Step20: Traversing Operation Nodes
Step21: The result should look like
Step22: Looks like we did it!
Step23: Activation Function
Step24: Sigmoid as an Operation
Step25: Classification Example
Step26: Defining the Perceptron
Step27: Or if we have (4,-10)
Step28: Using an Example Session Graph
|
<ASSISTANT_TASK:>
Python Code:
class SimpleClass():
def __init__(self, str_input):
print("SIMPLE" + str_input)
class ExtendedClass(SimpleClass):
def __init__(self):
print('EXTENDED')
s = ExtendedClass()
class ExtendedClass(SimpleClass):
def __init__(self):
super().__init__(" My String")
print('EXTENDED')
s = ExtendedClass()
class Operation():
    """
    An Operation is a node in a "Graph". TensorFlow will also use this concept of a Graph.
    This Operation class will be inherited by other classes that actually compute the specific
    operation, such as adding or matrix multiplication.
    """
def __init__(self, input_nodes = []):
        """Initialize an Operation"""
# The list of input nodes
self.input_nodes = input_nodes
# Initialize list of nodes consuming this node's output
self.output_nodes = []
# For every node in the input, we append this operation (self) to the list of
# the consumers of the input nodes
for node in input_nodes:
node.output_nodes.append(self)
# There will be a global default graph (TensorFlow works this way)
# We will then append this particular operation
# Append this operation to the list of operations in the currently active default graph
_default_graph.operations.append(self)
def compute(self):
        """
        This is a placeholder function. It will be overwritten by the actual specific operation
        that inherits from this class.
        """
pass
class add(Operation):
def __init__(self, x, y):
super().__init__([x, y])
def compute(self, x_var, y_var):
self.inputs = [x_var, y_var]
return x_var + y_var
class multiply(Operation):
def __init__(self, a, b):
super().__init__([a, b])
def compute(self, a_var, b_var):
self.inputs = [a_var, b_var]
return a_var * b_var
class matmul(Operation):
def __init__(self, a, b):
super().__init__([a, b])
def compute(self, a_mat, b_mat):
self.inputs = [a_mat, b_mat]
return a_mat.dot(b_mat)
class Placeholder():
    """
    A placeholder is a node that needs to be provided a value for computing the output in the Graph.
    In case of supervised learning, X (input) and Y (output) will require placeholders.
    """
def __init__(self):
self.output_nodes = []
_default_graph.placeholders.append(self)
class Variable():
    """
    This variable is a changeable parameter of the Graph.
    For a simple neural network, it will be the weights and biases.
    """
def __init__(self, initial_value = None):
self.value = initial_value
self.output_nodes = []
_default_graph.variables.append(self)
class Graph():
def __init__(self):
self.operations = []
self.placeholders = []
self.variables = []
def set_as_default(self):
        """Sets this Graph instance as the Global Default Graph"""
global _default_graph
_default_graph = self
g = Graph()
g.set_as_default()
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
A = Variable(10)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
b = Variable(1)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
# Will be filled out later
x = Placeholder()
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
y = multiply(A,x)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
z = add(y, b)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
import numpy as np
def traverse_postorder(operation):
    """
    PostOrder Traversal of Nodes.
    Basically makes sure computations are done in the correct order (Ax first, then Ax + b).
    """
nodes_postorder = []
def recurse(node):
if isinstance(node, Operation):
for input_node in node.input_nodes:
recurse(input_node)
nodes_postorder.append(node)
recurse(operation)
return nodes_postorder
class Session:
def run(self, operation, feed_dict = {}):
        """
        operation: The operation to compute
        feed_dict: Dictionary mapping placeholders to input values (the data)
        """
# Puts nodes in correct order
nodes_postorder = traverse_postorder(operation)
print("Post Order:")
print(nodes_postorder)
for node in nodes_postorder:
if type(node) == Placeholder:
node.output = feed_dict[node]
elif type(node) == Variable:
node.output = node.value
else: # Operation
node.inputs = [input_node.output for input_node in node.input_nodes]
node.output = node.compute(*node.inputs)
# Convert lists to numpy arrays
if type(node.output) == list:
node.output = np.array(node.output)
# Return the requested node value
return operation.output
sess = Session()
result = sess.run(operation = z,
feed_dict = {x : 10})
result
10 * 10 + 1
# Running just y = Ax
# The post order should be only up to
result = sess.run(operation = y,
feed_dict = {x : 10})
result
g = Graph()
g.set_as_default()
A = Variable([[10, 20], [30, 40]])
b = Variable([1, 1])
x = Placeholder()
y = matmul(A,x)
z = add(y,b)
sess = Session()
result = sess.run(operation = z,
feed_dict = {x : 10})
result
import matplotlib.pyplot as plt
%matplotlib inline
# Defining sigmoid function
def sigmoid(z):
return 1 / (1 + np.exp(-z))
sample_z = np.linspace(-10, 10, 100)
sample_a = sigmoid(sample_z)
plt.figure(figsize = (8, 8))
plt.title("Sigmoid")
plt.plot(sample_z, sample_a)
class Sigmoid(Operation):
def __init__(self, z):
# a is the input node
super().__init__([z])
def compute(self, z_val):
return 1 / (1 + np.exp(-z_val))
from sklearn.datasets import make_blobs
# Creating 50 samples divided into 2 blobs with 2 features
data = make_blobs(n_samples = 50,
n_features = 2,
centers = 2,
random_state = 75)
data
features = data[0]
plt.scatter(features[:,0],features[:,1])
labels = data[1]
plt.scatter(x = features[:,0],
y = features[:,1],
c = labels,
cmap = 'coolwarm')
# DRAW A LINE THAT SEPERATES CLASSES
x = np.linspace(0, 11 ,10)
y = -x + 5
plt.scatter(features[:,0],
features[:,1],
c = labels,
cmap = 'coolwarm')
plt.plot(x,y)
z = np.array([1, 1]).dot(np.array([[8], [10]])) - 5
print(z)
a = 1 / (1 + np.exp(-z))
print(a)
z = np.array([1,1]).dot(np.array([[2],[-10]])) - 5
print(z)
a = 1 / (1 + np.exp(-z))
print(a)
g = Graph()
g.set_as_default()
x = Placeholder()
w = Variable([1,1])
b = Variable(-5)
z = add(matmul(w,x),b)
a = Sigmoid(z)
sess = Session()
sess.run(operation = a,
feed_dict = {x : [8, 10]})
sess.run(operation = a,
feed_dict = {x : [2, -10]})
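# A small convenience sketch (not in the original): run any point through the
# graph and threshold the sigmoid output at 0.5 to get a class label.
def classify(point):
    prob = sess.run(operation = a, feed_dict = {x : point})
    return prob, int(prob > 0.5)
print(classify([8, 10])) # expect class 1 (above the line)
print(classify([2, -10])) # expect class 0 (below the line)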
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Interact with SVG display
Step5: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step6: Use interactive to build a user interface for exploring the draw_circle function
Step7: Use the display function to show the widgets created by interactive
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display, SVG
from IPython.display import Javascript
s = """
<svg width="100" height="100">
  <circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
        The fill color of the circle.
    """
#TRIPLE QUOTES GIVE YOU LINE CAPABILITIES
    l = """
    <svg width="%d" height="%d">
      <circle cx="%d" cy="%d" r="%d" fill="%s" />
    </svg>
    """
    svg = l % (width, height, cx, cy, r, fill)
display(SVG(svg))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
#?interactive
w = interactive(draw_circle, width = fixed(300), height = fixed(300), cx=[0,300], cy=[0,300], r=[0,50], fill = 'red') # default fill 'red' so the assert below passes
w
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
display(w)
assert True # leave this to grade the display of the widget
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CHALLENGE
Step2: Task #2
Step3: CHALLENGE
|
<ASSISTANT_TASK:>
Python Code:
# Import spaCy and load the language library. Remember to use a larger model!
import spacy
nlp = spacy.load('en_core_web_md')
# Choose the words you wish to compare, and obtain their vectors
word1 = nlp.vocab['wolf'].vector
word2 = nlp.vocab['dog'].vector
word3 = nlp.vocab['cat'].vector
# Import spatial and define a cosine_similarity function
from scipy import spatial
cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)
# Write an expression for vector arithmetic
# For example: new_vector = word1 - word2 + word3
new_vector = word1 - word2 + word3
# List the top ten closest vectors in the vocabulary to the result of the expression above
computed_similarities = []
for word in nlp.vocab:
if word.has_vector:
if word.is_lower:
if word.is_alpha:
similarity = cosine_similarity(new_vector, word.vector)
computed_similarities.append((word, similarity))
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])
print([w[0].text for w in computed_similarities[:10]])
def vector_math(a,b,c):
new_vector = nlp.vocab[a].vector - nlp.vocab[b].vector + nlp.vocab[c].vector
computed_similarities = []
for word in nlp.vocab:
if word.has_vector:
if word.is_lower:
if word.is_alpha:
similarity = cosine_similarity(new_vector, word.vector)
computed_similarities.append((word, similarity))
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])
return [w[0].text for w in computed_similarities[:10]]
# Test the function on known words:
vector_math('king','man','woman')
# Import SentimentIntensityAnalyzer and create an sid object
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
# Write a review as one continuous string (multiple sentences are ok)
review = 'This movie portrayed real people, and was based on actual events.'
# Obtain the sid scores for your review
sid.polarity_scores(review)
def review_rating(string):
scores = sid.polarity_scores(string)
if scores['compound'] == 0:
return 'Neutral'
elif scores['compound'] > 0:
return 'Positive'
else:
return 'Negative'
# Test the function on your review above:
review_rating(review)
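# Sanity check (hedged): a clearly negative string should come back 'Negative'.
review_rating('This movie was a boring, predictable waste of time.')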
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the Yelp Question in HW1, please normalize the data so that it has the same L2 norm. We will grade it either way, but please state clearly what you did to treat the yelp data, which is currently not normalized.
Step2: Train models for varying lambda values. Calculate training error for each model.
Step3: Star data
|
<ASSISTANT_TASK:>
Python Code:
# Load a text file of integers:
y = np.loadtxt("yelp_data/upvote_labels.txt", dtype=np.int)
# Load a text file with strings identifying the 1000 features:
featureNames = open("yelp_data/upvote_features.txt").read().splitlines()
featureNames = np.array(featureNames)
# Load a csv of floats, which are the values of 1000 features (columns) for 6000 samples (rows):
A = np.genfromtxt("yelp_data/upvote_data.csv", delimiter=",")
norms = np.apply_along_axis(np.linalg.norm,0,A)
A = A / norms
# print(np.apply_along_axis(np.linalg.norm,0,A))
# Randomize input order
np.random.seed(12345)
shuffler = np.arange(len(y))
np.random.shuffle(shuffler)
A = A[shuffler,:]
y = y[shuffler]
#data_splits = (4000, 5000) # HW setting
data_splits = (2000, 2500)# faster setting
A_train = A[:data_splits[0], :]; y_train = y[:data_splits[0]]
A_val = A[data_splits[0]:data_splits[1], :]; y_val = y[data_splits[0]:data_splits[1]]
A_test = A[data_splits[1]:, :]; y_test = y[data_splits[1]:]
A_train.shape
result = RegularizationPathTrainTest(X_train=A_train[0:100, 0:50], y_train=y_train[0:100], feature_names=featureNames, lam_max=1,
X_val=A_val[0:100, 0:50], y_val=y_val[0:100,], steps=2, frac_decrease=0.05,
delta = 0.001)
result.results_df
result.analyze_path()
result.results_df
result = RegularizationPathTrainTest(X_train=A_train, y_train=y_train, feature_names=featureNames, lam_max=100,
X_val=A_val, y_val=y_val, steps=5, frac_decrease=0.7, delta=0.01)
result.analyze_path()
print(A_train.shape)
print(A_val.shape)
print(A_test.shape)
# Plot train and validation RMSE as a function of lambda
fig, ax = plt.subplots(1, 1, figsize=(5, 4))
colors = {1: 'gray', 10: 'b'}
data = result.results_df.copy()
plt.semilogx(data['lam'], data['RMSE (validation)'], linestyle='--', marker='o', color='g')
plt.semilogx(data['lam'], data['RMSE (training)'], linestyle='--', marker='o', color='#D1D1D1')
#for key,grp in data.groupby('sigma'):
# print (key)
# plt.semilogx(grp.lam, grp.recall, linestyle='--', marker='o',
# color=colors[key], label='sigma = {}'.format(key)) #, t, t**2, 'bs', t, t**3, 'g^')
plt.legend(loc = 'best')
plt.xlabel('lambda')
plt.ylabel('RMSE')
#ax.set_ylim([0.55, 1.05])
# Plot the number of nonzero coefficients as a function of lambda
fig, ax = plt.subplots(1, 1, figsize=(4,3))
colors = {1: 'gray', 10: 'b'}
data = result.results_df.copy()
plt.semilogx(data['lam'], data['# nonzero coefficients'], linestyle='--', marker='o', color='b')
#for key,grp in data.groupby('sigma'):
# print (key)
# plt.semilogx(grp.lam, grp.recall, linestyle='--', marker='o',
# color=colors[key], label='sigma = {}'.format(key)) #, t, t**2, 'bs', t, t**3, 'g^')
plt.legend(loc = 'best')
plt.xlabel('lambda')
plt.ylabel('num nonzero coefficients')
#ax.set_ylim([0.55, 1.05])
assert False
# Load a text file of integers:
y = np.loadtxt("yelp_data/star_labels.txt", dtype=np.int)
# Load a text file with strings identifying the 2500 features:
featureNames = open("yelp_data/star_features.txt").read().splitlines()
# Load a matrix market matrix with 45000 samples of 2500 features, convert it to csc format:
A = sp.csc_matrix(io.mmread("yelp_data/star_data.mtx"))
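# A sketch of treating the star data like the upvote data above (assumption:
# the same shuffle-then-split pattern; integer row indexing works on csc).
np.random.seed(12345)
shuffler = np.arange(len(y))
np.random.shuffle(shuffler)
A = A[shuffler, :]
y = y[shuffler]
print(A.shape, y.shape)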
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can populate the object with a dataset id, variables of interest, and its constraints.
Step2: Longer introduction
Step3: All the get_<methods> will return a valid ERDDAP URL for the requested response and options. For example, a search for all datasets available.
Step4: There are many responses available, see the docs for griddap and
Step5: We can refine our search by providing some constraints.
Step6: Note that the search form was populated with the constraints we provided.
Step7: Now that we know the Dataset ID we can explore their metadata with the get_info_url method.
Step8: We can manipulate the metadata and find the variables that have the cdm_profile_variables attribute using the csv response.
Step9: Selecting variables by theirs attributes is such a common operation that erddapy brings its own method to simplify this task.
Step10: Another way to browse datasets is via the categorize URL. In the example below we can get all the standard_names available in the dataset with a single request.
Step11: We can also pass a value to filter the categorize results.
Step12: Let's create a map of all the glider tracks from WHOI.
Step13: Finally let's see some figures!
Step14: Extra convenience methods for common responses
Step15: netCDF Climate and Forecast
Step16: xarray
Step17: Tabledap represents all data in tabular form and the next steps, while a bit awkward, are necessary to match the dimensions properly. The griddap response (unsupported at the moment) does not have this limitation.
Step18: iris
|
<ASSISTANT_TASK:>
Python Code:
from erddapy import ERDDAP
e = ERDDAP(
server="https://gliders.ioos.us/erddap",
protocol="tabledap",
response="csv",
)
e.dataset_id = "whoi_406-20160902T1700"
e.variables = [
"depth",
"latitude",
"longitude",
"salinity",
"temperature",
"time",
]
e.constraints = {
"time>=": "2016-07-10T00:00:00Z",
"time<=": "2017-02-10T00:00:00Z",
"latitude>=": 38.0,
"latitude<=": 41.0,
"longitude>=": -72.0,
"longitude<=": -69.0,
}
df = e.to_pandas(
index_col="time (UTC)",
parse_dates=True,
).dropna()
df.head()
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
fig, ax = plt.subplots(figsize=(17, 2))
cs = ax.scatter(
df.index,
df["depth (m)"],
s=15,
c=df["temperature (Celsius)"],
marker="o",
edgecolor="none"
)
ax.invert_yaxis()
ax.set_xlim(df.index[0], df.index[-1])
xfmt = mdates.DateFormatter("%H:%Mh\n%d-%b")
ax.xaxis.set_major_formatter(xfmt)
cbar = fig.colorbar(cs, orientation="vertical", extend="both")
cbar.ax.set_ylabel(r"Temperature ($^\circ$C)")
ax.set_ylabel("Depth (m)");
from erddapy import ERDDAP
e = ERDDAP(server="https://gliders.ioos.us/erddap")
[method for method in dir(e) if not method.startswith("_")]
url = e.get_search_url(search_for="all", response="csv")
print(url)
import pandas as pd
df = pd.read_csv(url)
print(
f'We have {len(set(df["tabledap"].dropna()))} '
f'tabledap, {len(set(df["griddap"].dropna()))} '
f'griddap, and {len(set(df["wms"].dropna()))} wms endpoints.'
)
from erddapy.utilities import show_iframe
kw = {
"standard_name": "sea_water_temperature",
"min_lon": -72.0,
"max_lon": -69.0,
"min_lat": 38.0,
"max_lat": 41.0,
"min_time": "2016-07-10T00:00:00Z",
"max_time": "2017-02-10T00:00:00Z",
"cdm_data_type": "trajectoryprofile"
}
search_url = e.get_search_url(response="html", **kw)
show_iframe(search_url)
search_url = e.get_search_url(response="csv", **kw)
search = pd.read_csv(search_url)
gliders = search["Dataset ID"].values
gliders_list = "\n".join(gliders)
print(f"Found {len(gliders)} Glider Datasets:\n{gliders_list}")
glider = gliders[-1]
info_url = e.get_info_url(dataset_id=glider, response="html")
show_iframe(src=info_url)
info_url = e.get_info_url(dataset_id=glider, response='csv')
info = pd.read_csv(info_url)
info.head()
"".join(info.loc[info["Attribute Name"] == "cdm_profile_variables", "Value"])
%%time
# First one, slow.
e.get_var_by_attr(
dataset_id="whoi_406-20160902T1700",
standard_name="sea_water_temperature"
)
%%time
# Second one on the same glider, a little bit faster.
e.get_var_by_attr(
dataset_id="whoi_406-20160902T1700",
standard_name="sea_water_practical_salinity"
)
%%time
# New one, slow again.
e.get_var_by_attr(
dataset_id="cp_336-20170116T1254",
standard_name="sea_water_practical_salinity"
)
url = e.get_categorize_url(
categorize_by="standard_name",
response="csv"
)
pd.read_csv(url)["Category"]
url = e.get_categorize_url(
categorize_by="institution",
value="woods_hole_oceanographic_institution",
response="csv"
)
df = pd.read_csv(url)
whoi_gliders = df.loc[~df["tabledap"].isnull(), "Dataset ID"].tolist()
whoi_gliders
from joblib import Parallel, delayed
import multiprocessing
def request_whoi(dataset_id):
e.constraints = None
e.protocol = "tabledap"
e.variables = ["longitude", "latitude", "temperature", "salinity"]
e.dataset_id = dataset_id
# Drop units in the first line and NaNs.
df = e.to_pandas(response="csv", skiprows=(1,)).dropna()
return (dataset_id, df)
num_cores = multiprocessing.cpu_count()
downloads = Parallel(n_jobs=num_cores)(
delayed(request_whoi)(dataset_id) for dataset_id in whoi_gliders
)
dfs = {glider: df for (glider, df) in downloads}
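# Quick sanity check on the parallel downloads (hedged: the column names come
# from e.variables above because skiprows=(1,) dropped the units row).
for glider, df in dfs.items():
    print(f"{glider}: {len(df)} rows")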
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
def make_map():
fig, ax = plt.subplots(
figsize=(9, 9),
subplot_kw=dict(projection=ccrs.PlateCarree())
)
ax.coastlines(resolution="10m")
lon_formatter = LongitudeFormatter(zero_direction_label=True)
lat_formatter = LatitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
return fig, ax
fig, ax = make_map()
lons, lats = [], []
for glider, df in dfs.items():
lon, lat = df["longitude"], df["latitude"]
lons.extend(lon.array)
lats.extend(lat.array)
ax.plot(lon, lat)
dx = dy = 0.25
extent = min(lons)-dx, max(lons)+dx, min(lats)+dy, max(lats)+dy
ax.set_extent(extent)
ax.set_xticks([extent[0], extent[1]], crs=ccrs.PlateCarree())
ax.set_yticks([extent[2], extent[3]], crs=ccrs.PlateCarree());
def glider_scatter(df, ax):
ax.scatter(df["temperature"], df["salinity"],
s=10, alpha=0.25)
fig, ax = plt.subplots(figsize=(9, 9))
ax.set_ylabel("salinity")
ax.set_xlabel("temperature")
ax.grid(True)
for glider, df in dfs.items():
glider_scatter(df, ax)
ax.axis([5.5, 30, 30, 38]);
from netCDF4 import Dataset
e.constraints = None
e.protocol = "tabledap"
e.dataset_id = "whoi_406-20160902T1700"
opendap_url = e.get_download_url(
response="opendap",
)
print(opendap_url)
with Dataset(opendap_url) as nc:
print(nc.summary)
e.response = "nc"
e.variables = ["longitude", "latitude", "temperature", "salinity"]
nc = e.to_ncCF()
print(nc.Conventions)
print(nc["temperature"])
ds = e.to_xarray(decode_times=False)
ds
row_size = ds["rowSize"].values
lon = ds["longitude"].values
lat = ds["latitude"].values
lons, lats = [], []
for x, y, r in zip(lon, lat, row_size):
lons.extend([x]*r)
lats.extend([y]*r)
import numpy as np
data = ds["temperature"].values
depth = ds["depth"].values
mask = ~np.ma.masked_invalid(depth).mask
data = data[mask]
depth = depth[mask]
lons = np.array(lons)[mask]
lats = np.array(lats)[mask]
mask = depth <= 5
data = data[mask]
depth = depth[mask]
lons = lons[mask]
lats = lats[mask]
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
dx = dy = 1.5
extent = (
ds.geospatial_lon_min-dx, ds.geospatial_lon_max+dx,
ds.geospatial_lat_min-dy, ds.geospatial_lat_max+dy
)
fig, ax = make_map()
cs = ax.scatter(lons, lats, c=data, s=50, alpha=0.5, edgecolor="none")
cbar = fig.colorbar(cs, orientation="vertical",
fraction=0.1, shrink=0.9, extend="both")
ax.set_extent(extent)
ax.coastlines("10m");
import warnings
# Iris warnings are quire verbose!
with warnings.catch_warnings():
warnings.simplefilter("ignore")
cubes = e.to_iris()
print(cubes)
cubes.extract_strict("sea_water_temperature")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup a New Directory and Change Paths
Step2: Define the Model Extent, Grid Resolution, and Characteristics
Step3: Create the MODFLOW Model Object
Step4: Discretization Package
Step5: Basic Package
Step6: Layer Property Flow Package
Step7: Well Package
Step8: Output Control
Step9: Preconditioned Conjugate Gradient Solver
Step10: Recharge Package
Step11: Writing the MODFLOW Input Files
Step12: Yup. It's that simple, the model datasets are written using a single command (mf.write_input).
Step13: Running the Model
Step14: Post Processing the Results
Step15: Look at the bottom of the MODFLOW output file (ending with a *.list) and note the water balance reported.
Step16: Testing your Skills
Step17: Why is there less water in the system than when the gw divide was not simulated with no-flow cells?
Step18: Why when the gw divide is simulated as a no-flow might the total flux to Green Swamp be different (and lower)?
Step19: P4.3 Part d.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import shutil
import numpy as np
from subprocess import check_output
# Import flopy
import flopy
# Set the name of the path to the model working directory
dirname = "P4-3_Hubbertville"
datapath = os.getcwd()
modelpath = os.path.join(datapath, dirname)
print 'Name of model path: ', modelpath
# Now let's check if this directory exists. If not, then we will create it.
if os.path.exists(modelpath):
print 'Model working directory already exists.'
else:
print 'Creating model working directory.'
os.mkdir(modelpath)
# model domain and grid definition
# for clarity, user entered variables are all caps; python syntax are lower case or mixed case
# In a contrast to P4.1 and P4.2, this is an areal 2D model
LX = 4500.
LY = 11000. # note that there is an added 500m on the top and bottom to represent the boundary conditions,that leaves an aqufier lenght of 10000 m
ZTOP = 1030. # the system is unconfined so set the top above land surface so that the water table never > layer top
ZBOT = 980.
NLAY = 1
NROW = 22
NCOL = 9
DELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)
DELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)
DELV = (ZTOP - ZBOT) / NLAY
BOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)
HK = 50.
VKA = 1.
RCH = 0.001
WELLQ = 0. #recall MODFLOW convention, negative means pumped out of the model domain (=aquifer)
print "DELR =", DELR, " DELC =", DELC, ' DELV =', DELV
print "BOTM =", BOTM
print "Recharge =", RCH
print "Pumping well rate =", WELLQ
# Assign name and create modflow model object
modelname = 'P4-3'
#exe_name = os.path.join(datapath, 'mf2005.exe') # for Windows OS
exe_name = os.path.join(datapath, 'mf2005') # for Mac OS
print 'Model executable: ', exe_name
MF = flopy.modflow.Modflow(modelname, exe_name=exe_name, model_ws=modelpath)
# Create the discretization object
TOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)
DIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,
top=TOP, botm=BOTM[1:], laycbd=0)
# print DIS_PACKAGE #uncomment this on far left to see information about the flopy object
# Variables for the BAS package
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)
# make the top of the profile specified head by setting the IBOUND = -1
IBOUND[:, 0, :] = -1 #don't forget arrays are zero-based!
IBOUND[:, -1, :] = -1 #-1 is Python for last in array
print IBOUND
STRT = 1015 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1015 m throughout model domain
STRT[:, 0, :] = 1000. # river stage for setting constant head
STRT[:, -1, :] = 1000. # wetland stage for setting constant head
print STRT
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object
LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, laytyp=1, hk=HK, vka=VKA) # we defined the K and anisotropy at top of file
# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object
WEL_PACKAGE = flopy.modflow.ModflowWel(MF, stress_period_data=[0,6,4,WELLQ]) # remember python 0 index, layer 0 = layer 1 in MF
#print WEL_PACKAGE # uncomment this at far left to see the information about the flopy WEL object
OC_PACKAGE = flopy.modflow.ModflowOc(MF) # we'll use the defaults for the model output
# print OC_PACKAGE # uncomment this at far left to see the information about the flopy OC object
PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5)
# print PCG_PACKAGE # uncomment this at far left to see the information about the flopy PCG object
RCH_PACKAGE = flopy.modflow.ModflowRch(MF, rech=RCH)
# print RCH_PACKAGE # uncomment this at far left to see the information about the flopy RCH object
#Before writing input, destroy all files in folder to prevent reusing old files
#Here's the working directory
print modelpath
#Here's what's currently in the working directory
modelfiles = os.listdir(modelpath)
print modelfiles
#delete these files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
# return current working directory
print "You can check the newly created files in", modelpath
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in Ipython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,13))
#setup contour levels and plot extent
LEVELS = np.arange(1000., 1011., 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 11000, 500)
AX1.set_yticks(YTICKS)
AX1.set_title("Hubbertville contour map")
AX1.text(2000, 10500, r"River", fontsize=10, color="blue")
AX1.text(1800, 340, r"Green Swamp", fontsize=10, color="green")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("Hubbertville color flood")
AX2.text(2000, 10500, r"River", fontsize=10, color="black")
AX2.text(1800, 340, r"Green Swamp", fontsize=10, color="black")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#look at the head in column = 4 from headobj, and then plot it
#print HEAD along a column; COL is a variable that allows us to change this easily
COL = 4
print HEAD[0,:,COL]
# we see this is what we want, but is flipped because MODFLOW's array does not = Python, so we reverse the order (flip them) and call it
Y = np.flipud(HEAD[0,:,COL])
print Y
#for our cross section create X-coordinates to match with heads
XCOORD = np.arange(0, 11000, 500) + 250
print XCOORD
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = 'cross section of head along Column = ({0})'.format(COL)
ax.set_title(TITLE)
ax.set_xlabel('y')
ax.set_ylabel('head')
ax.set_xlim(0, 11000.)
ax.set_ylim(980.,1020.)
ax.text(10480, 998, r"River", fontsize=10, color="blue",rotation='vertical')
ax.text(300, 998, r"Green Swamp", fontsize=10, color="green",rotation='vertical')
ax.text(5300,1009., r"Groundwater Divide", fontsize=10, color="black",rotation='vertical')
ax.plot(XCOORD, Y)
#calculate the flux to Green Swamp
HEAD_ADJACENT_CELLS = HEAD[0,-2,:]
print "heads in cells next to Green Swamp =", HEAD_ADJACENT_CELLS
FLUX_TO_SWAMP = 0
THICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness
for NODEHEAD in HEAD_ADJACENT_CELLS:
NODEFLUX = (HK * ((NODEHEAD-1000.)/(DELC)) * (DELR * THICK)) # Q = KIA
FLUX_TO_SWAMP += NODEFLUX
print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX
print "Total Flux to Swamp =", FLUX_TO_SWAMP, "cubic meters per day"
#calculate the flux to River
HEAD_ADJACENT_CELLS = HEAD[0,1,:]
print "heads in cells next to River =", HEAD_ADJACENT_CELLS
FLUX_TO_RIVER = 0
THICK = (HEAD[0,1,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness next to the River
for NODEHEAD in HEAD_ADJACENT_CELLS:
NODEFLUX = (HK * (NODEHEAD-1000.)/(DELC) * DELR * THICK) # Q = KIA
FLUX_TO_RIVER += NODEFLUX
print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX
print "Total Flux to River =", FLUX_TO_RIVER, "cubic meters per day"
print 'Flux to Green Swamp =', FLUX_TO_SWAMP, ' Flux to River =', FLUX_TO_RIVER
BCFLUX = FLUX_TO_SWAMP + FLUX_TO_RIVER
Q = WELLQ * -1
print 'Flux to BCs =', BCFLUX,', Well pumping =', Q,', Total Vol Out =', BCFLUX+Q, 'cubic meters per day'
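# Added aside: these Q = KIA estimates are approximations; MODFLOW's own exact
# volumetric budget could be read with flopy if cell-by-cell output had been
# requested (a sketch, assuming a .cbc file exists):
# cbb = bf.CellBudgetFile(os.path.join(modelpath, modelname + '.cbc'))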
# let's print out some heads by row to see which has the highest head (=the gw divide); don't forget arrays are zero-based!
print HEAD[0,9,:]
print HEAD[0,10,:]
print HEAD[0,11,:]
print HEAD[0,12,:]
# Rows 10 and 11 have highest heads; let's save these rows of heads for later
ROW10_HEAD = HEAD[0,10,:]
ROW11_HEAD = HEAD[0,11,:]
#let's reset Rows 10 and 11 to a no flow boundary (set that row to 0 in the MODFLOW IBOUND array)
IBOUND[:, 10, :] = 0
IBOUND[:, 11, :] = 0
print IBOUND
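# Added legend for IBOUND codes: 1 = active cell, 0 = inactive (no flow), -1 = specified head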
#we have to update the MODFLOW's BAS Package with the new IBOUND array
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# added MODFLOW solver here again for testing of solver convergence; the problem will solve with these settings
#PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5)
# but you can play with other settings then execute the code blocks from here on down to see effect on convergence
PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5)
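# Added note on the PCG closure criteria: hclose is the head-change criterion
# (meters here) and rclose is the residual criterion (cubic meters per day);
# tightening either makes convergence harder, which is what the comment above invites you to test.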
#Before writing input, destroy all files in folder to prevent reusing old files
#Here's the working directory
print modelpath
#Here's what's currently in the working directory
modelfiles = os.listdir(modelpath)
print modelfiles
#delete these files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
modelfiles = os.listdir(modelpath)  #refresh the listing so the newly written files (not the deleted ones) are reported
print "New MODFLOW input files = ", modelfiles
print "You can check the newly created files in", modelpath
#rerun MODFLOW-2005
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in an IPython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#As before, let's look at the results and compare to P4-3 Part a.
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
#-999.99 is the Inactive node flag so we'll use our previous contour settings
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,13))
#setup contour levels and plot extent
LEVELS = np.arange(1000., 1011., 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 11000, 500)
AX1.set_yticks(YTICKS)
AX1.set_title("Hubbertville contour map")
AX1.text(2000, 10500, r"River", fontsize=10, color="blue")
AX1.text(1800, 340, r"Green Swamp", fontsize=10, color="green")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("Hubbertville color flood")
AX2.text(2000, 10500, r"River", fontsize=10, color="black")
AX2.text(1800, 340, r"Green Swamp", fontsize=10, color="black")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
COL = 4
# recall we need to flip: MODFLOW's row order runs opposite to the plot's y-axis, so we reverse it and call the result Y
Y = np.flipud(HEAD[0,:,COL])
print Y
#for our cross section create X-coordinates to match with heads
XCOORD = np.arange(0, 11000, 500) + 250
print XCOORD
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = 'cross section of head along Column = ({0})'.format(COL)
ax.set_title(TITLE)
ax.set_xlabel('y')
ax.set_ylabel('head')
ax.set_xlim(0, 11000.)
ax.set_ylim(980.,1020.)
ax.text(10480, 998, r"River", fontsize=10, color="blue",rotation='vertical')
ax.text(300, 998, r"Green Swamp", fontsize=10, color="green",rotation='vertical')
ax.text(5400,1006., r"Groundwater Divide / Inactive cells", fontsize=10, color="black",rotation='vertical')
ax.plot(XCOORD, Y)
#calculate the flux to Green Swamp
HEAD_ADJACENT_CELLS = HEAD[0,-2,:]
print "heads in cells next to Green Swamp =", HEAD_ADJACENT_CELLS
FLUX_TO_SWAMP_NO_FLOW = 0
THICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness
for NODEHEAD in HEAD_ADJACENT_CELLS:
NODEFLUX = (HK * (NODEHEAD-1000.)/(DELC) * DELR * THICK) # Q = KIA
FLUX_TO_SWAMP_NO_FLOW += NODEFLUX
print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX
print "Total Flux to Swamp (No Flow) =", FLUX_TO_SWAMP_NO_FLOW, "cubic meters per day"
# Rows 10 and 11 had highest heads; reset Row 10 and 11 to a specified head boundary (set that row to -1 in the MODFLOW IBOUND array)
IBOUND[:, 10, :] = -1
IBOUND[:, 11, :] = -1
print IBOUND
#MODFLOW uses the starting heads to set the specified head boundary elevations
#we need to reset the starting heads in Rows 10 and 11 to what they were originally
#recall we saved these heads, and can print them to check
print "Row 10 heads =", ROW10_HEAD
print "Row 11 heads =", ROW11_HEAD
STRT[:, 10, :] = ROW10_HEAD # setting starting heads Row 10 to heads calculated in Part a.
STRT[:, 11, :] = ROW11_HEAD # setting starting heads Row 11 to heads calculated in Part a.
print STRT
#we have to update the MODFLOW's BAS Package with the new STRT heads
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
#delete old files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
modelfiles = os.listdir(modelpath)  #refresh the listing so the newly written files (not the deleted ones) are reported
print "New MODFLOW input files = ", modelfiles
print "You can check the newly created files in", modelpath
#rerun MODFLOW-2005
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in an IPython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#As before, let's look at the results and compare to P4-3 Part a.
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,13))
#setup contour levels and plot extent
LEVELS = np.arange(1000., 1011., 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 11000, 500)
AX1.set_yticks(YTICKS)
AX1.set_title("Hubbertville contour map")
AX1.text(2000, 10500, r"River", fontsize=10, color="blue")
AX1.text(1800, 340, r"Green Swamp", fontsize=10, color="green")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("Hubbertville color flood")
AX2.text(2000, 10500, r"River", fontsize=10, color="black")
AX2.text(1800, 340, r"Green Swamp", fontsize=10, color="black")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#as before let's plot a north-south cross section
COL = 4
# recall we need to flip: MODFLOW's row order runs opposite to the plot's y-axis, so we reverse it and call the result Y
Y = np.flipud(HEAD[0,:,COL])
#for our cross section create X-coordinates to match with heads
XCOORD = np.arange(0, 11000, 500) + 250
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = 'cross section of head along Column = ({0})'.format(COL)
ax.set_title(TITLE)
ax.set_xlabel('y')
ax.set_ylabel('head')
ax.set_xlim(0, 11000.)
ax.set_ylim(980.,1020.)
ax.text(10480, 998, r"River", fontsize=10, color="blue",rotation='vertical')
ax.text(300, 998, r"Green Swamp", fontsize=10, color="green",rotation='vertical')
ax.text(5400,1007., r"Groundwater Divide", fontsize=10, color="black",rotation='vertical')
ax.plot(XCOORD, Y)
#calculate the flux to Green Swamp
HEAD_ADJACENT_CELLS = HEAD[0,-2,:]
print "heads in cells next to Green Swamp =", HEAD_ADJACENT_CELLS
FLUX_TO_SWAMP_SPEC_HEAD = 0
THICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness
for NODEHEAD in HEAD_ADJACENT_CELLS:
NODEFLUX = (HK * (NODEHEAD-1000.)/(DELC) * DELR * THICK) # Q = KIA
FLUX_TO_SWAMP_SPEC_HEAD += NODEFLUX
print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX
print "Total Flux to Swamp (Specified Head) =", FLUX_TO_SWAMP_SPEC_HEAD, "cubic meters per day"
#let's compare the three formulations:
#1) gw divide simulated; 2) gw divide as no flow BC; and 3) gw divide as specified head BC
print "Flux to Swamp (simulated) =", FLUX_TO_SWAMP
print "Flux to Swamp (no flow) = ", FLUX_TO_SWAMP_NO_FLOW
print "Flux to Swamp (spec head) = ", FLUX_TO_SWAMP_SPEC_HEAD
#We have to recreate the IBOUND and STRT heads of Part a.
#This is just copied directly from Part a. above to start clean
# Variables for the BAS package
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)
# make the top of the profile specified head by setting the IBOUND = -1
IBOUND[:, 0, :] = -1 #don't forget arrays are zero-based!
IBOUND[:, -1, :] = -1 #-1 is Python for last in array
print IBOUND
#BUT in Part c. the river to the north is a head-dependent BC, not a specified head
#so we have to change it from -1 (specified head) to 1 (active cells)
IBOUND[:, 0, :] = 1 #don't forget arrays are zero-based!
print IBOUND
#In the same way, we need to reset the Starting Head array, but only need
# values set for the specified head boundary used for Green Swamp in the south
STRT = 1015 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1015 m throughout the model domain
STRT[:, -1, :] = 1000. # wetland stage for setting constant head
print STRT
#we have to update the MODFLOW's BAS Package with the new IBOUND and STRT heads
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
#now we need to add a head-dependent boundary (HDB) - the RIV Package is a good choice
#recall that a RIV node has a river stage, a conductance, and a bottom elevation
RIV_STAGE = 1000.
Kv_RIVER = 5.
b_RIVER = 1.
WIDTH_RIVER = 500.
SED_BOT_RIVER = 995.
# conductance = leakance x cross-sectional area
# leakance = Kv/b
RIV_LEAKANCE = Kv_RIVER / b_RIVER
print "River sediment leakance =", RIV_LEAKANCE
#the wetted river area per node is DELR x river width (here the 500 m wide river spans the full DELC cell)
print "DELR = ", DELR
print "River width =", WIDTH_RIVER
print "River area in node =", DELR * WIDTH_RIVER
#conductance is leakance x area
RIV_COND = RIV_LEAKANCE * DELR * WIDTH_RIVER
print 'River Conductance =', RIV_COND
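# Added worked numbers: leakance = Kv/b = 5/1 = 5 per day; wetted area per node
# = DELR x width = 500 x 500 = 250,000 m2; so C = 5 x 250,000 = 1,250,000 m2/day.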
#We enter RIV Package data by "layer-row-column-data" = lrcd
stress_period_data = [
[0, 0, 0, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #layer, row, column, stage, conductance, river bottom
[0, 0, 1, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #remember Python indexing is zero based
[0, 0, 2, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 3, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 4, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 5, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 6, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 7, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 8, RIV_STAGE, RIV_COND, SED_BOT_RIVER]]
print stress_period_data
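# Added: an equivalent, more compact way to build the same list with a list
# comprehension (assumes the river occupies every column of row 0):
# stress_period_data = [[0, 0, j, RIV_STAGE, RIV_COND, SED_BOT_RIVER] for j in range(NCOL)]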
riv = flopy.modflow.ModflowRiv(MF, stress_period_data=stress_period_data)
#delete old files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
modelfiles = os.listdir(modelpath)  #refresh the listing so the newly written files (not the deleted ones) are reported
print "New MODFLOW input files = ", modelfiles
print "You can check the newly created files in", modelpath
#rerun MODFLOW-2005
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in an IPython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#As before, let's look at the results and compare to P4-3 Part a.
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,13))
#setup contour levels and plot extent
LEVELS = np.arange(1000., 1011., 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 11000, 500)
AX1.set_yticks(YTICKS)
AX1.set_title("Hubbertville contour map")
AX1.text(2000, 10500, r"River", fontsize=10, color="blue")
AX1.text(1800, 340, r"Green Swamp", fontsize=10, color="green")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("Hubbertville color flood")
AX2.text(2000, 10500, r"River", fontsize=10, color="black")
AX2.text(1800, 340, r"Green Swamp", fontsize=10, color="black")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#as before let's plot a north-south cross section
COL = 4
# recall we need to flip: MODFLOW's row order runs opposite to the plot's y-axis, so we reverse it and call the result Y
Y = np.flipud(HEAD[0,:,COL])
#for our cross section create X-coordinates to match with heads
XCOORD = np.arange(0, 11000, 500) + 250
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = 'cross section of head along Column = ({0})'.format(COL)
ax.set_title(TITLE)
ax.set_xlabel('y')
ax.set_ylabel('head')
ax.set_xlim(0, 11000.)
ax.set_ylim(980.,1020.)
ax.text(10480, 998, r"River", fontsize=10, color="blue",rotation='vertical')
ax.text(300, 998, r"Green Swamp", fontsize=10, color="green",rotation='vertical')
ax.text(5400,1007., r"Groundwater Divide", fontsize=10, color="black",rotation='vertical')
ax.plot(XCOORD, Y)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
WIDTH_RIVER = 5.
#the wetted river area per node is now DELR x river width, not the full nodal area
print "DELR = ", DELR
print "River width =", WIDTH_RIVER
print "River area in node =", DELR * WIDTH_RIVER
#conductance is leakance x area
print 'River Leakance = ', RIV_LEAKANCE
RIV_COND = RIV_LEAKANCE * DELR * WIDTH_RIVER
print 'River Conductance =', RIV_COND
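# Added: narrowing the river from 500 m to 5 m cuts the wetted area, and hence
# the conductance, by a factor of 100 (C = 5 x 500 x 5 = 12,500 m2/day).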
#We enter RIV Package data by "layer-row-column-data" = lrcd
stress_period_data = [
[0, 0, 0, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #layer, row, column, stage, conductance, river bottom
[0, 0, 1, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #remember Python indexing is zero based
[0, 0, 2, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 3, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 4, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 5, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 6, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 7, RIV_STAGE, RIV_COND, SED_BOT_RIVER],
[0, 0, 8, RIV_STAGE, RIV_COND, SED_BOT_RIVER]]
print stress_period_data
riv = flopy.modflow.ModflowRiv(MF, stress_period_data=stress_period_data)
#delete old files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files and rerun MODFLOW
MF.write_input()
modelfiles = os.listdir(modelpath)  #refresh the listing so the newly written files (not the deleted ones) are reported
print "New MODFLOW input files = ", modelfiles
print "You can check the newly created files in", modelpath
#rerun MODFLOW-2005
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in an IPython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#As before, let's look at the results and compare to P4-3 Part a.
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,13))
#setup contour levels and plot extent
LEVELS = np.arange(1000., 1011., 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 11000, 500)
AX1.set_yticks(YTICKS)
AX1.set_title("Hubbertville contour map")
AX1.text(2000, 10500, r"River", fontsize=10, color="blue")
AX1.text(1800, 340, r"Green Swamp", fontsize=10, color="green")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("Hubbertville color flood")
AX2.text(2000, 10500, r"River", fontsize=10, color="black")
AX2.text(1800, 340, r"Green Swamp", fontsize=10, color="black")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#as before let's plot a north-south cross section
COL = 4
# recall we need to flip: MODFLOW's row order runs opposite to the plot's y-axis, so we reverse it and call the result Y
Y = np.flipud(HEAD[0,:,COL])
#for our cross section create X-coordinates to match with heads
XCOORD = np.arange(0, 11000, 500) + 250
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = 'cross section of head along Column = ({0})'.format(COL)
ax.set_title(TITLE)
ax.set_xlabel('y')
ax.set_ylabel('head')
ax.set_xlim(0, 11000.)
ax.set_ylim(980.,1020.)
ax.text(10480, 998, r"River", fontsize=10, color="blue",rotation='vertical')
ax.text(300, 998, r"Green Swamp", fontsize=10, color="green",rotation='vertical')
ax.text(5400,1007., r"Groundwater Divide", fontsize=10, color="black",rotation='vertical')
ax.plot(XCOORD, Y)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-lr', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
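# Example (illustrative only, echoing this notebook's model id):
# DOC.set_value("EC-Earth3-LR")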
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now suppose you need to perform lookups across two dictionaries (for example, look in a first and fall back to b if the key is not found). A very simple solution is the ChainMap class from the collections module. For example:
Step2: Discussion
Step3: If a key appears in more than one mapping, the value from the first mapping that contains it is returned. Therefore, c['z'] in the example program always returns the value from dictionary a, not the one from b.
Step4: ChainMap is very useful for working with scoped variables in a programming language (such as globals, locals, and so on). In fact, there are methods that make this easy:
Step5: As an alternative to ChainMap, you might consider merging the two dictionaries with the update() method. For example:
Step6: This also works, but it requires you to create a completely separate dictionary object (or to mutate one of the existing ones). Moreover, if the original dictionaries are updated later, the changes are not reflected in the merged dictionary. For example:
Step7: ChainMap uses the original dictionaries and does not create new ones itself, so it does not suffer from this problem. For example:
|
<ASSISTANT_TASK:>
Python Code:
a = {"x" : 1, "z" : 3}
b = {"y" : 2, "z" : 4}
from collections import ChainMap
c = ChainMap(a, b)
print(c["x"])
print(c["y"])
print(c["z"])
len(c)
list(c.keys())
list(c.values())
c["z"] = 10
c["w"] = 40
del c["x"]
a
del c["y"]
values = ChainMap()
values["x"] = 1
# Add a new mapping
values = values.new_child()
values["x"] = 2
# Add a new mapping
values = values.new_child()
values["x"] = 3
values
values["x"]
# Discard last mapping
values = values.parents
values["x"]
values = values.parents
values["x"]
values
a = {"x" : 1, "z" : 3}
b = {"y" : 2, "z" : 4}
merged = dict(b)
merged
merged.update(a)
merged
merged["x"]
merged["y"]
merged["z"]
a["x"] = 13
merged["x"]
a = {"x" : 1, "z" : 3}
b = {"y" : 2, "z" : 4}
merged = ChainMap(a, b)
merged
merged["x"]
a["x"] = 42
merged["x"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
Step2: Now, we estimate a set of xDAWN filters for the epochs (which contain only the vis_r condition).
Step3: Epochs are denoised by calling apply, which by default keeps only the signal subspace corresponding to the first n_components.
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>
#
# License: BSD (3-clause)
from mne import (io, compute_raw_covariance, read_events, pick_types, Epochs)
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.viz import plot_epochs_image
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(vis_r=4)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin') # replace baselining with high-pass
events = read_events(event_fname)
raw.info['bads'] = ['MEG 2443'] # set bad channels
picks = pick_types(raw.info, meg=True, eeg=False, stim=False, eog=False,
exclude='bads')
# Epoching
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Plot image epoch before xdawn
plot_epochs_image(epochs['vis_r'], picks=[230], vmin=-500, vmax=500)
# Estimates signal covariance
signal_cov = compute_raw_covariance(raw, picks=picks)
# Xdawn instance
xd = Xdawn(n_components=2, signal_cov=signal_cov)
# Fit xdawn
xd.fit(epochs)
epochs_denoised = xd.apply(epochs)
# Plot image epoch after Xdawn
plot_epochs_image(epochs_denoised['vis_r'], picks=[230], vmin=-500, vmax=500)
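# Extra sketch (not in the original example): the fitted spatial filters and
# patterns are stored per event id; attribute names assume MNE's Xdawn API.
print(xd.filters_['vis_r'].shape)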
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a meshed screen with a central hole
Step2: The screen is placed at the origin. A beam is assumed to propagate in the z direction.
Step3: define the timing
Step4: Write the file
Step5: Check the file after running the test
Step6: spectral distribution
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import constants
import scipy.integrate
import scipy.special as func
from MeshedFields import *
import pygmsh
Ra = 0.020
Ri = 0.002
lca = 0.003
lci = 0.0003
geom = pygmsh.built_in.Geometry()
# we create the initial geometry as a streched ellipse to create
# different scaling lengths (cell sizes) along the different axes
p1 = geom.add_point([Ra, 0.0, 0.0], lcar=lca)
p2 = geom.add_point([0.0, Ra, 0.0], lcar=lca)
p3 = geom.add_point([-Ra, 0.0, 0.0], lcar=lca)
p4 = geom.add_point([0.0, -Ra, 0.0], lcar=lca)
p1i = geom.add_point([Ri, 0.0, 0.0], lcar=lci)
p2i = geom.add_point([0.0, Ri, 0.0], lcar=lci)
p3i = geom.add_point([-Ri, 0.0, 0.0], lcar=lci)
p4i = geom.add_point([0.0, -Ri, 0.0], lcar=lci)
pc = geom.add_point([0.0, 0.0, 0.0])
pa = geom.add_point([1.0, 0.0, 0.0])
# the mesh is circumscribed with four elliptic arcs
e1 = geom.add_ellipse_arc(start=p1, center=pc, point_on_major_axis=pa, end=p2)
e2 = geom.add_ellipse_arc(start=p2, center=pc, point_on_major_axis=pa, end=p3)
e3 = geom.add_ellipse_arc(start=p3, center=pc, point_on_major_axis=pa, end=p4)
e4 = geom.add_ellipse_arc(start=p4, center=pc, point_on_major_axis=pa, end=p1)
# the hole is circumscribed with four elliptic arcs
e1i = geom.add_ellipse_arc(start=p1i, center=pc, point_on_major_axis=pa, end=p2i)
e2i = geom.add_ellipse_arc(start=p2i, center=pc, point_on_major_axis=pa, end=p3i)
e3i = geom.add_ellipse_arc(start=p3i, center=pc, point_on_major_axis=pa, end=p4i)
e4i = geom.add_ellipse_arc(start=p4i, center=pc, point_on_major_axis=pa, end=p1i)
# these are combined into a line loop
hole = geom.add_line_loop([e1i,e2i,e3i,e4i])
outline = geom.add_line_loop([e1,e2,e3,e4])
geom.add_plane_surface(outline,holes=[hole])
# now we can create the mesh
mesh = pygmsh.generate_mesh(geom, dim=2, verbose=False)
tris = mesh.cells['triangle']
screen = MeshedField(mesh.points,tris)
print("%d points" % len(screen.points))
print("%d triangles" % len(screen.triangles))
area = screen.MeshArea()
normals = screen.MeshNormals()
average = np.sum(normals, axis=0)/screen.Np
print("total mesh area = %7.3f cm²" % (1.0e4*np.sum(area)))
print("screen normal = %s" % average)
screen.ShowMeshedField(showAxes=True)
# time step
screen.dt = 2.0e-13
# all points use the same timing grid
screen.Nt = 500
screen.t0 = np.ones(screen.Np)*(-screen.Nt//2*screen.dt)
filename="../tests/DiffractionScreen.h5"
screen.WriteMeshedField(filename)
filename = "../tests/DiffractionScreenWithFields.h5"
computed = MeshedField.ReadMeshedField(filename)
print("%d points" % len(computed.points))
print("%d triangles" % len(computed.triangles))
area = computed.MeshArea()
normals = computed.MeshNormals()
average = np.sum(normals, axis=0)/computed.Np
print("total mesh area = %7.3f cm²" % (1.0e4*np.sum(area)))
print("screen normal = %s" % average)
area = computed.MeshArea()
S = [np.linalg.norm(computed.EnergyFlowVector(i)) for i in range(computed.Np)]
peak_index = np.argmax(S)
Pz = [computed.NormalEnergyFlow(i) for i in range(computed.Np)]
print("peak energy density = %.6f J/m² index=%d" % (S[peak_index],peak_index))
print("total pulse energy = %.3f µJ" % (1e6*np.dot(area,Pz)))
computed.ShowMeshedField(scalars=Pz,scalarTitle="Pz",highlight=[peak_index],showGrid=True)
computed.ShowFieldTrace(peak_index)
def pick(id):
if id>0 and id<computed.Np:
print("cell No. %d pos=%s" % (id,computed.pos[id]))
print("pointing vector S=%s" % computed.EnergyFlowVector(id))
computed.ShowFieldTrace(id)
computed.ShowMeshedField(scalars=Pz,scalarTitle="Pz",pickAction=pick,showGrid=False)
t = 3.34e-9
fields = computed.FieldsAtTime(t)
Ex = [f[0] for f in fields]
Ey = [f[1] for f in fields]
computed.ShowMeshedField(scalars=Ey,scalarTitle="Ey",showGrid=False,lut=phaseLUT())
Q = 100.0e-12
N_e = Q / constants.e
N_e_sq = N_e * N_e
γ = 20
β = np.sqrt(1-1/(γ*γ))
ugf_0 = np.power(constants.e,2) /( 4*np.power(np.pi,3)*constants.epsilon_0*constants.c)
def d2Ugf(β,Θ):
return ugf_0 * (np.power(β,2)*np.power(np.sin(Θ),2))/np.power(1-np.power(β,2)*np.power(np.cos(Θ),2),2)
def T(ω,Θ,a):
arg1 = ω*a/(β*γ*constants.c)
arg2 = ω*a*np.sin(Θ)/constants.c
arg3 = ω*a/(constants.c*np.power(β*γ,2)*np.sin(Θ))
return arg1*func.j0(arg2)*func.k1(arg1)+arg3*func.j1(arg2)*func.k0(arg1)
def TT(ω,Θ,a1,a2):
return np.square(T(ω,Θ,a1)-T(ω,Θ,a2))
def angular_integral(ω_i,r_i,r_a):
def dUdωdΘ(Θ):
return d2Ugf(β,Θ)*2*np.pi*np.sin(Θ)
def spect_integrand(Θ):
return dUdωdΘ(Θ)*TT(ω_i,Θ,r_i,r_a)
integral = scipy.integrate.quadrature(spect_integrand,0,np.pi/2,tol=1e-40,rtol=1e-6,miniter=20,maxiter=400)
return integral[0]
f_th = np.power(10.0,np.linspace(-2.0,1.0,300))
th1 = np.array([angular_integral(2*np.pi*f*1e12,0.002,0.030) for f in f_th])*2*np.pi*1e9
E_opt = 1000*N_e_sq * scipy.integrate.trapz(th1,f_th)
print("total pulse energy (100pC) = %g µJ" % E_opt)
nele = Q/constants.e
nots = computed.Nt
nf = nots
dt = computed.dt
df = 1.0/dt/nf
fmax = 1.0/dt
f = np.linspace(0.0,fmax,nf)[:nots//2]
spectrum = np.zeros(nots//2)
for index in range(computed.Np):
trace = computed.A[index]
data = trace.transpose()
Ex = data[0]
Ey = data[1]
Ez = data[2]
Bx = data[3]
By = data[4]
Bz = data[5]
spectEx = np.fft.fft(Ex)[:nots//2]
spectEy = np.fft.fft(Ey)[:nots//2]
spectEz = np.fft.fft(Ez)[:nots//2]
spectBx = np.fft.fft(Bx)[:nots//2]
spectBy = np.fft.fft(By)[:nots//2]
spectBz = np.fft.fft(Bz)[:nots//2]
amp = np.real(spectEx*np.conj(spectBy)-spectEy*np.conj(spectBx))/constants.mu_0*2*dt/(df*nf)
spectrum += amp*area[index]
E_opt = scipy.integrate.trapz(spectrum,f)
print("total pulse energy = %g µJ" % (1e6*E_opt))
plt.rcParams["figure.figsize"] = (12,8)
plt.rcParams["font.family"] = 'serif'
plt.rcParams["font.size"] = 20
plt.rcParams['axes.labelsize'] = 'large'
fig, ax = plt.subplots()
ax.semilogx(1e-12*f,1e9*spectrum/(nele*nele)/constants.e,marker='.',markersize=10.0,linestyle='None')
ax.semilogx(f_th,th1/constants.e)
ax.set_xlabel('f [THz]')
ax.set_ylabel('dW/dω [eV/GHz]')
ax.grid()
plt.show()
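# Extra sketch: locate the peak of the spectrum computed above
# (f and spectrum are the arrays that were just plotted).
f_peak = f[np.argmax(spectrum)]
print("spectral peak at %.3f THz" % (1e-12*f_peak))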
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure you can read it back in!
|
<ASSISTANT_TASK:>
Python Code:
# improve on the "stopword" filters here
#
# :-) (ask me about a smilie lexicon)
# not-so-simple words? (ask me about a regex for compound words)
# python variable names with underscores? (regex)
f = os.path.join(DATA_PATH, 'text.csv.gz')
df.to_csv(f, encoding='utf8', compression='gzip', quoting=pd.io.common.csv.QUOTE_NONNUMERIC)
import gzip
with gzip.open(os.path.join(DATA_PATH, 'text.csv.gz'), 'rb') as f:
df = pd.read_csv(f)
df = pd.DataFrame.from_csv(os.path.join(DATA_PATH, 'text.csv.gz'))
df
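# Round-trip sanity check (sketch): the re-read frame should match what was
# written, up to index/dtype details; DATA_PATH is assumed from the setup above.
path = os.path.join(DATA_PATH, 'text.csv.gz')
df2 = pd.read_csv(path, compression='gzip', index_col=0)
df.equals(df2)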
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's simulate some data to use throughout examples here. The number of observations isn't too important. We keep it low to understand what is going on more easily. However, we do include both numeric and categoric variables to see how formulae handles these types in different scenarios.
Step2: Creating and accessing design matrices
Step3: Under dm.common we find an object of class CommonEffectsMatrix which stores the design matrix for the common part of the model, as well as other information about the terms in the matrix.
Step4: Use .design_matrix to obtain the design matrix for the common part of the model.
Step5: Or np.array(dm.common)
Step6: Or call the .as_dataframe() method to return a data frame.
Step7: Note the response term can be omitted and we still obtain the design matrix for the common predictors
Step8: Categorical predictors
Step9: But if we don't have an intercept term, there's no need to use reference encoding for the categorical variables to avoid linear dependencies between the columns and formulae uses cell-means encoding.
Step10: Suppose that z actually represents a certain categorization represented by numbers. We can use "y1 ~ C(z)" and z will be internally interpreted as categorical instead of numeric.
Step11: By default, the C() wrapper takes the first level as the reference level (after being sorted with sorted()), but we can override this by passing a list with a custom ordering.
Step12: Finally, if we want to convert one variable into a binary variable, we can use the internal binary() function
Step13: which also works for categorical variables
Step14: Adding interactions
Step15: The * operator is known as the full interaction operator because x*z is equivalent to x + z + x:z.
Step16: And both interaction operators can be used with mixtures of numerical and categorical variables.
Step17: Function calls
Step18: We can also use our own custom functions
Step19: Or even nested function calls!
Step20: Built-in transformations
Step21: These transformations are known as stateful transformations because they remember the original values of parameters involved in the transformation (the mean and the standard deviation). This feature is critical when one wants to derive a design matrix based on new data, as required when evaluating the fit of a model on a new dataset.
Step22: "y1 ~ -1 + x" and "y1 ~ x - 1" are equivalent alternatives.
Step23: As you may have noticed, the output shows I() instead of {}. That's because {} is translated into I() internally.
Step24: The .design_vector array is a dummy based design matrix where each column represents a level of the response
Step25: It is also possible to override the default behavior and indicate the reference level with some syntax sugar. If you say response[level], formulae will interpret response as a binary variable and set the 'level' category as reference. For example
Step26: If it is the case that the reference level contains spaces, we can wrap the response within quotes
Step27: As with the predictors, the binary() function can be used on the LHS as well
Step28: Note binary() outputs a numeric variable, so the response is recognized as numeric instead of categoric.
Step29: If we now use (x|g) we get both the group-specific slope and the group-specific intercept, because the intercept is included by default.
Step30: We can access the different sub-matrices corresponding to each term as follows
Step31: But as with the common part, this default intercept can be removed
Step32: As a final remark, we note that the expr part of a group-specific can also be a categorical variable or an interaction. Similar rules apply for the factor part.
Step33: And now suppose we have new data for a set of new subjects. We can derive both common and group-specific design matrices for these new individuals without having to specify the model again. We can instead use the .evaluate_new_data() method. This method takes a data frame and returns either a new CommonEffectsMatrix or a new GroupEffectsMatrix depending on which object the method was called.
Step34: And if we explore the design matrix for the common terms we notice that the scaling of the new data used the mean and standard deviation of x from the first dataset.
Step35: Something similar can be done with the group-specific part.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from formulae import design_matrices
rng = np.random.default_rng(7355608)
SIZE = 10
data = pd.DataFrame(
{
"y1": rng.normal(size=SIZE),
"y2": rng.choice(["A", "B", "C"], size=SIZE),
"x": rng.normal(size=SIZE),
"z": rng.choice([1, 2, 3], size=SIZE),
"g": rng.choice(["Group 1", "Group 2", "Group 3"], size=SIZE),
}
)
data
dm = design_matrices("y1 ~ x", data)
dm.common
dm.common.design_matrix
np.array(dm.common)
dm.common.as_dataframe().head()
dm = design_matrices("x", data)
dm.common.as_dataframe().head()
dm = design_matrices("g", data)
dm.common.as_dataframe().head()
dm = design_matrices("0 + g", data)
dm.common.as_dataframe().head()
dm = design_matrices("C(z)", data)
dm.common.as_dataframe().head()
# 2 is taken as baseline, and then it comes 3 and 1.
lvl = [2, 3, 1]
dm = design_matrices("C(z, levels=lvl)", data)
dm.common.as_dataframe().head()
dm = design_matrices("binary(z, 2)", data)
dm.common.as_dataframe().head()
dm = design_matrices("binary(g, 'Group 1')", data)
dm.common.as_dataframe().head()
dm = design_matrices("x:z", data)
dm.common.as_dataframe().head()
dm = design_matrices("x*z", data)
dm.common.as_dataframe().head()
dm = design_matrices("0 + x*g", data)
dm.common.as_dataframe().head()
dm = design_matrices("np.exp(x)", data)
dm.common.as_dataframe().head()
def add(x, how_much):
return x + how_much
dm = design_matrices("add(x, 10)", data)
dm.common.as_dataframe().head()
dm = design_matrices("add(np.exp(x), 5)", data)
dm.common.as_dataframe().head()
dm = design_matrices("scale(x)", data)
dm.common.as_dataframe().head()
# check mean is 0 and std is 1
print(np.std(dm.common.as_dataframe()["scale(x)"]))
print(np.mean(dm.common.as_dataframe()["scale(x)"]))
dm = design_matrices("bs(x, df=6)", data)
dm.common.as_dataframe()
dm = design_matrices("0 + x", data)
dm.common.as_dataframe().head()
dm = design_matrices("I(x + z)", data)
dm.common.as_dataframe().head()
dm = design_matrices("{x / z}", data)
dm.common.as_dataframe().head()
dm = design_matrices("y2 ~ x", data)
dm.response
dm.response.design_vector
np.array(dm.response)
dm = design_matrices("y2[B] ~ x", data)
dm.response
# 'y2' is equal to B in four cases
np.array(dm.response)
dm = design_matrices("g['Group 3'] ~ x", data)
dm.response
dm = design_matrices("binary(g, 'Group 3') ~ x", data)
dm.response
dm = design_matrices("y1 ~ x + (1|g)", data)
dm.group
dm.common.as_dataframe()
np.array(dm.group)
dm = design_matrices("y1 ~ x + (x|g)", data)
dm.group
dm.group["1|g"] # same as before
dm.group["x|g"]
dm = design_matrices("y1 ~ x + (0 + x|g)", data)
dm.group # note there is no 1|x in the terms
data = pd.DataFrame({
"y": rng.normal(size=50),
"x": rng.normal(loc=200, scale=20, size=50),
"g": rng.choice(["A", "B", "C"], size=50)
})
dm = design_matrices("y ~ scale(x) + (scale(x)|g)", data)
print(dm.common)
print(dm.group)
data2 = pd.DataFrame({
"x": rng.normal(loc=180, scale=20,size=10),
"g": rng.choice(["A", "B", "C"], size=10)
})
common_new = dm.common.evaluate_new_data(data2)
common_new
common_new.as_dataframe()
group_new = dm.group.evaluate_new_data(data2)
group_new
group_new["scale(x)|g"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can optionally save the parameters in the directory variable 'model_prefix'. We first create data iterators for MXNet, with each batch of data containing 100 images.
Step2: 2. Building the Network Architecture
Step3: 2.2 Bernoulli MLP as decoder
Step4: 2.3 Joint Loss Function for the Encoder and the Decoder
Step6: 3. Training the model
Step7: As expected, the ELBO is monotonically increasing over epochs, and we reproduced the results given in the paper Auto-Encoding Variational Bayes. Now we can extract/load the parameters and then feed the network forward to calculate $y$, the reconstructed image, and we can also calculate the ELBO for the test set.
Step8: 4. All together
Step9: One can directly call the class VAE to do the training
|
<ASSISTANT_TASK:>
Python Code:
import logging
import numpy as np
import mxnet as mx
from matplotlib import pyplot as plt
from matplotlib import cm
mnist = mx.test_utils.get_mnist()
image = np.reshape(mnist['train_data'],(60000,28*28))
label = image
image_test = np.reshape(mnist['test_data'],(10000,28*28))
label_test = image_test
[N,features] = np.shape(image) #number of examples and features
f, (ax1, ax2, ax3, ax4) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.imshow(np.reshape(image[1,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.imshow(np.reshape(image[2,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.imshow(np.reshape(image[3,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
plt.show()
model_prefix = None
batch_size = 100
nd_iter = mx.io.NDArrayIter(data={'data':image},label={'loss_label':label},
batch_size = batch_size)
nd_iter_test = mx.io.NDArrayIter(data={'data':image_test},label={'loss_label':label_test},
batch_size = batch_size)
## define data and loss labels as symbols
data = mx.sym.var('data')
loss_label = mx.sym.var('loss_label')
## define fully connected and activation layers for the encoder, where we used tanh activation function.
encoder_h = mx.sym.FullyConnected(data=data, name="encoder_h",num_hidden=400)
act_h = mx.sym.Activation(data=encoder_h, act_type="tanh",name="activation_h")
## define mu and log variance which are the fully connected layers of the previous activation layer
mu = mx.sym.FullyConnected(data=act_h, name="mu",num_hidden = 5)
logvar = mx.sym.FullyConnected(data=act_h, name="logvar",num_hidden = 5)
## sample the latent variables z according to Normal(mu,var)
z = mu + np.multiply(mx.symbol.exp(0.5 * logvar),
mx.symbol.random_normal(loc=0, scale=1, shape=np.shape(logvar.get_internals()["logvar_output"])))
# define fully connected and tanh activation layers for the decoder
decoder_z = mx.sym.FullyConnected(data=z, name="decoder_z",num_hidden=400)
act_z = mx.sym.Activation(data=decoder_z, act_type="tanh",name="activation_z")
# define the output layer with sigmoid activation function, where the dimension is equal to the input dimension
decoder_x = mx.sym.FullyConnected(data=act_z, name="decoder_x",num_hidden=features)
y = mx.sym.Activation(data=decoder_x, act_type="sigmoid",name='activation_x')
# define the objective loss function that needs to be minimized
KL = 0.5*mx.symbol.sum(1+logvar-pow( mu,2)-mx.symbol.exp(logvar),axis=1)
loss = -mx.symbol.sum(mx.symbol.broadcast_mul(loss_label,mx.symbol.log(y))
+ mx.symbol.broadcast_mul(1-loss_label,mx.symbol.log(1-y)),axis=1)-KL
output = mx.symbol.MakeLoss(sum(loss),name='loss')
# set up the log
nd_iter.reset()
logging.getLogger().setLevel(logging.DEBUG)
# define function to trace back training loss
def log_to_list(period, lst):
def _callback(param):
"""The checkpoint function."""
if param.nbatch % period == 0:
name, value = param.eval_metric.get()
lst.append(value)
return _callback
# define the model
model = mx.mod.Module(
symbol = output ,
data_names=['data'],
label_names = ['loss_label'])
# training the model, save training loss as a list.
training_loss=list()
# initialize the parameters for training using Normal.
init = mx.init.Normal(0.01)
model.fit(nd_iter, # train data
initializer=init,
# if eval_data is supplied, test loss will also be reported
# eval_data = nd_iter_test,
optimizer='sgd', # use SGD to train
optimizer_params={'learning_rate':1e-3,'wd':1e-2},
# save parameters for each epoch if model_prefix is supplied
epoch_end_callback = None if model_prefix==None else mx.callback.do_checkpoint(model_prefix, 1),
batch_end_callback = log_to_list(N/batch_size,training_loss),
num_epoch=100,
eval_metric = 'Loss')
ELBO = [-training_loss[i] for i in range(len(training_loss))]
plt.plot(ELBO)
plt.ylabel('ELBO');plt.xlabel('epoch');plt.title("training curve for mini batches")
plt.show()
arg_params = model.get_params()[0]
# if saved the parameters, can load them using `load_checkpoint` method at e.g. 100th epoch
# sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, 100)
# assert sym.tojson() == output.tojson()
e = y.bind(mx.cpu(), {'data': nd_iter_test.data[0][1],
'encoder_h_weight': arg_params['encoder_h_weight'],
'encoder_h_bias': arg_params['encoder_h_bias'],
'mu_weight': arg_params['mu_weight'],
'mu_bias': arg_params['mu_bias'],
'logvar_weight':arg_params['logvar_weight'],
'logvar_bias':arg_params['logvar_bias'],
'decoder_z_weight':arg_params['decoder_z_weight'],
'decoder_z_bias':arg_params['decoder_z_bias'],
'decoder_x_weight':arg_params['decoder_x_weight'],
'decoder_x_bias':arg_params['decoder_x_bias'],
'loss_label':label})
x_fit = e.forward()
x_construction = x_fit[0].asnumpy()
# learning images on the test set
f, ((ax1, ax2, ax3, ax4)) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image_test[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax1.set_title('True image')
ax2.imshow(np.reshape(x_construction[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.set_title('Learned image')
ax3.imshow(np.reshape(x_construction[999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.set_title('Learned image')
ax4.imshow(np.reshape(x_construction[9999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.set_title('Learned image')
plt.show()
# calculate the ELBO which is minus the loss for test set
metric = mx.metric.Loss()
model.score(nd_iter_test, metric)
from VAE import VAE
# can initialize weights and biases with the learned parameters as follows:
# init = mx.initializer.Load(params)
# call the VAE, output model contains the learned model and training loss
out = VAE(n_latent=2, x_train=image, x_valid=None, num_epoch=200)
# encode test images to obtain mu and logvar which are used for sampling
[mu,logvar] = VAE.encoder(out,image_test)
# sample in the latent space
z = VAE.sampler(mu,logvar)
# decode from the latent space to obtain reconstructed images
x_construction = VAE.decoder(out,z)
f, ((ax1, ax2, ax3, ax4)) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image_test[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax1.set_title('True image')
ax2.imshow(np.reshape(x_construction[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.set_title('Learned image')
ax3.imshow(np.reshape(x_construction[999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.set_title('Learned image')
ax4.imshow(np.reshape(x_construction[9999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.set_title('Learned image')
plt.show()
z1 = z[:,0]
z2 = z[:,1]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z1,z2,'ko')
plt.title("latent space")
#np.where((z1>3) & (z2<2) & (z2>0))
#select the points from the latent space
a_vec = [2,5,7,789,25,9993]
for i in range(len(a_vec)):
ax.plot(z1[a_vec[i]],z2[a_vec[i]],'ro')
ax.annotate('z%d' %i, xy=(z1[a_vec[i]],z2[a_vec[i]]),
xytext=(z1[a_vec[i]],z2[a_vec[i]]),color = 'r',fontsize=15)
f, ((ax0, ax1, ax2, ax3, ax4,ax5)) = plt.subplots(1,6, sharex='col', sharey='row',figsize=(12,2.5))
for i in range(len(a_vec)):
eval('ax%d' %(i)).imshow(np.reshape(x_construction[a_vec[i],:],(28,28)), interpolation='nearest', cmap=cm.Greys)
eval('ax%d' %(i)).set_title('z%d'%i)
plt.show()
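# Extra sketch: generate new digits by decoding draws from the prior N(0, I);
# n_latent=2 matches the model fitted above, and VAE.decoder is the helper
# already used earlier in this notebook.
z_prior = np.random.randn(4, 2)
x_gen = VAE.decoder(out, z_prior)
print(x_gen.shape)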
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute daily flakiness of kubeflow presubmit tests
Step2: Daily flake rate of all presubmit tests over time
Step3: Daily build and commit consistency of all presubmit tests
Step4: Flakiness per job
Step6: Overal failure rate per test type
Step8: Overal failure rate per workflow phase tests
Step13: Flakiness per junit tests
|
<ASSISTANT_TASK:>
Python Code:
%load_ext google.cloud.bigquery
%%bigquery daily_flakiness
select
job,
start_date,
round(sum(if(flaked=1,passed,runs))/sum(runs),3) build_consistency,
round(1-sum(flaked)/count(distinct commit),3) commit_consistency,
round (sum(flaked)/count(distinct commit),3) flake_rate,
sum(flaked) flakes,
sum(runs) runs,
sum(passed) passed,
sum(flaky_runs) flaky_runs,
sum(failed) failed
from ( /* Determine whether a (job, pr-num, commit) flaked */
select
job,
start_date,
num,
commit,
if(passed = runs or passed = 0, 0, 1) flaked,
if(passed = runs or passed = 0, 0, runs-passed) flaky_runs,
if(passed = 0, runs, 0) failed,
passed,
CAST(runs as INT64) runs
from (
select /* Count the runs and passes for each (job, pr-num, commit) */
max(start_date) start_date,
num,
commit,
sum(if(result='SUCCESS',1,0)) passed,
count(result) runs,
job
from (
SELECT /* all runs of any job for the past four weeks (672 hours), noting the commit and whether it passed */
job,
regexp_extract(path, r'/(\d+)\/') as num, /* pr number */
regexp_extract(m.value, r'[^,]+,\d+:([a-f0-9]+)"') commit, /* extract the first commit id from the repo flag */
EXTRACT(DATE FROM started) start_date,
result
FROM `k8s-gubernator.build.all` , UNNEST(metadata) as m
where
started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 672 HOUR)
and (m.key = "repos") and STRPOS(job,'kubeflow') > 0 and STRPOS(job,'pr:') > 0
)
group by job, num, commit
)
)
group by job, start_date
order by
flakes desc,
build_consistency,
commit_consistency,
job
import pandas as pd
overal_flakes = pd.DataFrame(daily_flakiness).groupby("start_date",as_index=False).agg(
{ 'flake_rate':'mean',
'flakes' :'sum',
'runs' : 'sum'
})
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from datetime import datetime
matplotlib.rc('font', size=16)
ax=overal_flakes.flake_rate.plot(xticks=overal_flakes.index,figsize=(14,8), rot=45)
plt.title('Daily flake rate')
plt.xlabel('time')
plt.ylabel('flake_rate')
ax.set_xticklabels(overal_flakes['start_date'])
plt.ylim([0,1])
plt.show()
import pandas as pd
overal_consistency = pd.DataFrame(daily_flakiness).groupby("start_date",as_index=False).agg(
{ 'build_consistency':'mean',
'commit_consistency' :'mean'
})
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from datetime import datetime
matplotlib.rc('font', size=16)
ax1=overal_consistency.build_consistency.plot(xticks=overal_consistency.index,figsize=(14,8), rot=45)
ax2=overal_consistency.commit_consistency.plot(xticks=overal_consistency.index,figsize=(14,8), rot=45)
plt.title('Daily build and commit consistency')
plt.xlabel('time')
plt.ylabel('percentage of consistency')
ax1.set_xticklabels(overal_consistency['start_date'])
plt.legend()
plt.ylim([0,1])
plt.show()
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
job_flakiness=pd.DataFrame(daily_flakiness).groupby("job",as_index=False).agg(
{ 'passed':'sum',
'failed' :'sum',
'flaky_runs' : 'sum'
})
matplotlib.rc('font', size=16)
ax=job_flakiness[['passed','failed','flaky_runs']].plot(kind='bar', stacked=True, xticks=job_flakiness.index, figsize=(14,8), rot=90)
ax.set_xticklabels(job_flakiness['job'])
plt.title('job flakines')
plt.xlabel('job')
plt.ylabel('number of runs')
plt.show()
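# Extra summary (sketch): flaky runs as a share of each job's total runs;
# per the query above, passed + failed + flaky_runs adds up to all runs.
job_flakiness['flake_share'] = job_flakiness['flaky_runs'] / (
    job_flakiness['passed'] + job_flakiness['failed'] + job_flakiness['flaky_runs'])
job_flakiness[['job', 'flake_share']].sort_values('flake_share', ascending=False).head()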
%%bigquery teststats
CREATE TEMP FUNCTION removeFirstChar(x STRING)
RETURNS STRING
LANGUAGE js AS """
if (x.charAt(0) == '-') {
x=x.substr(1);
}
return x;
""";
SELECT testname,runs,failures,filurepercentage
FROM(
SELECT
removeFirstChar(t.name) testname,
SUM(CASE WHEN t.failed=TRUE THEN 1 ELSE 0 END) failures,
COUNT(*) runs,
ROUND(SUM(CASE WHEN t.failed=TRUE THEN 1 ELSE 0 END)/COUNT(*)*100, 2) filurepercentage
FROM
`k8s-gubernator.build.all`, UNNEST(test) as t
WHERE
job LIKE '%kubeflow-presubmit%'
GROUP BY
testname
) WHERE testname not LIKE '%kubeflow-presubmit%'
order by filurepercentage DESC
%%bigquery teststats
CREATE TEMP FUNCTION getWorkflowTestName(x STRING)
RETURNS STRING
LANGUAGE js AS """
var r=/\\d/;
var y=x.replace("-e2e","-endtoend");
var fd=r.exec(y);
y=y.substring(0, y.indexOf(fd) - 1);
y=y.replace("-endtoend","-e2e");
return y;
""";
Select
testname,
SUM(CASE WHEN mvalue ="Succeeded" THEN 0 ELSE 1 END) failures,
COUNT(*) runs,
ROUND(SUM(CASE WHEN mvalue ="Succeeded" THEN 0 ELSE 1 END)/COUNT(*)*100, 2) filurepercentage
From(
Select getWorkflowTestName(mkey) testname,mvalue
FROM(
SELECT m.key mkey, m.value mvalue
FROM
`k8s-gubernator.build.all`,UNNEST(metadata) as m
WHERE
job LIKE '%kubeflow-presubmit%' and ENDS_WITH(m.key, "-phase")
)
)
GROUP BY testname
ORDER BY filurepercentage DESC
%%bigquery testkfisready
CREATE TEMP FUNCTION removeFirstChar(x STRING)
RETURNS STRING
LANGUAGE js AS """
if (x.charAt(0) == '-') {
x=x.substr(1);
}
return x;
""";
SELECT
testname,
job,
runs,
test_failed,
test_passed,
job_passed,
job_failed,
num,
path,
commit,
test_elapsed_time
FROM(
SELECT
testname,
count(*) runs,
sum(job_passed) job_passed,
count(*) - sum(job_passed) job_failed,
max(start_date) start_date,
sum(test_failed) test_failed,
sum(test_passed) test_passed,
num,
array_agg(path) path,
commit,
job,
array_agg(test_elapsed_time) test_elapsed_time
FROM(
SELECT /* collect stats per (commit, testname)*/
t.name testname,
CASE WHEN t.failed=TRUE THEN 1 ELSE 0 END test_failed,
CASE WHEN t.failed=TRUE THEN 0 ELSE 1 END test_passed,
job_passed,
t.time test_elapsed_time,
num,
path path,
job,
start_date start_date,
regexp_extract(commit, r'[^,]+,\d+:([a-f0-9]+)"') commit /* extract the first commit id from the repo flag */
FROM( /* collect kubeflow commit rows */
SELECT
path,
m.value commit,
test,
job,
EXTRACT(DATE FROM started) start_date,
regexp_extract(path, r'/(\d+)\/') as num, /* pr number */
CASE WHEN result='SUCCESS' THEN 1 ELSE 0 END job_passed
FROM
`k8s-gubernator.build.all`, UNNEST(metadata) as m
WHERE
job LIKE '%kubeflow-presubmit%'
and started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 700 HOUR)
and (m.key = "repos") and STRPOS(job,'kubeflow') > 0 and STRPOS(job,'pr:') > 0
), UNNEST(test) as t
where t.name not LIKE '%kubeflow-presubmit%'
)
GROUP BY testname,commit,num,job
) where job_passed>0 and job_failed>0 and test_failed>0
import pandas as pd
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(testkfisready)
%%bigquery testkfisready
CREATE TEMP FUNCTION getWorkflowTestName(x STRING)
RETURNS STRING
LANGUAGE js AS
var r=/\\d/;
var y=x.replace("-e2e","-endtoend");
var fd=r.exec(y);
y=y.substring(0, y.indexOf(fd) - 1);
y=y.replace("-endtoend","-e2e");
return y;
;
SELECT
testname,
runs,
wf_passed,
wf_failed,
passed,
failed,
start_date,
num,
job,
commit,
path,
IF(failed>0 and passed>0 and wf_failed>0, 1,0) wf_flake,
if(failed>0 and passed> 0, 1,0) flake
FROM(
SELECT
testname,
count(*) runs,
sum(wf_passed) wf_passed,
sum(wf_failed) wf_failed,
sum(passed) passed,
sum(failed) failed,
max(start_date) start_date,
num,
array_agg(path) path,
job,
commit
FROM(
SELECT
getWorkflowTestName(m.key) testname,
IF(m.value="Succeeded", 1, 0) wf_passed,
IF(m.value="Succeeded", 0, 1) wf_failed,
regexp_extract(commit, r'[^,]+,\d+:([a-f0-9]+)"') commit,
job,
path,
start_date,
num,
passed,
failed
FROM(
SELECT
path,
test,
job,
metadata,
EXTRACT(DATE FROM started) start_date,
regexp_extract(path, r'/(\d+)\/') as num, /* pr number */
CASE WHEN result='SUCCESS' THEN 1 ELSE 0 END passed,
CASE WHEN result='SUCCESS' THEN 0 ELSE 1 END failed,
(SELECT m.value From UNNEST(metadata) as m where m.key = "repos") as commit
FROM
`k8s-gubernator.build.all`
WHERE
job LIKE '%kubeflow-presubmit%'
and started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 168 HOUR)
and STRPOS(job,'kubeflow') > 0 and STRPOS(job,'pr:') > 0
), UNNEST(metadata) as m
where
job LIKE '%kubeflow-presubmit%' and ENDS_WITH(m.key, "-phase")
)
group by testname,job,commit,num
)WHERE failed>0 and passed>0 and wf_failed>0
%%bigquery testkfisready
CREATE TEMP FUNCTION getWorkflowTestName(x STRING)
RETURNS STRING
LANGUAGE js AS """
var r=/\\d/;
var y=x.replace("-e2e","-endtoend");
var fd=r.exec(y);
y=y.substring(0, y.indexOf(fd) - 1);
y=y.replace("-endtoend","-e2e");
return y;
""";
SELECT
testname,
sum(runs) runs,
sum(passed) passed,
sum(failed) failed,
sum(wf_passed) wf_passed,
sum(wf_failed) wf_failed,
sum(if(wf_failed>0 and flake>0,1,0)) wf_flakes,
sum(flake) flakes
FROM(
SELECT
testname,
runs,
wf_passed,
wf_failed,
passed,
failed,
start_date,
num,
job,
commit,
path,
if(failed>0 and passed> 0, 1,0) flake
FROM(
SELECT
testname,
count(*) runs,
sum(wf_passed) wf_passed,
sum(wf_failed) wf_failed,
sum(passed) passed,
sum(failed) failed,
max(start_date) start_date,
num,
array_agg(path) path,
job,
commit
FROM(
SELECT
getWorkflowTestName(m.key) testname,
IF(m.value="Succeeded", 1, 0) wf_passed,
IF(m.value="Succeeded", 0, 1) wf_failed,
regexp_extract(commit, r'[^,]+,\d+:([a-f0-9]+)"') commit,
job,
path,
start_date,
num,
passed,
failed
FROM(
SELECT
path,
test,
job,
metadata,
EXTRACT(DATE FROM started) start_date,
regexp_extract(path, r'/(\d+)\/') as num, /* pr number */
CASE WHEN result='SUCCESS' THEN 1 ELSE 0 END passed,
CASE WHEN result='SUCCESS' THEN 0 ELSE 1 END failed,
(SELECT m.value From UNNEST(metadata) as m where m.key = "repos") as commit
FROM
`k8s-gubernator.build.all`
WHERE
job LIKE '%kubeflow-presubmit%'
and started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 600 HOUR)
and STRPOS(job,'kubeflow') > 0 and STRPOS(job,'pr:') > 0
and regexp_extract(path, r'/(\d+)\/') is not null
), UNNEST(metadata) as m
where
job LIKE '%kubeflow-presubmit%' and ENDS_WITH(m.key, "-phase")
)
group by testname,job,commit,num
)WHERE failed>0 and passed>0 and wf_failed>0
) group by testname
order by wf_flakes DESC
%%bigquery testkfisready
CREATE TEMP FUNCTION getWorkflowTestName(x STRING)
RETURNS STRING
LANGUAGE js AS """
var r=/\\d/;
var y=x.replace("-e2e","-endtoend");
var fd=r.exec(y);
y=y.substring(0, y.indexOf(fd) - 1);
y=y.replace("-endtoend","-e2e");
return y;
""";
SELECT
testname,
sum(runs) runs,
sum(passed) passed,
sum(failed) failed,
sum(wf_passed) wf_passed,
sum(wf_failed) wf_failed,
sum(if(wf_failed>0 and flake>0,1,0)) wf_flakes,
sum(flake) flakes,
array_agg(num) num,
array_agg(commit) co
FROM(
SELECT
testname,
runs,
wf_passed,
wf_failed,
passed,
failed,
start_date,
num,
job,
commit,
path,
if(failed>0 and passed> 0, 1,0) flake
FROM(
SELECT
testname,
count(*) runs,
sum(wf_passed) wf_passed,
sum(wf_failed) wf_failed,
sum(passed) passed,
sum(failed) failed,
max(start_date) start_date,
num,
array_agg(path) path,
job,
commit
FROM(
SELECT
getWorkflowTestName(m.key) testname,
IF(m.value="Succeeded", 1, 0) wf_passed,
IF(m.value="Succeeded", 0, 1) wf_failed,
regexp_extract(commit, r'[^,]+,\d+:([a-f0-9]+)"') commit,
job,
path,
start_date,
num,
passed,
failed
FROM(
SELECT
path,
test,
job,
metadata,
EXTRACT(DATE FROM started) start_date,
regexp_extract(path, r'/(\d+)\/') as num, /* pr number */
CASE WHEN result='SUCCESS' THEN 1 ELSE 0 END passed,
CASE WHEN result='SUCCESS' THEN 0 ELSE 1 END failed,
(SELECT m.value From UNNEST(metadata) as m where m.key = "repos") as commit
FROM
`k8s-gubernator.build.all`
WHERE
job LIKE '%kubeflow-presubmit%'
and started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 600 HOUR)
and STRPOS(job,'kubeflow') > 0 and STRPOS(job,'pr:') > 0
and regexp_extract(path, r'/(\d+)\/') is not null
), UNNEST(metadata) as m
where
job LIKE '%kubeflow-presubmit%' and ENDS_WITH(m.key, "-phase")
)
group by testname,job,commit,num
)WHERE failed>0 and passed>0 and wf_failed>0 and commit is not null and num is not null
) group by testname
order by wf_flakes DESC
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Detect Onsets
Step2: Load a fictional reference annotation.
Step3: Plot the estimated and reference onsets together.
Step4: Evaluate
Step5: Out of a possible 8 reference onsets, 7 estimated onsets matched, i.e. recall = 7/8 = 0.875.
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import librosa
import librosa.display
import mir_eval
import IPython.display as ipd
from matplotlib import pyplot as plt
y, sr = librosa.load('audio/simple_piano.wav')
ipd.Audio(y, rate=sr)
est_onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
est_onsets
ref_onsets = numpy.array([0, 0.270, 0.510, 1.02,
1.50, 2.02, 2.53, 3.01])
librosa.display.waveplot(y, sr=sr, alpha=0.5)
plt.vlines(est_onsets, -1, 1, color='r')
plt.scatter(ref_onsets, numpy.zeros_like(ref_onsets), color='k', s=100)
plt.legend(['Waveform', 'Estimated onsets', 'Reference onsets']);
mir_eval.onset.evaluate(ref_onsets, est_onsets)
mir_eval.onset.evaluate(ref_onsets, est_onsets, window=0.002)
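# Extra sketch: mir_eval also exposes the F-measure directly.
fm, precision, recall = mir_eval.onset.f_measure(ref_onsets, est_onsets, window=0.05)
print(fm, precision, recall)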
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overview
Step2: Problem
Step3: Now that our data is in an accessible format, let's try and get it into something we can do math with. Copy over the data from our dictionary into numpy arrays called x and y. Then plot the results.
Step4: Extra challenge
Step5: Test out a few lines and see what the loss function is. Combined with the plot above, see if you can get a reasonable fit to the data.
Step6: Finding the Best-Fit Line
Step7: For now, we'll minimize our loss function using the minimize package contained within scipy.optimize. Using the documentation, see if you can figure out how to use minimize to get the best-fit parameters theta based on our loss function loss.
Step8: Extra challenge
Step9: Challenge
Step10: Using your function, compute the number of goals needed to get to the finals and plot the distribution of the results as a histogram using plt.hist with Nbins=50 bins.
Step11: Challenge
Step12: Now, using the code from earlier and a for loop, resample the data and recompute the best-fit line Nmc=10000 times.
Step13: Once you're done, plot the data as a 2-D histogram as before.
Step14: Finally, re-compute the number of goals needed to get to the finals for these new $\boldsymbol{\Theta}$ samples and plot the distribution of the results as a histogram using plt.hist with Nbins=50 bins.
Step15: Extra challenge
Step16: Plot the weighted 2-D histogram of the fits and the weighted 1-D histogram of the goals needed to get to the finals. Note that meshgrid might be helpful here.
|
<ASSISTANT_TASK:>
Python Code:
# only necessary if you're running Python 2.7 or lower
from __future__ import print_function, division
from six.moves import range
# import matplotlib and define our alias
from matplotlib import pyplot as plt
# plot figures within the notebook rather than externally
%matplotlib inline
# numpy
import numpy as np
# scipy
import scipy
# load in JSON
import json
# load in our data
filename = 'worldcup_2014.json'
with open(filename, 'r') as f:
cup_data = json.load(f) # world cup JSON data
# check what's in our dictionary
cup_data
# check the relevant keys
cup_data.keys()
# check group-level data
cup_data['rounds']
data = dict()
# read out world cup 2014 data from the dictionary
for matchup in cup_data['rounds']:
for match in matchup['matches']:
team1, team2 = match['team1']['name'], match['team2']['name'] # team names
score1, score2 = match['score1'], match['score2'] # scores
score = score1 - score2 # effective score
try:
data[team1]['goals'] += score
data[team2]['goals'] -= score
data[team1]['games'] += 1
data[team2]['games'] += 1
except:
data[team1] = {'goals': score, 'games': 1}
data[team2] = {'goals': -score, 'games': 1}
if matchup['name'] == 'Match for third place':
data[team1]['games'] -= 1
data[team2]['games'] -= 1
# Check data.
data
# copy over our data into x and y
x = ...
y = ...
# plot our results
# remember to label your axes!
...
# Linear fit (takes input *vector* `theta` -> (a,b))
def linear(theta):
...
# Loss function
def loss(theta):
...
# testing out the linear fit and loss function
# space if you want to try your hand at solving the linear system explicitly
# Minimize the loss function
from scipy.optimize import minimize
results = minimize(...)
theta_best = ...
# Print out results
print(results)
print('Best-Fit:', theta_best)
# Plot comparison between data and results
...
Nmc, Nbins = 10000, 20
# Draw random samples around best-fit given errors.
thetas = ...
# Plot draws as 2-D histogram.
# Remember to label your axes!
...
# function to convert from theta to goals for a given round
# i.e., compute x from y=ax+b for y,a,b given
def goals_needed(theta, matchup): # we're avoiding "round" since it's special
...
# compute goals needed
final_preds = ...
# plot histogram of results
# Remember to label your axes!
Nbins = 50
...
# create example data
N_example = 10 # number of data points
x_example = np.arange(N_example) # x grid
y_example = np.arange(N_example) * 5 # y grid
print('Original data:')
print(x_example)
print(y_example)
# resample data
idxs = np.random.choice(N_example, size=N_example) # resample `N_example` indices
x_example_r = x_example[idxs] # select resampled x values
y_example_r = y_example[idxs] # select resampled y values
print('\nResampled data:')
print(idxs)
print(x_example_r)
print(y_example_r)
# define resampling function
def resample_data(x, y):
...
# redefine linear fit if necessary
def linear_resamp(theta):
...
# redefine loss function if necessary
def loss_resamp(theta):
...
Nmc, Nbins = 10000, 20
# resample data and re-fit line
thetas_resamp = ...
for i in range(Nmc):
x_resamp, y_resamp = resample_data(x, y) # resample data
results = minimize(...) # minimize resample data
... # assign value to thetas_resamp
# Plot draws as 2-D histogram.
# Remember to label your axes!
...
# compute goals needed
final_preds_resamp = ...
# plot histogram of results
# Remember to label your axes!
Nbins = 50
...
# define grids
a_grid = ...
b_grid = ...
# compute losses over grids
...
# compute weights
weights = ...
Nbins = 20
# Plot draws as 2-D histogram.
# Remember to label your axes!
...
# compute goals needed
final_preds_grid = ...
final_preds_weights = ...
# plot histogram of results
# Remember to label your axes!
Nbins = 50
...
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Connect girder client and set parameters
Step2: Let's inspect the ground truth codes file
Step3: Read and visualize mask
Step4: 2. Get contours from mask
Step5: Extract contours
Step6: Let's inspect the contours dataframe
Step7: 3. Get annotation documents from contours
Step8: As mentioned in the docs, this function wraps get_single_annotation_document_from_contours()
Step9: Let's get a list of annotation documents (each is a dictionary). For the purpose of this tutorial, we separate the documents by group and cap the number of elements per document.
Step10: Let's examine one of the documents.
Step11: Post the annotation to the correct item/slide in DSA
|
<ASSISTANT_TASK:>
Python Code:
import os
CWD = os.getcwd()
import girder_client
from pandas import read_csv
from imageio import imread
from histomicstk.annotations_and_masks.masks_to_annotations_handler import (
get_contours_from_mask,
get_single_annotation_document_from_contours,
get_annotation_documents_from_contours)
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 7, 7
# APIURL = 'http://demo.kitware.com/histomicstk/api/v1/'
# SAMPLE_SLIDE_ID = '5bbdee92e629140048d01b5d'
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SAMPLE_SLIDE_ID = '5d586d76bd4404c6b1f286ae'
# Connect to girder client
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(interactive=True)
# gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
# read GTCodes dataframe
GTCODE_PATH = os.path.join(
CWD, '..', '..', 'tests', 'test_files', 'sample_GTcodes.csv')
GTCodes_df = read_csv(GTCODE_PATH)
GTCodes_df.index = GTCodes_df.loc[:, 'group']
GTCodes_df.head()
# read mask
X_OFFSET = 59206
Y_OFFSET = 33505
MASKNAME = "TCGA-A2-A0YE-01Z-00-DX1.8A2E3094-5755-42BC-969D-7F0A2ECA0F39" + \
"_left-%d_top-%d_mag-BASE.png" % (X_OFFSET, Y_OFFSET)
MASKPATH = os.path.join(CWD, '..', '..', 'tests', 'test_files', 'annotations_and_masks', MASKNAME)
MASK = imread(MASKPATH)
plt.figure(figsize=(7,7))
plt.imshow(MASK)
plt.title(MASKNAME[:23])
plt.show()
print(get_contours_from_mask.__doc__)
# Let's extract all contours from a mask, including ROI boundary. We will
# be discarding any stromal contours that are not fully enclosed within a
# non-stromal contour since we already know that stroma is the background
# group. This is so things look uncluttered when posted to DSA.
groups_to_get = None
contours_df = get_contours_from_mask(
MASK=MASK, GTCodes_df=GTCodes_df, groups_to_get=groups_to_get,
get_roi_contour=True, roi_group='roi',
discard_nonenclosed_background=True,
background_group='mostly_stroma',
MIN_SIZE=30, MAX_SIZE=None, verbose=True,
monitorPrefix=MASKNAME[:12] + ": getting contours")
contours_df.head()
print(get_annotation_documents_from_contours.__doc__)
print(get_single_annotation_document_from_contours.__doc__)
# get list of annotation documents
annprops = {
'X_OFFSET': X_OFFSET,
'Y_OFFSET': Y_OFFSET,
'opacity': 0.2,
'lineWidth': 4.0,
}
annotation_docs = get_annotation_documents_from_contours(
contours_df.copy(), separate_docs_by_group=True, annots_per_doc=10,
docnamePrefix='demo', annprops=annprops,
verbose=True, monitorPrefix=MASKNAME[:12] + ": annotation docs")
ann_doc = annotation_docs[0].copy()
ann_doc['elements'] = ann_doc['elements'][:2]
for i in range(2):
ann_doc['elements'][i]['points'] = ann_doc['elements'][i]['points'][:5]
ann_doc
# deleting existing annotations in target slide (if any)
existing_annotations = gc.get('/annotation/item/' + SAMPLE_SLIDE_ID)
for ann in existing_annotations:
gc.delete('/annotation/%s' % ann['_id'])
# post the annotation documents you created
for annotation_doc in annotation_docs:
resp = gc.post(
"/annotation?itemId=" + SAMPLE_SLIDE_ID, json=annotation_doc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load image
Step2: Define Corner Parameters
Step3: Detect Corners
Step4: Mark Corners
Step5: View Image
|
<ASSISTANT_TASK:>
Python Code:
# Load image
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image_bgr = cv2.imread('images/plane_256x256.jpg')
image_gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
image_gray = np.float32(image_gray)
# Set corner detector parameters
block_size = 2
aperture = 29
free_parameter = 0.04
# Detect corners
detector_responses = cv2.cornerHarris(image_gray, block_size, aperture, free_parameter)
# Large corner markers
detector_responses = cv2.dilate(detector_responses, None)
# Only keep detector responses greater than threshold, mark as white
threshold = 0.02
image_bgr[detector_responses > threshold * detector_responses.max()] = [255,255,255]
# Convert to grayscale
image_gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
# Show image
plt.imshow(image_gray, cmap='gray'), plt.axis("off")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Base API
Step2: Advanced API
Step3: Note
Step4: By construction, p.mu is a Theano symbolic expression which depends on several inputs, in this case a and b. Accordingly, the actual variables (or hyper-parameters) that fully define p are the shared variables a (created explicitly) and sigma (created implicitly from the scalar 2.0). In particular, mu is not a hyper-parameter of p since it is itself defined from the variable a and the constant b.
Step5: Additionally, parameter expressions can be defined in terms of free Theano variables that are not (yet) tied to any value. These auxiliary inputs will need to be passed at evaluation. All required extra inputs are stored in p.observeds_.
Step6: Composing mixtures
Step7: Note that weights are automatically normalized such that they sum to 1.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import theano
import theano.tensor as T
from carl.distributions import Normal
p = Normal(mu=0.0, sigma=1.0)
reals = np.linspace(-5, 5, num=1000)
pdf = p.pdf(X=reals.reshape(-1, 1)) # X is a 2D array of shape n_samples x n_features
print(pdf[:10])
plt.plot(reals, pdf,label="pdf(x)")
plt.legend(loc="best")
plt.show()
reals = np.linspace(-5, 5, num=1000)
plt.plot(reals, p.nll(reals.reshape(-1, 1)), label="-log(pdf(x))")
plt.legend(loc="best")
plt.show()
reals = np.linspace(-5, 5, num=1000)
plt.plot(reals, p.cdf(reals.reshape(-1, 1)), label="cdf(x)")
plt.legend(loc="best")
plt.show()
reals = np.linspace(0, 1, num=1000)
plt.plot(reals, p.ppf(reals.reshape(-1, 1)), label="ppf(x)")
plt.legend(loc="best")
plt.show()
p.rvs(n_samples=10000)
a = theano.shared(1.0, name="a")
b = T.constant(0.5, name="b")
p = Normal(mu=a * b, sigma=2.0)
# Parameters are Theano symbolic expressions
print(type(p.mu))
print(type(p.sigma)) # sigma=2.0 was embedded into a shared variable
p.parameters_ # all input parameters (note that mu is not part of those!)
p.constants_ # all input constants
a = T.dmatrix(name="a") # free input to be specified at evaluation
b = theano.shared(-1.0, name="b")
c = theano.shared(1.0, name="c")
p = Normal(mu=a*b + c)
p.parameters_
p.constants_
p.observeds_
p.pdf(X=np.array([[0.0], [0.0]]),
a=np.array([[1.0], [2.0]])) # specify the auxiliary input `a` at evaluation
# Plot pdf(x, a)
import mpl_toolkits.mplot3d.axes3d as axes3d
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
Xs = np.arange(-5, 5, 0.25)
As = np.arange(-5, 5, 0.25)
Xs, As = np.meshgrid(Xs, As)
Ps = p.pdf(X=Xs.reshape(-1, 1),
a=As.reshape(-1, 1))
Ps = Ps.reshape(Xs.shape)
ax.plot_wireframe(Xs, As, Ps, rstride=4, cstride=4, alpha=0.5)
ax.set_xlabel("x")
ax.set_ylabel("a")
ax.set_zlabel("p(x, a)")
plt.show()
from carl.distributions import Mixture
components = [
Normal(mu=-2.0, sigma=0.75), # c0
Normal(mu=0.0, sigma=2.0), # c1
Normal(mu=1.0, sigma=0.5) # c2 (bump)
]
g = theano.shared(0.05, name="g")
p = Mixture(components=components, weights=[0.5 - g / 2., 0.5 - g / 2., g])
p.parameters_ # union of all component parameters + g
reals = np.linspace(-5, 5, num=1000)
plt.plot(reals, p.pdf(reals.reshape(-1, 1)), label="pdf(x)")
plt.legend()
plt.show()
reals = np.linspace(-5, 5, num=1000)
plt.plot(reals, p.cdf(reals.reshape(-1, 1)), label="cdf(x)")
plt.legend()
plt.show()
p.weights
p.compute_weights()
# Target distribution
p0 = Mixture(components=[Normal(mu=1.0, sigma=1.0), Normal(mu=4.0, sigma=1.0)],
weights=[0.7, 0.3])
# Fit components[0].mu and mixture weights, freeze all others
w = theano.shared(0.5, name="w")
p1 = Mixture(components=[Normal(mu=0.0, sigma=T.constant(1.0)),
Normal(mu=T.constant(4.0), sigma=T.constant(1.0))],
weights=[w, 1.0 - w])
p1.parameters_
X = p0.rvs(10000)
p1.fit(X, bounds=[{"param": w, "bounds": (0.5, 1.0)}], use_gradient=False)
p1.components[0].mu.eval()
p1.compute_weights()
reals = np.linspace(-5, 5, num=1000)
plt.hist(X.ravel(), bins=100, normed=1, alpha=0.5, label="x~p0")
plt.plot(reals, p1.pdf(reals.reshape(-1, 1)), label="p1(x)")
plt.legend()
plt.show()
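# Extra check (sketch): mean negative log-likelihood of the fitted mixture on X.
print(p1.nll(X).mean())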
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Watch the video of the type of activity that was recorded
Step2: The data is extracted from the .zip file on the UCI website
Step3: The dataset contains readings from mobile phone accelerometer and gyroscope as subjects perform a range of actions. The raw data is then manipulated to form 560 features from the initial 6 features (accelerometer 3-axis and gyroscope 3-axis).
Step4: A training set of 7352 readings (X_train) labeled as one of six activities (y_train) is used to train a random forest classifier.
Step5: A quick sanity check where the training data is classified by the model - should be 100% accurate or very close
Step6: A second unseen dataset of 2947 datapoints is used to test the classifier.
Step7: Predictions for the unseen dataset are created.
Step8: The correct labels are loaded and prediction accuracy is tested
Step9: The features with an importance >.015 are -
Step10: The original paper used an SVC from R to classify the data. Here is a comparison in Python using the same type of classifier.
Step11: The confusion matrix from the SVC in sklearn (Python) was very close to the result obtained by the original authors using R. (see paper https
Step12: An alternative model using a Random Forest Classifier
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd  # needed below to read the dataset files out of the zip
from sklearn.ensemble import RandomForestClassifier
from IPython.display import YouTubeVideo, HTML
YouTubeVideo("XOEN9W05_4A")
#The Donald Bren School of Information and Computer Sciences - University of California, Irvine
info_file = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.names'
data_zip = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip'
!curl $info_file
import requests, zipfile, io
r = requests.get(data_zip)
z = zipfile.ZipFile(io.BytesIO(r.content))
X_train_df = pd.read_csv(z.open('UCI HAR Dataset/train/X_train.txt'), skipinitialspace=True, sep=' ', header=None)
X_train = X_train_df.values
y_train_df = pd.read_csv(z.open('UCI HAR Dataset/train/y_train.txt'), header=None)
y_train = y_train_df[0].values
X_train.shape
y_train.shape
clf = RandomForestClassifier(n_jobs=2,n_estimators=150)
clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score, roc_auc_score
y_predict = clf.predict(X_train)
accuracy = accuracy_score(y_train, y_predict)
print('---'*20)
print('\n')
print("A Random Forest Classifier with 150 estimators was {}% accurate on the test data.".format(round(accuracy*100,2)))
print('\n')
print('---'*20)
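# Added sketch: training accuracy is optimistic, so a k-fold
# cross-validation estimate on the training data is a fairer check.
# cv=3 is an arbitrary choice and the run takes a while with 150 trees;
# on older sklearn versions this import lives in sklearn.cross_validation.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(clf, X_train, y_train, cv=3, n_jobs=2)
print("3-fold CV accuracy: {:.2f}% (+/- {:.2f}%)".format(
    cv_scores.mean() * 100, cv_scores.std() * 100))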
X_test_df = pd.read_csv(z.open('UCI HAR Dataset/test/X_test.txt'), skipinitialspace=True, sep=' ', header=None)
X_test = X_test_df.values
y_test_df = pd.read_csv(z.open('UCI HAR Dataset/test/y_test.txt'), header=None)
y_test = y_test_df[0].values
X_test.shape
y_test.shape
y_predict_test = clf.predict(X_test)
test_score = accuracy_score(y_test, y_predict_test)
print('---'*20)
print('\n')
print('A Random Forest Classifier with 150 estimators, trained on the training data, was {}% accurate at labelling the unseen test data.'.format(round(test_score*100,2)))
print('\n')
print('---'*20)
import matplotlib as mpl
order = sorted(clf.feature_importances_)  # sorted copy, lowest to highest
mpl.pylab.plot(order)
importance = pd.DataFrame(clf.feature_importances_)
feature = pd.read_csv(z.open('UCI HAR Dataset/features.txt'),sep=' ',header=None)
feature['importance'] = clf.feature_importances_
feature.columns = [0, 'transform', 'importance']
for feat in feature.transform[feature.importance>.015].values:
print(feat,feature.importance[feature.transform==feat].values)
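# Added sketch of feature selection from the importance ranking:
# keep only the top-k columns and retrain; k=50 is arbitrary and only
# for illustration.
import numpy as np
top_k = np.argsort(clf.feature_importances_)[::-1][:50]
clf_small = RandomForestClassifier(n_jobs=2, n_estimators=150)
clf_small.fit(X_train[:, top_k], y_train)
print("Accuracy with top-50 features: {:.2f}%".format(
    accuracy_score(y_test, clf_small.predict(X_test[:, top_k])) * 100))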
activity_labels = pd.read_csv(z.open('UCI HAR Dataset/activity_labels.txt'),index_col=0,sep=' ',header = None)
labels_activity = [x.lower() for x in activity_labels[1].tolist()]
labels_activity
from sklearn import metrics
# testing score
score_test = metrics.f1_score(y_test, y_predict_test, labels=sorted(set(y_test)), average=None)  # labels= selects and orders the classes
# training score
score_train = metrics.f1_score(y_train, y_predict, labels=sorted(set(y_train)), average=None)
print('\n','metrics.f1_score for each category in training set','\n','----'*10)
for act,dec in zip(labels_activity,score_train):
print('{} - {}%'.format(act,round(dec*100,2)))
print('\n','metrics.f1_score for each category in test set','\n','----'*10)
for act,dec in zip(labels_activity,score_test):
print('{} - {}%'.format(act,round(dec*100,2)))
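# Added: sklearn's classification_report bundles per-class precision,
# recall and f1 into one table, a compact alternative to the loops above.
print(metrics.classification_report(y_test, y_predict_test,
                                    target_names=labels_activity))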
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split  # modern path; sklearn.cross_validation was removed (import unused below)
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (14.0, 10.0)
# Run classifier
classifier = svm.SVC(kernel='linear')
y_predictSVC = classifier.fit(X_train, y_train).predict(X_test)
y_values = [0,1,2,3,4,5]
activity = labels_activity
# Compute confusion matrix
conf_arr = confusion_matrix(y_test, y_predictSVC)
# Row-normalize the confusion matrix (fraction of each true class)
norm_conf = []
for i in conf_arr:
    a = sum(i, 0)
    norm_conf.append([float(j) / float(a) for j in i])
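# Added: the diagonal of the row-normalized matrix is the per-class
# recall, a quick numeric companion to the plot below.
for i, act in enumerate(activity):
    print("{}: {:.1f}% of true instances labelled correctly".format(act, norm_conf[i][i] * 100))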
plt.matshow(conf_arr, cmap=plt.cm.winter)
plt.title('Confusion matrix for SVM', y=1.5)
plt.colorbar()  # the colormap is set on matshow above, not on the colorbar
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.yticks(y_values, activity)
plt.xticks(y_values, activity, rotation=90)
plt.grid(False)
width = len(conf_arr)
height = len(conf_arr[0])
for x in range(width):
for y in range(height):
plt.annotate(str(conf_arr[x][y]), xy=(y, x),
horizontalalignment='center',
verticalalignment='center')
#plt.show()
accuracy = accuracy_score(y_test, y_predictSVC)
print('---'*20)
print('\n')
print("A SVC was {}% accurate on the test data.".format(round(accuracy*100,2)))
print('\n')
print('---'*20)
from sklearn.ensemble import RandomForestClassifier
#from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (14.0, 10.0)
# Run classifier
#classifier = svm.SVC(kernel='linear')
#y_predictSVC = classifier.fit(X_train, y_train).predict(X_test)
y_values = [0,1,2,3,4,5]
activity = labels_activity
# Compute confusion matrix
conf_arr = confusion_matrix(y_test, y_predict_test)
# Row-normalize the confusion matrix (kept for reference; the plot
# below annotates raw counts)
norm_conf = []
for i in conf_arr:
    a = sum(i, 0)
    norm_conf.append([float(j) / float(a) for j in i])
plt.matshow(conf_arr, cmap=plt.cm.winter)
plt.title('Confusion matrix for Random Forest Classifier', y=1.5)
plt.colorbar()  # the colormap is set on matshow above, not on the colorbar
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.yticks(y_values, activity)
plt.xticks(y_values, activity, rotation=90)
plt.grid(False)
width = len(conf_arr)
height = len(conf_arr[0])
for x in range(width):
for y in range(height):
plt.annotate(str(conf_arr[x][y]), xy=(y, x),
horizontalalignment='center',
verticalalignment='center')
#plt.show()
accuracy = accuracy_score(y_test, y_predict_test)
print('---'*20)
print('\n')
print("A Random Forest Classifier with 150 estimators was {}% accurate on the test data.".format(round(accuracy*100,2)))
print('\n')
print('---'*20)
<END_TASK>
|