| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, lengths 67–124k, ⌀ = may be null) | cleaned_prompt (string, lengths 168–30.3k, ⌀ = may be null) |
|---|---|---|
1,900
|
<ASSISTANT_TASK:>
Python Code:
# Read the text corpus: concatenate every file in the codex/ folder into one string
import os
import sys
corpora = ""
for fname in os.listdir("codex"):
    if sys.version_info >= (3, 0):
        with open("codex/" + fname, encoding='cp1251') as fin:
            text = fin.read()  # If you are using your own corpora, make sure it's read correctly
            corpora += text
    else:
        with open("codex/" + fname) as fin:
            text = fin.read().decode('cp1251')  # If you are using your own corpora, make sure it's read correctly
            corpora += text
print(corpora[1000:1100])
#all unique characters go here
tokens = <all unique characters>
tokens = list(tokens)
#checking the symbol count. Validated on Python 2.7.11 Ubuntu x64.
#May be __a bit__ different on other platforms
#If you are sure that you have selected all unicode symbols - feel free to comment-out this assert
# Also if you are using your own corpora, remove it and just make sure your tokens are sensible
assert len(tokens) == 102
token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>
id_to_token = < dictionary of symbol identifier -> symbol itself>
#Cast everything from symbols into identifiers
corpora_ids = <1D numpy array of symbol identifiers, where i-th number is an identifier of i-th symbol in corpora>
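# A minimal sketch of one way the placeholders above could be filled in (illustrative,
# not the reference solution; assumes numpy is imported as np and `corpora` is the string read above):
tokens = sorted(set(corpora))
token_to_id = {t: i for i, t in enumerate(tokens)}
id_to_token = {i: t for i, t in enumerate(tokens)}
corpora_ids = np.array([token_to_id[c] for c in corpora], dtype='int32')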
def sample_random_batches(source, n_batches=10, seq_len=20):
    """
    This function should take random subsequences from the tokenized text.
    Parameters:
        source - basically, what you have just computed in the corpora_ids variable
        n_batches - how many subsequences are to be sampled
        seq_len - length of each such subsequence
    You have to return:
        X - a matrix of int32 with shape [n_batches, seq_len].
            Each row of this matrix must be a subsequence of source
            starting from a random index of corpora (from 0 to N-seq_len-2).
        Y - a vector where the i-th number is the one going RIGHT AFTER the i-th row of X in source.
    Thus sample_random_batches(corpora_ids, 25, 10) must return
        X, X.shape == (25, 10), X.dtype == 'int32',
            where each row is a 10-character-id subsequence from corpora_ids, and
        Y, Y.shape == (25,), Y.dtype == 'int32',
            where each element is the 11th element following the corresponding 10-symbol sequence from X.
    PLEASE MAKE SURE that the Y symbols indeed come immediately after the X sequences,
    since this is hard to debug later (the NN will train, but it will generate something useless).
    The simplest approach is to first sample a matrix [n_batches, seq_len+1]
    and then split it into X (first seq_len columns) and Y (last column).
    There will be some tests for this function, but they won't cover everything.
    """
    my_function()
    return X_batch, y_batch
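# A possible implementation of the function specified above (illustrative, not the course's
# reference solution). It follows the hint: sample [n_batches, seq_len+1] windows, then split.
def sample_random_batches(source, n_batches=10, seq_len=20):
    starts = np.random.randint(0, len(source) - seq_len - 1, size=n_batches)
    rows = np.array([source[s:s + seq_len + 1] for s in starts]).astype('int32')
    X_batch, y_batch = rows[:, :seq_len], rows[:, seq_len]
    return X_batch, y_batch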
#Training sequence length (truncation depth in BPTT)
seq_length = <set your seq_length; 10 is a reasonable starting point>
# Better start small (e.g. 5) and increase after the net has learned basic syllables. 10 is by far not the limit.
#max gradient between recurrent layer applications (do not forget to use it)
grad_clip = ?
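# Example values only (assumptions to make the notebook runnable; tune for your own corpus):
seq_length = 10   # truncation depth for BPTT
grad_clip = 100   # passed to the recurrent layer's grad_clipping argument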
input_sequence = T.matrix('input sequence','int32')
target_values = T.ivector('target y')
from lasagne.layers import InputLayer,DenseLayer,EmbeddingLayer
from lasagne.layers import RecurrentLayer,LSTMLayer,GRULayer,CustomRecurrentLayer
l_in = lasagne.layers.InputLayer(shape=(None, None),input_var=input_sequence)
<Your neural network>
l_out = <last dense layer, returning probabilities for all len(tokens) options for y>
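# A minimal sketch of one possible architecture (an assumption, not the reference solution):
# embedding -> LSTM with gradient clipping -> dense softmax over all tokens.
l_emb = lasagne.layers.EmbeddingLayer(l_in, input_size=len(tokens), output_size=40)
l_rnn = lasagne.layers.LSTMLayer(l_emb, num_units=256,
                                 grad_clipping=grad_clip, only_return_final=True)
l_out = lasagne.layers.DenseLayer(l_rnn, num_units=len(tokens),
                                  nonlinearity=lasagne.nonlinearities.softmax)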
# Model weights
weights = lasagne.layers.get_all_params(l_out,trainable=True)
print weights
network_output = <NN output via lasagne>
#If you use dropout do not forget to create deterministic version for evaluation
loss = <Lost function - a simple cat crossentropy will do>
updates = <your favorite optimizer>
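# One possible way to fill in the three placeholders above (illustrative only):
network_output = lasagne.layers.get_output(l_out)
loss = lasagne.objectives.categorical_crossentropy(network_output, target_values).mean()
updates = lasagne.updates.adam(loss, weights)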
#training
train = theano.function([input_sequence, target_values], loss, updates=updates, allow_input_downcast=True)
#computing loss without training
compute_cost = theano.function([input_sequence, target_values], loss, allow_input_downcast=True)
# next character probabilities
probs = theano.function([input_sequence],network_output,allow_input_downcast=True)
def max_sample_fun(probs):
    """I generate the most likely symbol."""
    return np.argmax(probs)

def proportional_sample_fun(probs):
    """
    I generate the next int32 character id randomly, proportional to the probabilities.
    probs - array of probabilities for every token.
    You have to output a single integer - the next token id - based on probs:
    return the chosen token id.
    """
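# A possible implementation of proportional sampling (illustrative; assumes probs sums to 1):
def proportional_sample_fun(probs):
    """I generate the next token id randomly, proportional to the probabilities."""
    return np.random.choice(len(probs), p=probs)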
def generate_sample(sample_fun,seed_phrase=None,N=200):
'''
The function generates text given a phrase of length at least SEQ_LENGTH.
parameters:
sample_fun - max_ or proportional_sample_fun or whatever else you implemented
The phrase is set using the variable seed_phrase
The optional input "N" is used to set the number of characters of text to predict.
'''
if seed_phrase is None:
start = np.random.randint(0,len(corpora)-seq_length)
seed_phrase = corpora[start:start+seq_length]
print "Using random seed:",seed_phrase
while len(seed_phrase) < seq_length:
seed_phrase = " "+seed_phrase
if len(seed_phrase) > seq_length:
seed_phrase = seed_phrase[len(seed_phrase)-seq_length:]
assert type(seed_phrase) is unicode
sample_ix = []
x = map(lambda c: token_to_id.get(c,0), seed_phrase)
x = np.array([x])
for i in range(N):
# Pick the character that got assigned the highest probability
ix = sample_fun(probs(x).ravel())
# Alternatively, to sample from the distribution instead:
# ix = np.random.choice(np.arange(vocab_size), p=probs(x).ravel())
sample_ix.append(ix)
x[:,0:seq_length-1] = x[:,1:]
x[:,seq_length-1] = 0
x[0,seq_length-1] = ix
random_snippet = seed_phrase + ''.join(id_to_token[ix] for ix in sample_ix)
print("----\n %s \n----" % random_snippet)
print("Training ...")
#total N iterations
n_epochs=100
# how many minibatches are there in the epoch
batches_per_epoch = 1000
#how many training sequences are processed in a single function call
batch_size=100
for epoch in xrange(n_epochs):
print "Text generated proportionally to probabilities"
generate_sample(proportional_sample_fun,None)
print "Text generated by picking most likely letters"
generate_sample(max_sample_fun,None)
avg_cost = 0;
for _ in range(batches_per_epoch):
x,y = sample_random_batches(corpora_ids,batch_size,seq_length)
avg_cost += train(x, y)  # y is already a 1-D vector of next-character ids
print("Epoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
seed = u"Каждый человек должен" #if you are using non-russian text corpora, use seed in it's language instead
sampling_fun = proportional_sample_fun
result_length = 300
generate_sample(sampling_fun,seed,result_length)
seed = u"В случае неповиновения"
sampling_fun = proportional_sample_fun
result_length = 300
generate_sample(sampling_fun,seed,result_length)
And so on, as you wish.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Agenda
Step2: Constants
Step3: Input variables
Step4: Build the neural network
Step5: Compiling it
Step8: Law generation
Step9: Model training
Step10: A chance to speed up training and get bonus score
|
1,901
|
<ASSISTANT_TASK:>
Python Code:
__author__ = 'Adam Foster and Nick Dingwall'
from centering_and_scaling import *
%matplotlib inline
# A dataset:
data = np.random.multivariate_normal(
mean=[4, 0], cov=[[5, 2], [2, 3]], size=250)
X, y = data[:, 0], data[:, 1]
# Subtract the mean from the features:
empirical_mean = X.mean()
Z = X - empirical_mean
# Before and after comparison:
compare_normalizations(
X, Z, y, "Before mean-centering", "After mean-centering")
# The case of `x`:
x_star = 10
lr = LinearRegression()
lr.fit(X.reshape(-1,1), y)
x_prediction = lr.predict(x_star)[0]
print("The prediction at x = {} is {:.4}".format(
x_star, x_prediction))
# The case of `z`:
z_star = x_star - empirical_mean
lr.fit(Z.reshape(-1,1), y)
z_prediction = lr.predict(z_star)[0]
print("The prediction at z = {} - {:.4} = {:.4} is {:.4}".format(
x_star, empirical_mean, z_star, z_prediction))
Z = X / X.std()
compare_normalizations(
X, Z, y,
title1="Before rescaling",
title2="After rescaling")
standard_deviation = X.std()
compare_transformed_normalizations(
X, y,
transform=(lambda x : x/standard_deviation),
model=LinearRegression(),
title1="Trained using original data",
title2="Trained using rescaled data")
from sklearn.linear_model import Lasso
compare_transformed_normalizations(
X, y,
transform=lambda x : x/3,
model=Lasso(),
title1="Trained using original data",
title2="Trained using scaled data")
compare_transformed_normalizations(
X, y,
transform=(lambda x : 3*x),
model=Lasso(),
title1="Trained using original data",
title2="Trained using scaled data")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TL;DR
Step2: Notice that we have a different intercept, but the same slope. The predictions from these two models will be identical. For instance, the prediction at $x = 10$ is around $2.5$ on the upper graph. The corresponding $z$ is $z = 10 - \bar{x} \approx 6$ with precisely the same prediction on the lower graph. The following code works through this in detail
Step3: Ignoring the intercept
Step4: These two models certainly look different. It's important to remember, though, that the inputs to the two models are different. For the top model, we input $x = 10$ and get a prediction of about $2.5$. For the bottom model we need to input $z = 10/\sigma \approx 4.5$, which then gives a prediction somewhere between $2$ and $3$.
Step5: It should be completely unsurprising that the lower scatterplot now matches the upper scatterplot. The more notable thing is that the regression line, which was fitted in $z$-space and then transformed, is the same as the line fitted in $x$-space to start with. Thus, we see clearly that scaling doesn't affect the unregularized model.
Step6: We're seeing exactly what we expected. By making everything three times as small, the model penalized much more strongly, shrinking the regression coefficient exactly to zero.
|
1,902
|
<ASSISTANT_TASK:>
Python Code:
# Tensorflow
import tensorflow as tf
print('Tested with TensorFlow 1.2.0')
print('Your TensorFlow version:', tf.__version__)
# Feeding function for enqueue data
from tensorflow.python.estimator.inputs.queues import feeding_functions as ff
# Rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# Model builder
from tensorflow.python.estimator import model_fn as model_fn_lib
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Helpers for data processing
import pandas as pd
import numpy as np
import argparse
import random
# data from: http://ai.stanford.edu/~amaas/data/sentiment/
TRAIN_INPUT = 'data/train.csv'
TEST_INPUT = 'data/test.csv'
# data manually generated
MY_TEST_INPUT = 'data/mytest.csv'
# wordtovec
# https://nlp.stanford.edu/projects/glove/
# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.
word_list = np.load('word_list.npy')
word_list = word_list.tolist() # originally loaded as numpy array
word_list = [word.decode('UTF-8') for word in word_list] # encode words as UTF-8
print('Loaded the word list, length:', len(word_list))
word_vector = np.load('word_vector.npy')
print ('Loaded the word vector, shape:', word_vector.shape)
baseball_index = word_list.index('baseball')
print('Example: baseball')
print(word_vector[baseball_index])
max_seq_length = 10 # maximum length of sentence
num_dims = 50 # dimensions for each word vector
first_sentence = np.zeros((max_seq_length), dtype='int32')
first_sentence[0] = word_list.index("i")
first_sentence[1] = word_list.index("thought")
first_sentence[2] = word_list.index("the")
first_sentence[3] = word_list.index("movie")
first_sentence[4] = word_list.index("was")
first_sentence[5] = word_list.index("incredible")
first_sentence[6] = word_list.index("and")
first_sentence[7] = word_list.index("inspiring")
# first_sentence[8] = 0
# first_sentence[9] = 0
print(first_sentence.shape)
print(first_sentence) # shows the row index for each word
with tf.Session() as sess:
print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape)
from os import listdir
from os.path import isfile, join
positiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]
negativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]
numWords = []
for pf in positiveFiles:
with open(pf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Positive files finished')
for nf in negativeFiles:
with open(nf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Negative files finished')
numFiles = len(numWords)
print('The total number of files is', numFiles)
print('The total number of words in the files is', sum(numWords))
print('The average number of words in the files is', sum(numWords)/len(numWords))
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(numWords, 50)
plt.xlabel('Sequence Length')
plt.ylabel('Frequency')
plt.axis([0, 1200, 0, 8000])
plt.show()
max_seq_len = 250
ids_matrix = np.load('ids_matrix.npy').tolist()
# Parameters for training
STEPS = 15000
BATCH_SIZE = 32
# Parameters for data processing
REVIEW_KEY = 'review'
SEQUENCE_LENGTH_KEY = 'sequence_length'
POSITIVE_REVIEWS = 12500
# copying sequences
data_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]
# generating labels
data_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]
# also creating a length column, this will be used by the Dynamic RNN
# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
data_length = [max_seq_len for i in range(len(ids_matrix))]
data = list(zip(data_sequences, data_labels, data_length))
random.shuffle(data) # shuffle
data = np.asarray(data)
# separating train and test data
limit = int(len(data) * 0.9)
train_data = data[:limit]
test_data = data[limit:]
LABEL_INDEX = 1
def _number_of_pos_labels(df):
pos_labels = 0
for value in df:
if value[LABEL_INDEX] == [1, 0]:
pos_labels += 1
return pos_labels
pos_labels_train = _number_of_pos_labels(train_data)
total_labels_train = len(train_data)
pos_labels_test = _number_of_pos_labels(test_data)
total_labels_test = len(test_data)
print('Total number of positive labels:', pos_labels_train + pos_labels_test)
print('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)
print('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test)
def get_input_fn(df, batch_size, num_epochs=1, shuffle=True):
def input_fn():
sequences = np.asarray([v for v in df[:,0]], dtype=np.int32)
labels = np.asarray([v for v in df[:,1]], dtype=np.int32)
length = np.asarray(df[:,2], dtype=np.int32)
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
dataset = (
tf.contrib.data.Dataset.from_tensor_slices((sequences, labels, length)) # reading data from memory
.repeat(num_epochs) # repeat dataset the number of epochs
.batch(batch_size)
)
# for our "manual" test we don't want to shuffle the data
if shuffle:
dataset = dataset.shuffle(buffer_size=100000)
# create iterator
review, label, length = dataset.make_one_shot_iterator().get_next()
features = {
REVIEW_KEY: review,
SEQUENCE_LENGTH_KEY: length,
}
return features, label
return input_fn
features, label = get_input_fn(test_data, 2, shuffle=False)()
with tf.Session() as sess:
items = sess.run(features)
print(items[REVIEW_KEY])
print(sess.run(label))
train_input_fn = get_input_fn(train_data, BATCH_SIZE, None)
test_input_fn = get_input_fn(test_data, BATCH_SIZE)
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01,
embed_dim=128):
def model_fn(features, labels, mode):
review = features[REVIEW_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)
# Creating embedding
data = tf.Variable(tf.zeros([BATCH_SIZE, max_seq_len, 50]),dtype=tf.float32)
data = tf.nn.embedding_lookup(word_vector, review)
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs, sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
predictions_softmax = tf.nn.softmax(predictions)
loss = None
train_op = None
eval_op = None
preds_op = {
'prediction': predictions_softmax,
'label': labels
}
if mode == tf.estimator.ModeKeys.EVAL:
eval_op = {
"accuracy": tf.metrics.accuracy(
tf.argmax(input=predictions_softmax, axis=1),
tf.argmax(input=labels, axis=1))
}
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.softmax_cross_entropy(labels, predictions)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer=optimizer,
learning_rate=learning_rate)
return model_fn_lib.EstimatorSpec(mode,
predictions=predictions_softmax,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_op)
return model_fn
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=2, # since are just 2 classes
dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001,
embed_dim=512)
# create experiment
def generate_experiment_fn():
    """Create an experiment function given hyperparameters.

    Returns:
        A function (output_dir) -> Experiment, where output_dir is a string
        representing the location of summaries, checkpoints, and exports.
        This function is used by learn_runner to create an Experiment which
        executes model code provided in the form of an Estimator and
        input functions.

    All listed arguments in the outer function are used to create an
    Estimator and input functions (training, evaluation, serving).
    Unlisted args are passed through to Experiment.
    """
def _experiment_fn(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=test_input_fn,
train_steps=STEPS
)
return _experiment_fn
# run experiment
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2'))
def string_to_array(s, separator=' '):
return s.split(separator)
def generate_data_row(sentence, label, max_length):
sequence = np.zeros((max_length), dtype='int32')
for i, word in enumerate(string_to_array(sentence)):
sequence[i] = word_list.index(word)
return sequence, label, max_length
def generate_data(sentences, labels, max_length):
data = []
for s, l in zip(sentences, labels):
data.append(generate_data_row(s, l, max_length))
return np.asarray(data)
sentences = ['i thought the movie was incredible and inspiring',
'this is a great movie',
'this is a good movie but isnt the best',
'it was fine i guess',
'it was definitely bad',
'its not that bad',
'its not that bad i think its a good movie',
'its not bad i think its a good movie']
labels = [[1, 0],
[1, 0],
[1, 0],
[0, 1],
[0, 1],
[1, 0],
[1, 0],
[1, 0]] # [1, 0]: positive, [0, 1]: negative
my_test_data = generate_data(sentences, labels, 10)
estimator = tf.estimator.Estimator(model_fn=model_fn,
config=tf.contrib.learn.RunConfig(model_dir='tensorboard/batch_32'))
preds = estimator.predict(input_fn=get_input_fn(my_test_data, 1, 1, shuffle=False))
print()
for p, s in zip(preds, sentences):
print('sentence:', s)
print('good review:', p[0], 'bad review:', p[1])
print('-' * 10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading Data
Step2: We can search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix.
Step3: Now that we have our vectors, our first step is taking an input sentence and constructing its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use TensorFlow's embedding lookup function. This function takes in two arguments: one for the embedding matrix (the word_vector matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set; this is basically just the row index of each of the words. Let's look at a quick example to make this concrete.
Step4: TODO### Insert image
Step5: Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
Step6: We can also use the Matplot library to visualize this data in a histogram format.
Step7: From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.
Step8: Data
Step9: Parameters
Step10: Separating train and test data
Step11: Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.
Step12: Verifying if the train and test data have enough positive and negative examples
Step13: Input functions
Step14: Creating the Estimator model
Step16: Create and Run Experiment
Step17: Making Predictions
|
1,903
|
<ASSISTANT_TASK:>
Python Code:
with open('../csv_files/metro_edges_no_duplicated_edges_no_cycles_speed_networkx.csv') as f:
f.readline()
# Source,Target,Weight,edge_name,edge_color,travel_seconds,longitude_Source,latitude_Source,longitude_Target,latitude_Target,distance_meters,speed_ms
g = nx.parse_edgelist(f, delimiter=',', nodetype=int, data=(('Weight', float), ('edge_name', str), ('edge_color', str), ('travel_seconds', float), ('longitude_Source', float), ('latitude_Source', float), ('longitude_Target', float), ('latitude_Target', float), ('distance_meters', float), ('speed_ms', float) ), create_using = nx.MultiGraph())
with open('../csv_files/metro_gephi_nodes_coordinates.csv') as f:
reader = csv.DictReader(f)
node_latitudes = {}
node_longitudes = {}
node_names = {}
for row in reader:
node_latitudes[ int(row['Id']) ] = float(row['latitude'])
node_longitudes[ int(row['Id']) ] = float(row['longitude'])
node_names[ int(row['Id']) ] = row['Label']
nx.set_node_attributes(g, name = 'latitude', values = node_latitudes)
nx.set_node_attributes(g, name = 'longitude', values = node_longitudes)
nx.set_node_attributes(g, name = 'name', values = node_names)
def top_n_stations_by_attribute(graph, attr_name, n, ascending=False):
    node_records = pd.DataFrame.from_records(map(lambda x: x[1], list(graph.nodes(data=True))))
    return (node_records[['name', attr_name]]
            .sort_values(attr_name, ascending=ascending)[:(n + 1)]
            .reset_index(drop=True)
            .shift()[1:])
nx.set_node_attributes(g, name = 'degree', values = dict(g.degree))
stations_and_their_neighbours = top_n_stations_by_attribute(g, 'degree', len(g.edges()))
stations_and_their_neighbours[:35]
sns.plt.figure(figsize=(15, 10))
sns.distplot(stations_and_their_neighbours['degree'], kde=False, rug=False, bins = np.arange(8)-0.5, hist_kws=dict(edgecolor='k', linewidth=2))
sns.plt.show()
stations_and_their_neighbours.groupby('degree').count().rename(columns={'name': 'count'})
nx.set_node_attributes(g, name = 'closeness_centrality', values = nx.closeness_centrality(g, distance = 'travel_seconds'))
top_n_stations_by_attribute(g, 'closeness_centrality', 20)
nx.set_node_attributes(g, name = 'betweenness_centrality', values = nx.betweenness_centrality(g, normalized = True, weight = 'Weight'))
top_n_stations_by_attribute(g, 'betweenness_centrality', 20)
dict_nodes = g.node_dict_factory(g.nodes)
articulation_stations = list(nx.articulation_points(nx.Graph(g)))
print('%i articulation stations out of %i stations (%0.3f %%) \n' % (len(articulation_stations), len(dict_nodes.keys()), (len(articulation_stations) * 100)/len(dict_nodes.keys()) ) )
aux_list = []
for i in articulation_stations:
aux_list.append(dict_nodes[i]['name'])
aux_list.sort()
aux_list
art_points_dict = {}
for i in articulation_stations:
g_copy = g.copy()
g_copy.remove_node(i)
art_points_dict[dict_nodes[i]['name']] = {'n_ccs': nx.number_connected_components(g_copy)}
# print(dict_nodes[i]['name'])
# print('')
# print('Connected components if that node (station) is removed: %i' % art_points_dict[dict_nodes[i]['name']]['n_ccs'])
cc_i = 1
n_nodes_ccs = []
for cc in nx.connected_components(g_copy):
art_points_dict[dict_nodes[i]['name']]['nodes_cc_' + str(cc_i)] = len(cc)
# print('Nodes (stations) in connected component %i: %i' % (cc_i, art_points_dict[dict_nodes[i]['name']]['nodes_cc_' + str(cc_i)]))
n_nodes_ccs.append(art_points_dict[dict_nodes[i]['name']]['nodes_cc_' + str(cc_i)])
cc_i += 1
n_nodes_ccs.sort(reverse = True)
art_points_dict[dict_nodes[i]['name']]['n_isolated_nodes'] = sum(n_nodes_ccs[1:])
# print('Number of isolated nodes (stations): %i' % art_points_dict[dict_nodes[i]['name']]['n_isolated_nodes'])
# print('-------------------------------------------------------------------------------------------')
del g_copy
gc.collect()
art_points_info_df = pd.DataFrame.from_dict(art_points_dict, orient='index')[['n_ccs', 'n_isolated_nodes', 'nodes_cc_1', 'nodes_cc_2', 'nodes_cc_3']]
art_points_info_df.sort_values('n_isolated_nodes', ascending = False)[:20]
nx.set_node_attributes(g, name = 'eigenvector_centrality', values = nx.eigenvector_centrality_numpy(g, weight = 'Weight', max_iter = 1000))
top_n_stations_by_attribute(g, 'eigenvector_centrality', 20)
aux_edges_dict = {}
i = 0
for edge in list(g.edges(data = True)):
edge[2]['Source'] = node_names[edge[0]]
edge[2]['Target'] = node_names[edge[1]]
aux_edges_dict[i] = edge[2]
i += 1
edges_pd = pd.DataFrame.from_dict(aux_edges_dict, orient='index')[['Source', 'Target', 'edge_name', 'distance_meters', 'travel_seconds', 'speed_ms', 'Weight', 'edge_color', 'longitude_Source', 'latitude_Source', 'longitude_Target', 'latitude_Target']]
edges_pd.sort_values('distance_meters', ascending = False)[:20]
edges_pd.sort_values('distance_meters')[:20]
edges_dist_grouped = edges_pd[['edge_name', 'distance_meters']].groupby(by = 'edge_name').describe().iloc[[0, 4, 5, 6, 7, 8, 9, 10, 11, 1, 2, 3, 12],:]
edges_dist_grouped['distance_meters'].sort_values('mean', ascending = False)
metro_colors = ['#2DBEF0', '#ED1C24', '#FFD000', '#B65518', '#8FD400', '#98989B', '#EE7518', '#EC82B1', '#A60084', '#005AA9', '#009B3A', '#A49800', '#FFFFFF']
metro_order = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', 'R']
plt.figure(figsize=(15, 10))
sns.boxplot(data=edges_pd, x = 'edge_name', y = 'distance_meters', order = metro_order, palette = metro_colors).set_title('Distance by Metro Line (geodesic, meters)')
plt.show()
edges_dist_grouped = edges_dist_grouped.assign(dist_per_edge = edges_dist_grouped['distance_meters']['mean'] / (edges_dist_grouped['distance_meters']['count'] - 1) )
edges_dist_grouped = edges_dist_grouped.assign(dist_per_edge_std = edges_dist_grouped['distance_meters']['std'] / (edges_dist_grouped['distance_meters']['count'] - 1) )
edges_dist_grouped.sort_values('dist_per_edge', ascending = False).iloc[1:,:] # I removed the first line because it was useless (R line, only two stations)
edges_time_grouped = edges_pd[['edge_name', 'travel_seconds']].groupby(by = 'edge_name').describe().iloc[[0, 4, 5, 6, 7, 8, 9, 10, 11, 1, 2, 3, 12],:]
edges_time_grouped['travel_seconds'].sort_values('mean', ascending = False)
plt.figure(figsize=(15, 10))
sns.boxplot(data=edges_pd, x = 'edge_name', y = 'travel_seconds', order = metro_order, palette = metro_colors).set_title('Travel time by Metro Line (seconds)')
plt.show()
# Checking if data is normally distributed with a significance level (alpha) of 0.05.
# H0: the data is normally distributed
import scipy
alpha = 0.05
statistic, pvalue = scipy.stats.normaltest(edges_pd[['travel_seconds']])
pvalue[0]
metro_lines_to_use = edges_pd.edge_name.unique()[:-1]
metro_lines_to_use = np.delete(metro_lines_to_use, 8)  # Removing 'R' line
for i in metro_lines_to_use:
    _, pvalue = scipy.stats.levene(edges_pd[edges_pd.edge_name == '12'][['travel_seconds']],
                                   edges_pd[edges_pd.edge_name == i][['travel_seconds']])
    print('Line 12 and Line %s' % i)
    print('p-value: %f' % pvalue)
    print('p-value < %f ?: %r' % (alpha, pvalue[0] < alpha))
    print('-----------------------------------------------')
for i in metro_lines_to_use:
    _, pvalue = scipy.stats.mannwhitneyu(edges_pd[edges_pd.edge_name == '12'][['travel_seconds']],
                                         edges_pd[edges_pd.edge_name == i][['travel_seconds']],
                                         use_continuity=False, alternative='greater')
    print('Line 12 and Line %s' % i)
    print('p-value: %f' % pvalue)
    print('p-value < %f ?: %r' % (alpha, pvalue < alpha))
    print('-----------------------------------------------')
plt.figure(figsize=(15, 10))
sns.lmplot(x = 'distance_meters', y = 'speed_ms', col = 'edge_name', data = edges_pd, col_wrap = 4, size = 5,
hue = 'edge_name',
hue_order = metro_order,
palette = metro_colors,
col_order = metro_order)
plt.show()
plt.figure(figsize=(15, 10))
sns.regplot(data=edges_pd, x = 'distance_meters', y = 'speed_ms', fit_reg = True, scatter = True, scatter_kws={'facecolors': metro_colors}, line_kws={'color': 'black'})
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Top 35 stations with more neighbour stations
Step2: Neighbours' count histogram
Step3: Most of the stations are connected to two other stations.
Step4: Top 20 most important (according to Closeness Centrality algorithm) Metro stations are shown
Step5: Another metric to have in mind
Step6: Top 20 most important (according to Betweenness Centrality algorithm) Metro stations are shown
Step7: It seems Gregorio Marañón is quite important again, and thanks to the Betweenness Centrality we can say Gregorio Marañón is the Metro station that controls the most the Metro network because more information (people) can pass through it than on any other node (station).
Step8: As you can see that was a long list. However, there are Metro Stations which are more critical than others. For example, if Puerta del Sur would stop working, none of the Metro Sur (Line 12) stations could reach any Madrid Metro Station because Puerta del Sur is the only one station that connects the Line 12 with the rest of the Metro network.
Step9: Top 20 most critical articulation Metro stations
Step10: As you can see, Casa de Campo is the most critical Metro Station because removing it...
Step11: Top 20 most important Metro stations based on Eigenvector Centrality.
Step12: This looks interesting
Step13: Top 20 further edges (connections between stations)
Step14: Top 20 closest edges (connections between stations)
Step15: Exploring Metro lines distances and travel times. First let's take a look at distances.
Step16: Mean distance per edge (stations connections) per line
Step17: It's clear line 8 is the line with the highest mean distance between stations.
Step18: "Surprise"
Step19: Because p-value < alpha, the null hypothesis can be rejected
Step20: That means we cannot perform some types of hypothesis tests because some of them assume the data is normally distributed (among other conditions, like the equality of variances among groups. We tested that on the previous chunk and for every pair of groups we cannot reject nor accept H0 (that variances are equal)).
Step21: Take a look at the Line 12 and Line 8 case. The null hypothesis cannot be rejected in favor of the alternative hypothesis. If you see the box-plot, it's hard to say if the medians are equal or not, they are too close but we can't be sure. This test proves then that we can't say Line 12 has the worst travel times. However, I must say that, from my experience, using Metro Sur (Line 12) can be a real pain in the ass because it's too slow and I can't say the same about Line 8.
|
1,904
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def generate_prob(n=100, p=0.8, eps=0.2):
    """
    @param: n (int): number of samples
    @param: p (float): success probability of the Bernoulli distribution
    @param: eps (float): tolerated deviation (bias)
    """
    sample = [np.abs(np.random.binomial(n=1, p=p, size=n).mean() - p) for i in range(10000)]  # 10000 simulation runs
    Prob = (np.array(sample) >= eps).mean()
    return Prob
generate_prob(n = 100, p = 0.8, eps = 0.2)
import matplotlib.pyplot as plt
import math
%matplotlib inline
epsilon = np.linspace(start=0.02, stop=0.25, num=200)
various_eps = [generate_prob(n = 100, p = 0.8, eps = eps) for eps in epsilon] # fix n=100 and p = 0.8
upper_Chebyshev = [1.0 / (4 * 100 * eps ** 2) for eps in epsilon]
upper_Chernoff = [2 * math.exp(-2 * 100 * eps ** 2) for eps in epsilon]
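# For reference, the two bounds being plotted for the sample mean of n Bernoulli(p) trials:
#   Chebyshev:          P(|p_hat - p| >= eps) <= p(1-p) / (n * eps^2) <= 1 / (4 * n * eps^2)
#   Chernoff/Hoeffding: P(|p_hat - p| >= eps) <= 2 * exp(-2 * n * eps^2)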
plt.semilogy(epsilon, various_eps, 'r-',label="Simulation")
plt.semilogy(epsilon, upper_Chebyshev, 'g-',label="Chebyshev upper bound")
plt.semilogy(epsilon, upper_Chernoff, 'b-',label="Chernoff upper bound")
plt.legend()
plt.xlabel("$\epsilon$")
plt.ylabel("$Prob$")
plt.title("$n = 100, p = 0.8$")
plt.savefig("./img/dynamic_eps.png", dpi=200)
plt.show()
n_range = list(range(10,201))
various_n = [generate_prob(n = n, p = 0.8, eps = 0.14) for n in n_range] # fix eps = 0.14 and p = 0.8
upper_Chebyshev = [1.0 / (4 * n * 0.14 ** 2) for n in n_range]
upper_Chernoff = [2 * math.exp(-2 * n * 0.14 ** 2) for n in n_range]
plt.semilogy(n_range, various_n, 'r-',label="Simulation")
plt.semilogy(n_range, upper_Chebyshev, 'g-',label="Chebyshev upper bound")
plt.semilogy(n_range, upper_Chernoff, 'b-',label="Chernoff upper bound")
plt.legend()
plt.xlabel("$n$")
plt.ylabel("$Prob$")
plt.title("$\epsilon = 0.14, p = 0.8$")
plt.savefig("./img/dynamic_n.png", dpi=200)
plt.show()
p_range = np.linspace(start=0.01, stop=0.99, num=200)
various_p = [generate_prob(n = 100, p = p, eps = 0.14) for p in p_range] # fix n=100 and eps = 0.14
upper_Chebyshev = [p * (1 - p) / (100 * 0.14 ** 2) for p in p_range]
upper_Chernoff = [2 * math.exp(-2 * 100 * 0.14 ** 2) for p in p_range]
plt.semilogy(p_range, various_p, 'r-',label="Simulation")
plt.semilogy(p_range, upper_Chebyshev, 'g-',label="Chebyshev upper bound")
plt.semilogy(p_range, upper_Chernoff, 'b-',label="Chernoff upper bound")
plt.legend()
plt.xlabel("$p$")
plt.ylabel("$Prob$")
plt.title("$\epsilon = 0.14, n = 100$")
plt.savefig("./img/dynamic_p.png", dpi=200)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Plots
Step3: Varying $\epsilon$
Step4: Varying $n$
Step5: Varying $p$
|
1,905
|
<ASSISTANT_TASK:>
Python Code:
# The following code is adopted from Pat's Rolling Rain N-Year Threshold.pynb
# Loading in hourly rain data from CSV, parsing the timestamp, and adding it
# as an index so it's more useful
rain_df = pd.read_csv('data/ohare_full_precip_hourly.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
# Data does not really exist before 1973, data between 11/1991 and 8/1992 is all 0s...
rain_df = rain_df[(rain_df['datetime'] > '1973') &
((rain_df['datetime'] < '19911101') | (rain_df['datetime'] > '19920801'))]
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H').max().dropna()
# We take maximum, because when there are multiple reports within the same hour,
# the values are *accumulated* (and then reset at the next full hour). Thus we
# want to take the maximum reading from any given hour.
ax = chi_rain_series.resample('24H').sum().plot()
_ = ax.set_title('Daily Precipitation (in) at O\'Hare')
n_year_threshes = pd.read_csv('data/n_year_definitions.csv')
n_year_threshes = n_year_threshes.set_index('Duration')
n_year_threshes
chi_rain_series['20110130':'20110204'].resample('24H').sum().plot(figsize=(6,4))
print('Total of {:.2f} inches of precip reported.'.format(
chi_rain_series['20110130':'20110204'].sum()
))
dur_str_to_hours = {
'5-min':5/60.0,
'10-min':10/60.0,
'15-min':15/60.0,
'30-min':0.5,
'1-hr':1.0,
'2-hr':2.0,
'3-hr':3.0,
'6-hr':6.0,
'12-hr':12.0,
'18-hr':18.0,
'24-hr':24.0,
'48-hr':48.0,
'72-hr':72.0,
'5-day':5*24.0,
'10-day':10*24.0
}
def plot_thresh(duration_str, n_years, ax=None):
'''
For a given duration and a given n, the number of years, plot the
rolling amount of rain of the given duration, and the amount
of rain in the given duration that constitutes an n-year storm.
duration_str: duration as a string, see index of n_year_threshes
n_years : number of years, must be column of n_year_threshes
ax : optional, matplotlib axis object on which to plot
>>> plot_thresh('48-hr', 100)
>>> plot_thresh('5-day', 10)
'''
global rain_df
global n_year_threshes
global dur_str_to_hours
if ax is None:
ax = plt.gca()
thresh = n_year_threshes.loc[duration_str, str(n_years) + '-year']
duration = dur_str_to_hours[duration_str]
duration = max(duration, 1) # cannot upsample to more frequent than hourly
# TODO: want to throw warning?
# Create plot
rain_line = chi_rain_series.rolling(window=int(duration), min_periods=0).sum().plot(
ax=ax, color=sns.color_palette()[0])
x_limits = ax.get_xlim()
ax.plot(x_limits, [thresh, thresh], color=sns.color_palette()[1])
ax.set_ylim([0, ax.get_ylim()[1]])
ax.legend(['moving cumulative rain',
str(n_years) + '-year ' + duration_str + ' threshold'],
loc='best')
return ax
ax = plot_thresh('24-hr', 100)
def get_independent_storms():
'''
TODO
See page 21 of http://www.isws.illinois.edu/atmos/statecli/PDF/b70-all.pdf,
Section 3: Independence of Observations, also quoted above.
'''
pass
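# An illustrative sketch only -- NOT Bulletin 70's definition of independence. It assumes a
# simple rule: runs of rainfall separated by at least `min_dry_hours` consecutive dry hours
# are treated as independent storms.
def get_independent_storms_example(hourly_series, min_dry_hours=24):
    storms, current, dry_run = [], [], 0
    for timestamp, precip in hourly_series.items():
        if precip > 0:
            current.append((timestamp, precip))
            dry_run = 0
        else:
            dry_run += 1
            if current and dry_run >= min_dry_hours:
                storms.append(current)
                current = []
    if current:
        storms.append(current)
    return storms
# e.g. storms = get_independent_storms_example(chi_rain_series)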
ps = np.linspace(0,1,1000)
f, (ax1, ax2) = plt.subplots(1, 2, sharex=True, figsize=(11,4))
ax1.set_title('Quantiles')
ax1.plot(chi_rain_series.rolling(window=int(24), min_periods=0).sum().quantile(ps))
ax1.hold(True)
ax1.plot(chi_rain_series.resample('24H').sum().dropna().quantile(ps), '--')
ax2.set_title('Difference')
junk = ax2.plot(ps,
chi_rain_series.rolling(window=int(24), min_periods=0).sum().quantile(ps) -
chi_rain_series.resample('24H').sum().dropna().quantile(ps)
)
# We hope that these will match well. One is generated using (somewhat) independent
# observations, and the other is generated using highly dependent observations.
def new_recurrence_intervals():
'''
TODO
'''
global rain_df
global n_year_threshes
global dur_str_to_hours
new_recur_ints = n_year_threshes.ix[:-4,:].copy(deep=True)
for recurrence in new_recur_ints.columns:
for dur_str in new_recur_ints.index:
thresh = n_year_threshes.ix[dur_str, recurrence]
dur = dur_str_to_hours[dur_str]
rain = chi_rain_series.rolling(window=int(dur), min_periods=0).sum()
# 24 * 365.25 = 8766 hours per year
# rain.size * dur is the number of total hours we are looking at, so
# rain.size * dur / 8766 is total number of years in dataset
new_recur_ints.ix[dur_str, recurrence] = rain.size * dur / 8766. / sum(rain > thresh)
return new_recur_ints
new_recur_ints = new_recurrence_intervals()
print(new_recur_ints.to_string())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rainfall Equivalent
Step2: Plotting Rainfall vs. n-year storm threshold
Step3: Calculate new n-year storm definitions
|
1,906
|
<ASSISTANT_TASK:>
Python Code:
import os
import math
import torch
import pyro
import pyro.distributions as dist
from pyro.infer.autoguide import AutoDelta
from pyro.optim import Adam
from pyro.infer import SVI, Trace_ELBO, config_enumerate
from pyro.contrib.tracking.extended_kalman_filter import EKFState
from pyro.contrib.tracking.distributions import EKFDistribution
from pyro.contrib.tracking.dynamic_models import NcvContinuous
from pyro.contrib.tracking.measurements import PositionMeasurement
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
dt = 1e-2
num_frames = 10
dim = 4
# Continuous model
ncv = NcvContinuous(dim, 2.0)
# Truth trajectory
xs_truth = torch.zeros(num_frames, dim)
# initial direction
theta0_truth = 0.0
# initial state
with torch.no_grad():
xs_truth[0, :] = torch.tensor([0.0, 0.0, math.cos(theta0_truth), math.sin(theta0_truth)])
for frame_num in range(1, num_frames):
# sample independent process noise
dx = pyro.sample('process_noise_{}'.format(frame_num), ncv.process_noise_dist(dt))
xs_truth[frame_num, :] = ncv(xs_truth[frame_num-1, :], dt=dt) + dx
# Measurements
measurements = []
mean = torch.zeros(2)
# no correlations
cov = 1e-5 * torch.eye(2)
with torch.no_grad():
# sample independent measurement noise
dzs = pyro.sample('dzs', dist.MultivariateNormal(mean, cov).expand((num_frames,)))
# compute measurement means
zs = xs_truth[:, :2] + dzs
def model(data):
# a HalfNormal can be used here as well
R = pyro.sample('pv_cov', dist.HalfCauchy(2e-6)) * torch.eye(4)
Q = pyro.sample('measurement_cov', dist.HalfCauchy(1e-6)) * torch.eye(2)
# observe the measurements
pyro.sample('track_{}'.format(i), EKFDistribution(xs_truth[0], R, ncv,
Q, time_steps=num_frames),
obs=data)
guide = AutoDelta(model) # MAP estimation
optim = pyro.optim.Adam({'lr': 2e-2})
svi = SVI(model, guide, optim, loss=Trace_ELBO(retain_graph=True))
pyro.set_rng_seed(0)
pyro.clear_param_store()
for i in range(250 if not smoke_test else 2):
loss = svi.step(zs)
if not i % 10:
print('loss: ', loss)
# retrieve states for visualization
R = guide()['pv_cov'] * torch.eye(4)
Q = guide()['measurement_cov'] * torch.eye(2)
ekf_dist = EKFDistribution(xs_truth[0], R, ncv, Q, time_steps=num_frames)
states= ekf_dist.filter_states(zs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's specify the measurements. Notice that we only measure the positions of the particle.
Step2: We'll use a Delta autoguide to learn MAP estimates of the position and measurement covariances. The EKFDistribution computes the joint log density of all of the EKF states given a tensor of sequential measurements.
|
1,907
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import GPy
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
from pylab import *
np.random.seed(113321)
# Prepare the data
N,D,Q = 500, 100, 3
pi = 0.2
# sample from 3 random sine waves
X = np.sin(2*np.pi*(np.random.rand(Q)[None,:]+.5)*(np.linspace(0.,3.,N)[:,None]-np.random.rand(Q)[None,:]))
# set 80% of the values to zeros
X = X*((np.random.rand(N,Q)<pi)*1)
# plot the samples
_ = plot(X)
noise_sigma = 0.1
# Generate a weight matrix
W = np.random.randn(D,Q)
# Project to the observed space plus some Gaussian noise
y = np.dot(X,W.T)+np.random.randn(N,D)*noise_sigma
y = (y-y.mean())/y.std()
# Plot the
_ = plot(y[:,:2])
m = GPy.models.SSGPLVM(y, Q, kernel=GPy.kern.Linear(Q,ARD=True),num_inducing=Q)
# Show the model parameters
m.likelihood.variance = .01
m.X.variance = .1
print m
# Optimize the model parameters
m.optimize('bfgs',messages=1,max_iters=10000)
# Show the model parameters after optimization
print m
print m.variational_prior.pi
# Plot the posterior distribution of X
# The left figure shows the mean and variance (the thickness) of
# the slab part (the gray color shows the other dimensions for comparison)
# The right figure shows the posterior probability of the binary variable for having the slab
a = m.X.plot()
# In order to compare the recovered X, we flipped the sign of the 2nd, 3rd dimensions, and rescale the values.
# The figure shows the absolute difference between the recovered X and the original X
Xcom = m.X.mean.copy()*np.array([1.,-1.,-1.])
Xcom = Xcom/np.abs(Xcom).max(axis=0)
_ = plot(np.abs(Xcom-X))
# Plot the distribution of the recovered X (only 1st and 3rd dimension)
# As shown in the figure, it follows a spike and slab distribution
_ = m.plot_latent()
m = GPy.models.BayesianGPLVM(y, Q, kernel=GPy.kern.Linear(Q,ARD=True),num_inducing=Q)
# Show the model parameters
print m
# Optimize the model parameters
m.optimize('bfgs',messages=1,max_iters=10000)
# Show the model parameters after optimization
print m
# Plot the posterior distribution of X
a = m.X.plot()
# In order to compare the recovered X, we swapped the 2nd and 3rd dimension,
# flipped the sign of the 2nd dimension, and rescale the values.
# The figure shows the absolute difference between the recovered X and the original X
Xcom = np.dot(m.X.mean.copy(),np.array([[1.,0.,0.],[0.,0.,1.],[0.,-1.,0.]]))
Xcom = Xcom/np.abs(Xcom).max(axis=0)
_ = plot(np.abs(Xcom-X))
# Plot the distribution of the recovered X (only 1st and 2nd dimension)
# As shown in the figure, it follows a spike and slab distribution
_ = m.plot_latent()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The obersved data $Y$ is generated by projecting the samples of the 3 sine waves onto a 100D space with a randomly generated weight matrix $W$. The first two dimensions of $Y$ is shown in the figure.
Step2: Applying Spike and Slab GP-LVM
Step3: Apply Bayesian GP-LVM
|
1,908
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as plt
import seaborn as sns
users = pd.read_csv('timeseries_users.csv')
users.head()
events = pd.read_csv('timeseries_events.csv')
events.index = pd.to_datetime(events['event_date'], format='%Y-%m-%d %H:%M:%S')
del events['event_date']
events.tail()
users.describe()
# 2. Check for NaNs:
print(users.isnull().values.any())
print(events.isnull().values.any())
# 3. Check for duplicated entries
users_duplicated = users[users.duplicated() == True ]
print('Users: duplicated entries {}'.format(len(users_duplicated)))
events_duplicated = events[events.duplicated() == True ]
print('Events: duplicated entries {}'.format(len(events_duplicated)))
# 1. count all events for each user:
events_per_user = events.groupby('user_id').size()
events_per_user.head()
# Select only 30+ male users:
for user_id in events_per_user.index:
if user_id in users['user_id'].values:
user = users[ users['user_id'] ==user_id]
age = user['age'].values[0]
gender = user['gender'].values[0]
if ( age < 30 ) or (gender == 'f'):
del events_per_user[user_id]
else:
del events_per_user[user_id]
print(type(events_per_user))
events_per_user.values
sns.set(style="ticks")
# Show the results of a linear regression within each dataset
ax = sns.distplot(events_per_user.values)
ax.set_title('Event per male users of age 30+ old')
ax.set_ylabel('Normalized distribution')
ax.set_xlabel('Counts')
def get_inter_events(events_per_user):
    """From a list of events for a given user, gets the list of inter-event times in days."""
    nanosecond_to_days = float(1.15741e-14)
    inter_times = []
    for event_index in range(1, len(events_per_user)):
        time1 = events_per_user[event_index - 1]
        time2 = events_per_user[event_index]
        time_diff = time2 - time1
        # Convert from nanoseconds to days:
        time_diff = int(float(time_diff) * nanosecond_to_days)
        inter_times.append(time_diff)
    return inter_times
# Cycle by user
inter_event_intervals=[]
for user_id in users['user_id'].values:
# Get events for this user:
events_per_user = events[events['user_id']==user_id].sort_index()
events_per_user = events_per_user.index.values
if len(events_per_user) > 1:
inter_event_intervals_this = get_inter_events(events_per_user)
inter_event_intervals = list(inter_event_intervals)+ list(inter_event_intervals_this)
inter_event_intervals=np.array(inter_event_intervals)
type(inter_event_intervals)
print(len(inter_event_intervals))
print(inter_event_intervals.shape)
sns.set(style="ticks")
# Show the results of a linear regression within each dataset
ax = sns.distplot(inter_event_intervals)
ax.set_ylim(0,0.005)
ax.set_title('Inter-event intervals')
ax.set_ylabel('Normalized distribution')
ax.set_xlabel('Inter-event interval (days)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='data_exploration'></a>
Step2: Users' ages range from 24 to 63, with a mean of 41 years.
Step3: Many duplicated entries are found in the events dataset.
Step5: <a id = 'inter_events'></a>
|
1,909
|
<ASSISTANT_TASK:>
Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip uninstall -y telluride-decoding
# To get test versions and install them on this machine.
# !pip install mock pyedflib # Need to list these since they are not on test.pypi
# !pip install --index-url https://test.pypi.org/project telluride-decoding
# To install the latest released version:
!pip install telluride-decoding
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tf.compat.v1.enable_v2_behavior()
from telluride_decoding import brain_data
from telluride_decoding import brain_model
from telluride_decoding import decoding
from telluride_decoding import regression
from telluride_decoding import regression_data
# Reset the plot library backend to the default.
import matplotlib
matplotlib.use('module://ipykernel.pylab.backend_inline')
# Force flags to be parsed (none here) to set flag defaults. There are some parts
# of the toolbox that still use FLAGS, but shouldn't :-(
from absl import flags
flags.FLAGS(['colab']) # Cause flags library to set the default.
telluride_data = regression_data.RegressionDataTelluride4()
cache_dir = '/tmp'
telluride4_url = 'https://drive.google.com/uc?id=0ByZjGXodIlspWmpBcUhvenVQa1k'
if not telluride_data.is_data_local(cache_dir):
print('Downloading the Telluride4 data...')
telluride_data.download_data(telluride4_url, cache_dir)
tf_dir = '/tmp/telluride_tf'
!mkdir {tf_dir}
if not telluride_data.is_data_ingested(tf_dir):
print('Ingesting the Telluride4 data...')
telluride_data.ingest_data(cache_dir, tf_dir, 100)
!ls {tf_dir}
!cat {tf_dir}/README.txt
# If you have problems or get unexplained errors, you might want to turn on
# log messages with these calls. Change the False to True to enable the logging.
if False:
from absl import logging
logging.set_verbosity(logging.INFO)
logging.set_stderrthreshold('info')
logging.info('Testing')
telluride4_options = decoding.DecodingOptions()
telluride4_options.input_field = 'eeg'
telluride4_options.output_field = 'intensity'
telluride4_options.input2_field = 'intensity'
telluride4_options.tfexample_dir = tf_dir
telluride4_options.dnn_regressor = 'cca'
telluride4_options.post_context = 21
telluride4_options.input2_pre_context = 15
telluride4_options.input2_post_context = 15
telluride4_options.test_metric = 'cca_pearson_correlation_first'
telluride4_options.shuffle_buffer_size = 0 # No need when training a CCA model
telluride4_options.cca_dimensions = 5
print(telluride4_options.experiment_parameters('\n'))
telluride4_data = regression.get_brain_data_object(telluride4_options)
# Get the actual TF dataset, as an example of the full TDT data object.
telluride4_dataset = telluride4_data.create_dataset('train')
# Create the brain model (CCA as specified above.)
brain_model = decoding.create_brain_model(telluride4_options, telluride4_dataset)
# Now train the regressor. Since this regressor is built with CCA, only one
# pass through the data is needed to collect the statistics and create the model.
train_results, test_results = decoding.train_and_test(telluride4_options, telluride4_data, brain_model)
test_results
telluride4_regression = regression.Telluride4CCA(telluride4_options)
reg_values = np.power(10.0, np.arange(-3, 2, 1))
results = telluride4_regression.jackknife_over_regularizations(telluride4_options,
reg_values)
results
reg_values = list(results.keys())
result_means = [results[k][0] for k in reg_values]
result_stddev = [results[k][1] for k in reg_values]
plt.errorbar(reg_values, result_means, result_stddev)
matplotlib.pyplot.xscale('log')
plt.xlabel('Regularization Value')
plt.ylabel('Jackknifed Correlation');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the right software
Step2: Decode the Telluride4 EEG Data
Step3: Run complete jackknife test
|
1,910
|
<ASSISTANT_TASK:>
Python Code:
synden = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
if r[-2] != 0:
synden[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = np.float(r[-1])/np.float(r[-2])
x_sum = [0] * len(synden[:,0,0])
for i in range(len(synden[:,0,0])):
x_sum[i] = sum(sum(synden[i,:,:]))
y_sum = [0] * len(synden[0,:,0])
for i in range(len(synden[0,:,0])):
y_sum[i] = sum(sum(synden[:,i,:]))
z_sum = [0] * len(synden[0,0,:])
for i in range(len(synden[0,0,:])):
z_sum[i] = sum(sum(synden[:,:,i]))
unique_x = np.unique(sorted_x)
unique_y = np.unique(sorted_y)
unique_z = np.unique(sorted_z)
plt.figure()
plt.figure(figsize=(28,7))
plt.subplot(131)
plt.bar(unique_x, x_sum, 1)
plt.xlim(450, 3600)
plt.ylabel('density in synapses/voxel',fontsize=20)
plt.xlabel('x-coordinate',fontsize=20)
plt.title('Total Density across Each X-Layer',fontsize=20)
plt.subplot(132)
plt.bar(unique_y, y_sum, 1)
plt.xlim(1570, 3190)
plt.ylabel('density in synapses/voxel',fontsize=20)
plt.xlabel('y-coordinate',fontsize=20)
plt.title('Total Density across Each Y-Layer',fontsize=20)
plt.subplot(133)
plt.bar(unique_z, z_sum, 1)
plt.ylabel('density in synapses/voxel',fontsize=20)
plt.xlabel('z-coordinate',fontsize=20)
plt.title('Total Density across Each Z-Layer',fontsize=20)
from scipy.signal import argrelextrema
y_local_mins_idcs = np.asarray(argrelextrema(np.asarray(y_sum), np.less))
y_local_mins = [0]*len(y_local_mins_idcs)
for i, y in enumerate(y_local_mins_idcs):
y_local_mins[i] = unique_y[y_local_mins_idcs[i]]
print "Y local minima: ", y_local_mins[0][3:len(y_local_mins[0])]
import urllib2
import sklearn.mixture as mixture
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:]
csv_clean = csv[np.logical_not(csv[:,3] == 0)]
csv_clean = csv_clean[csv_clean[:,0] >= 409]
csv_clean = csv_clean[csv_clean[:,0] <= 3529]
csv_clean = csv_clean[csv_clean[:,1] >= 1564]
csv_clean = csv_clean[csv_clean[:,1] <= 3124]
csv_clean_no_ratio = csv_clean
csv_clean[:,4] = np.divide(csv_clean[:,4],csv_clean[:,3])
csv_clean[:,4] = csv_clean[:,4]*(64**3)
# pull out y-layers
boundaries = y_local_mins[0][2:len(y_local_mins[0])]
for i in range (0,len(boundaries)-1):
y_layer = csv_clean[np.logical_and(csv_clean[:,1] >= boundaries[i],csv_clean[:,1] <= boundaries[i+1])]
# run bic on the y-layer
bics = []
max_clusters = 20
for i in range(1,30):
bic = np.array([])
i = np.array(range(1, max_clusters))
for idx in range(1, max_clusters):
gmm = mixture.GMM(n_components=idx, n_iter=1000, covariance_type='diag')
gmm.fit(y_layer)
bic = np.append(bic, gmm.bic(y_layer))
bics.append(bic)
bic = np.asarray(bics)
bic_mean = np.max(bic,0)
plt.figure(figsize=(7,7))
plt.plot(i, 1.0/bic_mean)
plt.title('BIC')
plt.ylabel('score')
plt.xlabel('number of clusters')
plt.show()
from mpl_toolkits.mplot3d import axes3d
import urllib2
# Original
total_unmasked = 0
total_syn = 0
for r in rows:
total_unmasked = total_unmasked + r[-2]
total_syn = total_syn + r[-1]
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
def synapses_over_unmasked(row):
s = (row[4]/row[3])*(64**3)
return [row[0], row[1], row[2], s]
syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)
syn_normalized = syn_unmasked
a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)
# Spike
spike = a[np.logical_and(a <= 0.0015, a >= 0.0012)]
print "Average Density: ", np.mean(spike)
print "Std Deviation: ", np.std(spike)
# Histogram
n, bins, _ = plt.hist(spike, 2000)
plt.title('Histogram of Synaptic Density')
plt.xlabel('Synaptic Density (syn/voxel)')
plt.ylabel('frequency')
bin_max = np.where(n == n.max())
print 'maxbin', bins[bin_max][0]
bin_width = bins[1]-bins[0]
syn_normalized[:,3] = syn_normalized[:,3]/(64**3)
spike = syn_normalized[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]
print "There are ", len(spike), " points in the 'spike'"
import sklearn.mixture as mixture
bics = []
max_clusters = 20
for i in range(1,30):
bic = np.array([])
i = np.array(range(1, max_clusters))
for idx in range(1, max_clusters):
gmm = mixture.GMM(n_components=idx, n_iter=1000, covariance_type='diag')
gmm.fit(spike)
bic = np.append(bic, gmm.bic(spike))
bics.append(bic)
bic = np.asarray(bics)
bic_mean = np.max(bic,0)
plt.figure(figsize=(7,7))
plt.plot(i, 1.0/bic_mean)
plt.title('BIC')
plt.ylabel('score')
plt.xlabel('number of clusters')
plt.show()
from scipy import ndimage as nd
smoothed = nd.filters.gaussian_filter(data_thresholded,1)
a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)
# Histogram
n, bins, _ = plt.hist(a, 2000)
plt.title('Histogram of Synaptic Density')
plt.xlabel('Synaptic Density (syn/voxel)')
plt.ylabel('frequency')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Looking at the y-layer
Step2: 3. How are synapses distributed within these possible cortex layers? Are they uniform?
Step3: Surprisingly, it looks like the y-layers derived from our bounds above are not very uniformly distributed at all. From my understanding, we would expect some uniformity in the cortical layers. This definitely requires further investigation into what the synapse density within the cortex layers is expected to look like.
Step4: Now we know that the "spike" occurs at a synaptic density value of about 0.0013149.
Step5: Are the points in the spike uniformly distributed?
Step6: It does look as if the optimal number of clusters for the spike is 0. Thus, the spike is likely uniformly distributed, and we suspect it may be noise.
|
1,911
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
import itertools
import re
import json
%matplotlib inline
from random import randint
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import gzip
from math import log, e
from scipy import stats
from math import sqrt
mdf = pd.read_csv('metric_codes.tsv', delimiter="\t", index_col=0, names=['Code', 'Metric'], header=None)
mdf.head(30)
mdf.tail(30)
dtypes={'Genome': object, 'Metric':np.int64, 'Accuracy':np.float64, 'Precision':np.float64, 'Recall':np.float64, 'Specificity':np.float64, 'F1 score':np.float64}
measures = pd.read_csv("phispy_metrics_tptn.tsv", delimiter="\t", dtype=dtypes, header=None, names=["Genome", "Metric", "Accuracy", "Precision", "Recall", "Specificity", 'F1 score'])
measures.head()
mdf[mdf['Metric'].str.contains('none') & mdf['Metric'].str.contains('noannotation') & mdf['Metric'].str.contains('pg0')]
measures[(measures['Metric'] == 45)].head()
mdf[~mdf['Metric'].str.contains('none') & ~mdf['Metric'].str.contains('noannotation') & ~mdf['Metric'].str.contains('pg0') & mdf['Metric'].str.contains('orf_length_med') & mdf['Metric'].str.contains('shannon_slope') & mdf['Metric'].str.contains('phmms')]
fig, ax = plt.subplots(figsize=(22,16))
m_an = measures[(measures['Metric'] == 45) | (measures['Metric'] == 464)]
sns.violinplot(ax = ax, x="Metric", y="F1 score", data=m_an, scale="count" )
sns.stripplot(ax = ax, x="Metric", y="F1 score", data=m_an, jitter=True, color="Black")
ax.set_xticklabels(['None', 'All'])
m_some = measures[(measures['Metric'] == 254) | (measures['Metric'] == 0) | (measures['Metric'] == 223)
| (measures['Metric'] == 57)
| (measures['Metric'] == 464)
]
fig, ax = plt.subplots(figsize=(22,16))
sns.violinplot(ax = ax, x="Metric", y="F1 score", data=m_some, scale="count" )
sns.stripplot(ax = ax, x="Metric", y="F1 score", data=m_some, jitter=True, color="Black")
ax.set_xticklabels(list(mdf.iloc[m_some['Metric'].unique(),0]), rotation=45)
m_alone = measures[((measures['Metric'] >8) & (measures['Metric'] <17))
| (measures['Metric'] == 464)
]
fig, ax = plt.subplots(figsize=(22,16))
labels = list(mdf.iloc[m_alone['Metric'].unique(),0])
labels[-1] = "All"
labels = [i.replace('none ', '') for i in labels]
sns.violinplot(ax = ax, x="Metric", y="F1 score", data=m_alone, scale="count" )
sns.stripplot(ax = ax, x="Metric", y="F1 score", data=m_alone, jitter=True, color="Black")
ax.set_xticklabels(labels)
fig.savefig('metrics_alone.png')
st = measures[['Metric', 'F1 score']].groupby('Metric').describe()
st.head()
fig, ax = plt.subplots(figsize=(22,16))
sns.barplot(ax = ax, x=st.index.values, y=('F1 score', 'mean'), data=st)
f_good = st[st[('F1 score', 'mean')] > 0.84]
f_good
pd.merge(f_good, mdf, left_index=True, right_index=True)
f_bad = st[st[('F1 score', 'mean')] < 0.5]
f_bad_code = pd.merge(f_bad, mdf, left_index=True, right_index=True)
f_bad_code
max(st[('F1 score', 'mean')])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the metrics
Step2: Read the results file.
Step3: Find the option with no metrics
Step4: But no metrics == no phages!
Step5: Find the option with all metrics
Step6: Plot none and all metrics
Step7: Plot some selected measures
Step8: Plot each metric alone
Step9: Summary of the metrics
Step10: Plot the different metrics
Step11: Find some good, and bad, metrics
|
1,912
|
<ASSISTANT_TASK:>
Python Code:
import mltoolbox.image.classification as model
from google.datalab.ml import *
bucket = 'gs://' + datalab_project_id() + '-lab'
preprocess_dir = bucket + '/flowerpreprocessedcloud'
model_dir = bucket + '/flowermodelcloud'
staging_dir = bucket + '/staging'
!gsutil mb $bucket
train_set = CsvDataSet('gs://cloud-datalab/sampledata/flower/train1000.csv', schema='image_url:STRING,label:STRING')
preprocess_job = model.preprocess_async(train_set, preprocess_dir, cloud={'num_workers': 10})
preprocess_job.wait() # Alternatively, you can query the job status by train_job.state. The wait() call blocks the notebook execution.
train_job = model.train_async(preprocess_dir, 30, 1000, model_dir, cloud=CloudTrainingConfig('us-central1', 'BASIC'))
train_job.wait() # Alternatively, you can query the job status by train_job.state. The wait() call blocks the notebook execution.
tb_id = TensorBoard.start(model_dir)
Models().create('flower')
ModelVersions('flower').deploy('beta1', model_dir)
images = [
'gs://cloud-ml-data/img/flower_photos/daisy/15207766_fc2f1d692c_n.jpg',
'gs://cloud-ml-data/img/flower_photos/tulips/6876631336_54bf150990.jpg'
]
# set resize=True to avoid sending large data in prediction request.
model.predict('flower.beta1', images, resize=True, cloud=True)
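# Online prediction is in alpha, so the very first call may fail while the service warms up;
# a minimal retry sketch (not part of the original notebook):
import time
for attempt in range(3):
    try:
        model.predict('flower.beta1', images, resize=True, cloud=True)
        break
    except Exception:
        time.sleep(10)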
import google.datalab.bigquery as bq
bq.Dataset('flower').create()
eval_set = CsvDataSet('gs://cloud-datalab/sampledata/flower/eval670.csv', schema='image_url:STRING,label:STRING')
batch_predict_job = model.batch_predict_async(eval_set, model_dir, output_bq_table='flower.eval_results_full',
cloud={'temp_location': staging_dir})
batch_predict_job.wait()
%%bq query --name wrong_prediction
SELECT * FROM flower.eval_results_full WHERE target != predicted
wrong_prediction.execute().result()
ConfusionMatrix.from_bigquery('flower.eval_results_full').plot()
%%bq query --name accuracy
SELECT
target,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END) as correct,
COUNT(*) as total,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END)/COUNT(*) as accuracy
FROM
flower.eval_results_full
GROUP BY
target
accuracy.execute().result()
%%bq query --name logloss
SELECT feature, AVG(-logloss) as logloss, count(*) as count FROM
(
SELECT feature, CASE WHEN correct=1 THEN LOG(prob) ELSE LOG(1-prob) END as logloss
FROM
(
SELECT
target as feature,
CASE WHEN target=predicted THEN 1 ELSE 0 END as correct,
target_prob as prob
FROM flower.eval_results_full))
GROUP BY feature
FeatureSliceView().plot(logloss)
ModelVersions('flower').delete('beta1')
Models().delete('flower')
!gsutil -m rm -r {preprocess_dir}
!gsutil -m rm -r {model_dir}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocess
Step2: Train
Step3: Check your job status by running (replace the job id from the one shown above)
Step4: Predict
Step5: Online prediction is currently in alpha; retrying helps to ensure a warm start if the first call fails.
Step6: Batch Predict
Step7: Clean up
|
1,913
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
from skdaccess.framework.param_class import *
from skdaccess.astro.spectra.stream import DataFetcher
ap_spectra_url = AutoList([
'https://dr14.sdss.org/sas/dr14/eboss/spectro/redux/v5_10_0/spectra/lite/4055/spec-4055-55359-0596.fits',
])
df = DataFetcher([ap_spectra_url])
dw = df.output()
label, data = next(dw.getIterator())
header = dw.info(label)
plt.plot(10**data['loglam'], data['flux']);
plt.title(label.split('/')[-1]);
plt.ylabel('Flux ({})'.format(header['BUNIT']));
plt.xlabel('Wavelength (Ångströms)');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data fetcher
Step2: Specify list of SDSS spectra URLs to retrieve
Step3: Create data fetcher
Step4: Access data and metadata
Step5: Plot spectra
|
1,914
|
<ASSISTANT_TASK:>
Python Code:
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
Q = cvx.matrix([[0.,0.],[0.,0.]])
p = cvx.matrix([-1., 4.])
G = cvx.matrix([[-3., 1., 0.],[1., 2., -1.]])
h = cvx.matrix([6., 4., 3.])
sol = cvx_solvers.qp(Q, p, G, h)
print(sol['x'])
import scipy.optimize as opt
import mystic.models
result = opt.minimize(mystic.models.zimmermann, [10., 1.], method='powell')
print(result.x)
import scipy.optimize as opt
import mystic.models
result = opt.minimize(mystic.models.fosc3d, [-5., 0.5], method='powell')
print(result.x)
import mystic
import mystic.models
result = mystic.solvers.fmin_powell(mystic.models.peaks, [0., -2.], bounds=[(-5.,5.)]*2)
print(result)
import numpy as np
import scipy.stats as stats
from mystic.solvers import fmin_powell
from mystic import reduced
# Define the function to fit.
def function(coeffs, x):
a,b,f,phi = coeffs
return a * np.exp(-b * np.sin(f * x + phi))
# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, np.pi/4]
print("target parameters: {}".format(true_params))
x = np.linspace(0, 2*np.pi, 25)
exact = function(true_params, x)
noisy = exact + 0.3*stats.norm.rvs(size=len(x))
# Define an objective that fits against the noisy data
@reduced(lambda x,y: abs(x)+abs(y))
def objective(coeffs, x, y):
return function(coeffs, x) - y
# Use curve_fit to estimate the function parameters from the noisy data.
initial_guess = [1,1,1,1]
args = (x, noisy)
estimated_params = fmin_powell(objective, initial_guess, args=args)
print("solved parameters: {}".format(estimated_params))
# Differential Evolution solver
from mystic.solvers import DifferentialEvolutionSolver2
# Chebyshev polynomial and cost function
from mystic.models.poly import chebyshev8, chebyshev8cost
from mystic.models.poly import chebyshev8coeffs
# tools
from mystic.termination import VTR, CollapseAt, Or
from mystic.strategy import Best1Exp
from mystic.monitors import VerboseMonitor
from mystic.tools import random_seed
from mystic.math import poly1d
import numpy as np
if __name__ == '__main__':
print("Differential Evolution")
print("======================")
ndim = 9
random_seed(123)
# configure monitor
stepmon = VerboseMonitor(50,50)
# build a constraints function
def constraints(x):
x[-1] = 1.
return np.round(x)
stop = Or(VTR(0.0001), CollapseAt(0.0, generations=2))
# use DE to solve 8th-order Chebyshev coefficients
npop = 10*ndim
solver = DifferentialEvolutionSolver2(ndim,npop)
solver.SetRandomInitialPoints(min=[-100]*ndim, max=[100]*ndim)
solver.SetGenerationMonitor(stepmon)
solver.SetConstraints(constraints)
solver.enable_signal_handler()
solver.Solve(chebyshev8cost, termination=stop, strategy=Best1Exp, \
CrossProbability=1.0, ScalingFactor=0.9)
solution = solver.Solution()
# use monitor to retrieve results information
iterations = len(stepmon)
cost = stepmon.y[-1]
print("Generation %d has best Chi-Squared: %f" % (iterations, cost))
# use pretty print for polynomials
print(poly1d(solution))
# compare solution with actual 8th-order Chebyshev coefficients
print("\nActual Coefficients:\n %s\n" % poly1d(chebyshev8coeffs))
"Pressure Vessel Design"
def objective(x):
x0,x1,x2,x3 = x
return 0.6224*x0*x2*x3 + 1.7781*x1*x2**2 + 3.1661*x0**2*x3 + 19.84*x0**2*x2
bounds = [(0,1e6)]*4
# with penalty='penalty' applied, solution is:
xs = [0.72759093, 0.35964857, 37.69901188, 240.0]
ys = 5804.3762083
from mystic.constraints import as_constraint
from mystic.penalty import quadratic_inequality
def penalty1(x): # <= 0.0
return -x[0] + 0.0193*x[2]
def penalty2(x): # <= 0.0
return -x[1] + 0.00954*x[2]
def penalty3(x): # <= 0.0
from math import pi
return -pi*x[2]**2*x[3] - (4/3.)*pi*x[2]**3 + 1296000.0
def penalty4(x): # <= 0.0
return x[3] - 240.0
@quadratic_inequality(penalty1, k=1e12)
@quadratic_inequality(penalty2, k=1e12)
@quadratic_inequality(penalty3, k=1e12)
@quadratic_inequality(penalty4, k=1e12)
def penalty(x):
return 0.0
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty,
npop=40, gtol=500, disp=True, full_output=True)
print(result[0])
def objective(x):
x0,x1 = x
return 2*x0**2 + x1**2 + x0*x1 + x0 + x1
bounds = [(0.0, None),(0.0, None)]
# with penalty='penalty' applied, solution is:
xs = [0.25, 0.75]
ys = 1.875
from mystic.math.measures import normalize
def constraint(x): # impose exactly
return normalize(x, 1.0)
if __name__ == '__main__':
from mystic.solvers import diffev2, fmin_powell
result = diffev2(objective, x0=bounds, bounds=bounds, npop=40,
constraints=constraint, disp=False, full_output=True)
print(result[0])
from mystic.termination import VTR, ChangeOverGeneration, And, Or
stop = Or(And(VTR(), ChangeOverGeneration()), VTR(1e-8))
from mystic.models import rosen
from mystic.monitors import VerboseMonitor
from mystic.solvers import DifferentialEvolutionSolver2
from pathos.pools import ThreadPool
if __name__ == '__main__':
solver = DifferentialEvolutionSolver2(3,40)
solver.SetRandomInitialPoints([-10,-10,-10],[10,10,10])
solver.SetGenerationMonitor(VerboseMonitor(10))
solver.SetMapper(ThreadPool().map) #NOTE: evaluation of objective in parallel
solver.SetTermination(stop)
solver.SetObjective(rosen)
solver.SetStrictRanges([-10,-10,-10],[10,10,10])
solver.SetEvaluationLimits(generations=600)
solver.Solve()
print(solver.bestSolution)
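# Note: pathos also provides process-based pools; swapping ThreadPool for
# pathos.pools.ProcessPool evaluates the objective in separate processes instead of threads,
# at the cost of extra pickling/start-up overhead.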
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: EXERCISE
Step2: EXERCISE
Step3: EXERCISE
Step4: EXERCISE
Step5: EXERCISE
Step6: EXERCISE
Step7: EXERCISE
Step8: EXERCISE
|
1,915
|
<ASSISTANT_TASK:>
Python Code:
from nipype.interfaces.io import DataSink
ds = DataSink()
ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'
ds.inputs.creds_path = '/home/neuro/aws_creds/credentials.csv'
ds.inputs.encrypt_bucket_keys = True
ds.inputs.local_copy = '/home/neuro/workflow_outputs/local_backup'
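# A sketch of how this DataSink might be wired into a workflow (the node and connection
# names below are hypothetical, not from the original notebook):
# from nipype import Node, Workflow
# sinker = Node(ds, name='sinker')
# wf = Workflow(name='example_wf')
# wf.connect(preproc_node, 'out_file', sinker, 'preproc.@out')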
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With the "s3
|
1,916
|
<ASSISTANT_TASK:>
Python Code:
import csv

census = list(csv.reader(open("census.csv", 'r')))
for index, column in enumerate(census[0]):
print("{} - {}: {}".format(index, column, census[1][index]))
def get_race_count(census, column_indexes):
return sum([int(census[1][index]) for index in column_indexes])
race_percentage = {
"Black": get_race_count(census, [12]),
"Asian/Pacific Islander": get_race_count(census, [14, 15]),
"White": get_race_count(census, [10]),
"Hispanic": get_race_count(census, [11]),
"Native American/Native Alaskan": get_race_count(census, [13])
}
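# `data`, `extract_counts`, `plot_counts`, and `race_counts` used below are assumed to be
# defined in earlier cells of the source notebook; a rough sketch of their likely shape
# (an assumption, not the original code):
# data = list(csv.reader(open("guns.csv", 'r')))[1:]
# def extract_counts(rows, key_fn):
#     counts = {}
#     for row in rows:
#         counts[key_fn(row)] = counts.get(key_fn(row), 0) + 1
#     return counts
# race_counts = extract_counts(data, lambda row: row[7])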
race_rate_by_100k = {}
for key in race_counts:
race_rate_by_100k[key] = float(race_counts[key]) / race_percentage[key] * 100000
plot_counts(race_rate_by_100k, "race rate by 100 000")
homicide_by_race_counts = extract_counts([row for row in data if row[3] == 'Homicide'],
lambda row: row[7])
plot_counts(homicide_by_race_counts, "race (homicides)")
set([row[3] for row in data])
homicide_by_race_rate_by_100k = {}
for key in homicide_by_race_counts:
homicide_by_race_rate_by_100k[key] = float(homicide_by_race_counts[key]) / race_percentage[key] * 100000
plot_counts(homicide_by_race_rate_by_100k, "race rate by 100 000 (homicide)")
for death_type in set([row[3] for row in data]):
race_counts = extract_counts([row for row in data if row[3] == death_type],
lambda row: row[7])
plot_counts(race_counts, "race (%s)" % death_type)
race_rate_by_100k = {}
for key in race_counts:
race_rate_by_100k[key] = float(race_counts[key]) / race_percentage[key] * 100000
plot_counts(race_rate_by_100k, "race rate by 100 000 (%s)" % death_type)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Death by race
|
1,917
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function # Python 3
from sympy import init_printing
init_printing(use_latex='mathjax',use_unicode=False) # Affichage des résultats
%matplotlib inline
from sympy import plot
from sympy import sin
from sympy.abc import x
plot(sin(x))
plot(sin(x)/x, (x,-100,100))
plot(x**2+x-6, (x,-5,5), line_color='red', title='Youpi')
plot(x, x**2, x**3, (x, -2, 2), ylim=(-2,2))
p1 = plot(x, (x, -1, 1), show=False, line_color='b')
p2 = plot(x**2, (x, -1, 1), show=False, line_color='r')
p3 = plot(x**3, (x, -1, 1), show=False, line_color='g')
p1.extend(p2)
p1.extend(p3)
print(p1)
p1.show()
from sympy.plotting import plot3d
from sympy.abc import x,y
plot3d(x**2+y**2)
plot3d(sin(x*10)*cos(y*4), (x, -1, 1), (y, -1, 1))
from sympy import sin, cos
from sympy.abc import u, v
from sympy.plotting import plot_parametric
plot_parametric(cos(3*u), sin(2*u), (u, -5, 5))
from sympy.plotting import plot3d_parametric_line
plot3d_parametric_line(cos(u), sin(u), u, (u, -15, 15))
from sympy.plotting import plot3d_parametric_surface
X = cos(u)*(5+2*cos(v))
Y = sin(u)*(5+2*cos(v))
Z = 2*sin(v)
plot3d_parametric_surface(X, Y, Z, (u, -.5, 4), (v, -5, 5))
from sympy import plot_implicit, Eq
from sympy.abc import x, y
eq = Eq(x**2+y**2+x*y-2*x, 5)
eq
plot_implicit(eq)
plot_implicit(eq, (x,-2,5), (y,-5,3))
plot_implicit(y > 2*x+1)
from sympy import And
plot_implicit(And(y>2*x+1, y<5*x, x+y<5))
from sympy import mpmath # Sympy (standard installation)
import mpmath # SageMath
%matplotlib inline
mpmath.cplot(lambda z: z, [-10, 10], [-10, 10])
I = complex(0,1) # le nombre complexe I de Python
mpmath.cplot(lambda z: I*z, [-10, 10], [-10, 10])
mpmath.cplot(lambda z: z**5-1, [-2, 2], [-2, 2])
from mpmath import zeta
mpmath.cplot(zeta, [-10, 10], [-50, 50])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The SymPy library uses matplotlib, another Python library, to draw figures. To enable the display of graphics in Jupyter, first write the following in a cell
Step2: Plotting a function $\RR\to\RR$
Step3: A first example. By default, the interval for $x$ is $[-10,10]$
Step4: <img src="images/sin_x.png" alt="image" width="264" />
Step5: <img src="images/sinx_x.png" alt="image" width="264" />
Step6: <img src="images/youpi.png" alt="image" width="264" />
Step7: <img src="images/x_x2_x3.png" alt="image" width="226" />
Step8: Add the graphs p2 and p3 to p1
Step9: Now p1 contains the three graphs
Step10: Display the graph of the three functions
Step11: <img src="images/x_x2_x3_colors.png" alt="image" width="226" />
Step12: A first example
Step13: <img src="images/x2_y2.png" alt="image" width="264" />
Step14: <img src="images/sin10x_cos4y.png" alt="image" width="264" />
Step15: The plot_parametric function plots parametric functions $\RR\to\RR^2$. For example, we trace the Lissajous curve for $a=3$ and $b=2$
Step16: <img src="images/lissajous.png" alt="image" width="226" />
Step17: <img src="images/helice.png" alt="image" width="302" />
Step18: <img src="images/tore.png" alt="image" width="302" />
Step19: The plot_implicit function plots the solutions of an implicit equation
Step20: <img src="images/rotated_ellipse.png" alt="image" width="453" />
Step21: Plotting a region of $\RR^2$
Step22: <img src="images/region.png" alt="image" width="302" />
Step23: <img src="images/region_bornee.png" alt="image" width="302" />
Step24: Recall that without the following line, the figures will not be displayed
Step25: The argument syntax is not exactly the same as for SymPy's plot function. You have to define a Python function with def, or on a single line with lambda. For example, the identity function can be written lambda z
Step26: <img src="images/z.png" alt="image" width="264" />
Step27: The red pixels are the ones mapped to the positive real axis by the function lambda z
Step28: <img src="images/z5_1.png" alt="image" width="264" />
|
1,918
|
<ASSISTANT_TASK:>
Python Code:
%lsmagic
time print("hi")
%time
ls -l -h
!ls -l -h
files = !ls -l -h
files
%%!
ls -l
pwd
who
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%timeit np.linalg.eigvals(np.random.rand(100,100))
%%timeit a = np.random.rand(100, 100)
np.linalg.eigvals(a)
%%capture capt
from __future__ import print_function
import sys
print('Hello stdout')
print('and stderr', file=sys.stderr)
capt.stdout, capt.stderr
capt.show()
%%writefile foo.py
print('Hello world')
%run foo
%%script python
import sys
print('hello from Python %s' % sys.version)
%%script python3
import sys
print('hello from Python: %s' % sys.version)
%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"
%%bash
echo "hello from $BASH"
%%writefile ./lnum.py
print('my first line.')
print("my second line.")
print("Finished.")
%%script python ./lnum.py
#
%%bash
echo "hi, stdout"
echo "hello, stderr" >&2
%%bash --out output --err error
echo "hi, stdout"
echo "hello, stderr" >&2
print(error)
print(output)
%%ruby --bg --out ruby_lines
for n in 1...10
sleep 1
puts "line #{n}"
STDOUT.flush
end
ruby_lines
print(ruby_lines.read())
%load_ext cythonmagic
%%cython_pyximport foo
def f(x):
return 4.0*x
f(10)
%%cython
cimport cython
from libc.math cimport exp, sqrt, pow, log, erf
@cython.cdivision(True)
cdef double std_norm_cdf(double x) nogil:
return 0.5*(1+erf(x/sqrt(2.0)))
@cython.cdivision(True)
def black_scholes(double s, double k, double t, double v,
double rf, double div, double cp):
    """Price an option using the Black-Scholes model.

    s : initial stock price
    k : strike price
    t : expiration time
    v : volatility
    rf : risk-free rate
    div : dividend
    cp : +1/-1 for call/put
    """
cdef double d1, d2, optprice
with nogil:
d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))
d2 = d1 - v*sqrt(t)
optprice = cp*s*exp(-div*t)*std_norm_cdf(cp*d1) - \
cp*k*exp(-rf*t)*std_norm_cdf(cp*d2)
return optprice
black_scholes(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)
#%timeit black_scholes(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)
%%cython -lm
from libc.math cimport sin
print 'sin(1)=', sin(1)
%reload_ext rmagic
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
plt.scatter(X, Y)
%Rpush X Y
%R lm(Y~X)$coef
%R resid(lm(Y~X)); coef(lm(X~Y))
b = %R a=resid(lm(Y~X))
%Rpull a
print(a)
assert id(b.data) == id(a.data)
%R -o a
from __future__ import print_function
v1 = %R plot(X,Y); print(summary(lm(Y~X))); vv=mean(X)*mean(Y)
print('v1 is:', v1)
v2 = %R mean(X)*mean(Y)
print('v2 is:', v2)
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
x = %octave [1 2; 3 4];
x
a = [1, 2, 3]
%octave_push a
%octave a = a * 2;
%octave_pull a
a
%%octave -i x -o y
y = x + 3;
y
%%octave -f svg
p = [12 -2.5 -8 -0.1 8];
x = 0:0.01:1;
polyout(p, 'x')
plot(x, polyval(p, x));
%%octave -s 500,500
# butterworth filter, order 2, cutoff pi/2 radians
b = [0.292893218813452 0.585786437626905 0.292893218813452];
a = [1 0 0.171572875253810];
freqz(b, a, 32);
%%octave -s 600,200 -f png
subplot(121);
[x, y] = meshgrid(0:0.1:3);
r = sin(x - 0.5).^2 + cos(y - 0.5).^2;
surf(x, y, r);
subplot(122);
sombrero()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By default the Automagic switch is on, so the % sign is not required; magic commands are recognized automatically.
Step2: Run shell commands.
Step3: Run multi-line shell scripts.
Step4: Now let's get a feel for the power of the magic operators.
Step5: <!--====--> Simple examples of cell magics
Step6: The %%capture magic captures stdout/err; the output can be displayed directly or stored in a variable for later use
Step7: The %%writefile magic writes the statements that follow into a file
Step8: <!--====--> Magics for running other interpreters.
Step9: IPython creates aliases for common interpreters that can be used directly, e.g. bash, ruby, perl, etc.
Step10: Advanced exercises
Step11: Capturing output.
Step12: The variables can now be accessed directly by name.
Step13: Running scripts in the background
Step14: When a background thread saves output, it exposes stdout/err pipes rather than the text of the output.
Step15: Cython magic function extensions
Step16: The %%cython_pyximport magic lets you use arbitrary Cython code in a cell. The Cython code is written to a .pyx file in the current working directory and imported via pyximport. You must specify a module name, and all symbols are imported automatically.
Step18: The %cython magic
Step19: Measure the running time.
Step20: Cython allows extra libraries to be linked to your extension with the -l option (or --lib). Note that this option can be used multiple times for multiple libraries, such as -lm -llib2 --lib lib3. Here is an example using the system math library
Step21: Similarly, you can use -I/--include to specify directories containing header files, -c/--compile-args for compile options, and extra_compile_args of the distutils Extension class. See the Cython docs on C library usage for more details.
Step22: A typical use is computing statistics on numpy arrays with R. Let's fit a simple linear model and produce a scatterplot.
Step23: First push the variables to R, fit the model, and return the result. %Rpush copies variables into rpy2; %R evaluates the string in rpy2 and returns the result, here the coefficients of the linear model.
Step24: %R can return multiple values.
Step25: %R results can be passed back into Python objects. The return value comes from a multi-expression line separated by ';', coef(lm(X~Y)).
Step26: Plotting and capturing output
Step27: Cell-level magic
Step28: octavemagic
Step29: %%octave
Step30: Plotting
Step31: Use the -s argument to adjust the size
|
1,919
|
<ASSISTANT_TASK:>
Python Code:
%projects set ml-autoawesome
import os
PROJECT = 'ml-autoawesome' # CHANGE THIS
BUCKET = 'ml-autoawesome-cmle' # CHANGE THIS
REGION = 'us-central1' # CHANGE THIS
os.environ['PROJECT'] = PROJECT # for bash
os.environ['BUCKET'] = BUCKET # for bash
os.environ['REGION'] = REGION # for bash
%bash
echo "project=$PROJECT"
echo "bucket=$BUCKET"
echo "region=$REGION"
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
#gcloud beta ml init-project -q
import tensorflow as tf
import google.datalab.ml as ml
import apache_beam as beam
from tensorflow.python.lib.io import file_io
import json
import shutil
INDIR = '.'
OUTDIR = '.'
os.environ['OUTDIR'] = OUTDIR
def make_preprocessing_fn():
# stop-gap ...
def _scalar_to_vector(scalar):
# FeatureColumns expect shape (batch_size, 1), not just (batch_size)
return api.map(lambda x: tf.expand_dims(x, -1), scalar)
def preprocessing_fn(inputs):
result = {col: _scalar_to_vector(inputs[col]) for col in CSV_COLUMNS}
for name in SCALE_COLUMNS:
result[name] = _scalar_to_vector(mappers.scale_to_0_1(inputs[name]))
return result
return preprocessing_fn
def make_input_schema(mode):
input_schema = {}
if mode != tf.contrib.learn.ModeKeys.INFER:
input_schema[LABEL_COLUMN] = tf.FixedLenFeature(shape=[], dtype=tf.float32, default_value=0.0)
for name in ['dayofweek', 'key']:
input_schema[name] = tf.FixedLenFeature(shape=[], dtype=tf.string, default_value='null')
for name in ['hourofday']:
input_schema[name] = tf.FixedLenFeature(shape=[], dtype=tf.int64, default_value=0)
for name in SCALE_COLUMNS:
input_schema[name] = tf.FixedLenFeature(shape=[], dtype=tf.float32, default_value=0.0)
input_schema = dataset_schema.from_feature_spec(input_schema)
return input_schema
def make_coder(schema, mode):
import copy
column_names = copy.deepcopy(CSV_COLUMNS)
if mode == tf.contrib.learn.ModeKeys.INFER:
column_names.pop(LABEL_COLUMN)
coder = coders.CsvCoder(column_names, schema)
return coder
def preprocess_all(pipeline, training_data, eval_data, predict_data, output_dir, mode=tf.contrib.learn.ModeKeys.TRAIN):
path_constants = PathConstants()
work_dir = os.path.join(output_dir, path_constants.TEMP_DIR)
# create schema
input_schema = make_input_schema(mode)
# coder
coder = make_coder(input_schema, mode)
# 3) Read from text using the coder.
train_data = (
pipeline
| 'ReadTrainingData' >> beam.io.ReadFromText(training_data)
| 'ParseTrainingCsv' >> beam.Map(coder.decode))
evaluate_data = (
pipeline
| 'ReadEvalData' >> beam.io.ReadFromText(eval_data)
| 'ParseEvalCsv' >> beam.Map(coder.decode))
# metadata
input_metadata = dataset_metadata.DatasetMetadata(schema=input_schema)
_ = (input_metadata
| 'WriteInputMetadata' >> io.WriteMetadata(
os.path.join(output_dir, path_constants.RAW_METADATA_DIR),
pipeline=pipeline))
preprocessing_fn = make_preprocessing_fn()
(train_dataset, train_metadata), transform_fn = (
(train_data, input_metadata)
| 'AnalyzeAndTransform' >> tft.AnalyzeAndTransformDataset(
preprocessing_fn, work_dir))
# WriteTransformFn writes transform_fn and metadata to fixed subdirectories
# of output_dir, which are given by path_constants.TRANSFORM_FN_DIR and
# path_constants.TRANSFORMED_METADATA_DIR.
transform_fn_is_written = (transform_fn | io.WriteTransformFn(output_dir))
(evaluate_dataset, evaluate_metadata) = (
((evaluate_data, input_metadata), transform_fn)
| 'TransformEval' >> tft.TransformDataset())
train_coder = coders.ExampleProtoCoder(train_metadata.schema)
_ = (train_dataset
| 'SerializeTrainExamples' >> beam.Map(train_coder.encode)
| 'WriteTraining'
>> beam.io.WriteToTFRecord(
os.path.join(output_dir,
path_constants.TRANSFORMED_TRAIN_DATA_FILE_PREFIX),
file_name_suffix='.tfrecord.gz'))
evaluate_coder = coders.ExampleProtoCoder(evaluate_metadata.schema)
_ = (evaluate_dataset
| 'SerializeEvalExamples' >> beam.Map(evaluate_coder.encode)
| 'WriteEval'
>> beam.io.WriteToTFRecord(
os.path.join(output_dir,
path_constants.TRANSFORMED_EVAL_DATA_FILE_PREFIX),
file_name_suffix='.tfrecord.gz'))
if predict_data:
predict_mode = tf.contrib.learn.ModeKeys.INFER
predict_schema = make_input_schema(mode=predict_mode)
tsv_coder = make_coder(predict_schema, mode=predict_mode)
predict_coder = coders.ExampleProtoCoder(predict_schema)
_ = (pipeline
| 'ReadPredictData' >> beam.io.ReadFromText(predict_data,
coder=tsv_coder)
# TODO(b/35194257) Obviate the need for this explicit serialization.
| 'EncodePredictData' >> beam.Map(predict_coder.encode)
| 'WritePredictData' >> beam.io.WriteToTFRecord(
os.path.join(output_dir,
path_constants.TRANSFORMED_PREDICT_DATA_FILE_PREFIX),
file_name_suffix='.tfrecord.gz'))
# Workaround b/35366670, to ensure that training and eval don't start before
# the transform_fn is written.
train_dataset |= beam.Map(
lambda x, y: x, y=beam.pvalue.AsSingleton(transform_fn_is_written))
evaluate_dataset |= beam.Map(
lambda x, y: x, y=beam.pvalue.AsSingleton(transform_fn_is_written))
return transform_fn, train_dataset, evaluate_dataset
p = beam.Pipeline()
output_dataset = 'windmills_control'
transform_fn, train_dataset, eval_dataset = preprocess_all(
p, train_data_paths, eval_data_paths, predict_data_paths, output_dataset)
p.run()
import pandas as pd
import numpy as np
import datalab.bigquery as bq
# get data from BigQuery
query = """
SELECT
  speed, weight, angular_momentum, wind_dir, moisture_content, radar_reflectivity, radar_cap,
  radar_distance_to_nearest_cloud, radar_x, radar_y
FROM
  windmill_control
"""
df = bq.Query(query).to_dataframe()
# Add a unique key (needed for batch prediction)
df['key'] = 1000 + df.index.values
# Use Pandas to create 90% training & 10% evaluation
df = df.reindex(np.random.permutation(df.index))
trainsize = (len(df)*9)/10
df_train = df.head(trainsize)
df_eval = df.tail(len(df) - trainsize)
df_train.to_csv('cleanedup-train.csv', header=False, index_label=False, index=False)
df_eval.to_csv('cleanedup-eval.csv', header=False, index_label=False, index=False)
df.head()
%bash
ls -lrt cleanedup*
!rm -rf ml_preproc ml_trained
train_bq = ml.BigQueryDataSet(
table_pattern=('cleanedup-train*'),
schema_file=os.path.join('ml.json'))
sd.local_preprocess(
dataset=train_bq,
output_dir=os.path.join(OUTDIR, 'ml_preproc'),
)
file_io.write_string_to_file(os.path.join(OUTDIR, 'ml_preproc/transforms.json'),
json.dumps(transforms, indent=2))
!cat $OUTDIR/ml_preproc/num*json
eval_bq = ml.BigQueryDataSet(
file_pattern=('cleanedup-eval*'),
schema_file=os.path.join(OUTDIR, 'ml.json'))
shutil.rmtree(os.path.join(OUTDIR, 'ml_trained'), ignore_errors=True)
sd.local_train(
train_dataset=train_bq,
eval_dataset=eval_bq,
preprocess_output_dir=os.path.join(OUTDIR, 'ml_preproc'),
transforms=os.path.join(OUTDIR, 'ml_preproc/transforms.json'),
output_dir=os.path.join(OUTDIR, 'ml_trained'),
model_type='dnn_regression',
max_steps=2500,
layer_sizes=[1024]*4
)
%bash
ls $OUTDIR/ml_trained
import pandas as pd
df = pd.read_csv('{}/batch_predict/predictions-00000-of-00001.csv'.format(OUTDIR), names=('key','true_cost','predicted_cost'))
df['true_cost'] = df['true_cost'] * 20000
df['predicted_cost'] = df['predicted_cost'] * 20000
df.head()
import seaborn as sns
sns.jointplot(x='true_cost', y="predicted_cost", data=df, kind='hex');
%bash
MODEL_NAME="windmill"
MODEL_VERSION="v3"
MODEL_LOCATION="/content/autoawesome/notebooks/ml_trained/model"
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud beta ml versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
gcloud beta ml models delete ${MODEL_NAME}
gcloud beta ml models create ${MODEL_NAME} --regions $REGION
gcloud beta ml versions create ${MODEL_VERSION} --model ${MODEL_NAME} --staging-bucket gs://${BUCKET} --origin ${MODEL_LOCATION}
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
#import google.cloud.ml.features as features
#from google.cloud.ml import session_bundle
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1beta1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1beta1_discovery.json')
request_data = {'instances':
[
# redacted to protect privacy of our windmill owners
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'windmills', 'v3')
response = api.projects().predict(body=request_data, name=parent).execute()
print "response={0}".format(response)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> Step 1
Step3: <h2> 2. Preprocessing </h2>
Step4: <h2> Local preprocessing, training and prediction </h2>
Step5: <h3> Training </h3>
Step6: <h3> Prediction </h3>
Step7: <h2> Cloud deploy model and predict </h2>
Step8: After the model is deployed, we will want to test it. An easy way to test a deployed ML model is to write out a file with some hypothetical inputs and then use gcloud to invoke the REST API.
|
1,920
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt

HISTORY = 4
NRUNS = 50
parameters = {
'chaos': 0,
'risk_x_average_variance': 1,
'dividends': 1,
'discount_rate': 0.001,
'intensity_of_choice':2,
'fundamentalist_adaptive_parameter':1,
'chartist_adaptive_parameter':1.9,
'bubble_sensitivity':1800,
'fitness_memory_strenght':0.99,
'risk_adjustment':0,
'noise_std': 10,
'init_price_dev_fundament': -400,
'init_type2_agents': 0.5,
'init_type2_holdings': 0.5
}
# initialise model
time = 10000
chaos = 0
risk_av_variance = 1
dividends = 1
discount_rate = 0.001
intensity_of_choice = 2
fundamentalist_adaptive_parameter = 1
chartist_adaptive_parameter = 1.9
bubble_sensitivity = 1800
fitness_memory_strenght = 0.99
risk_adjustment = 0
noise_std = 10
init_price_dev_fundament = -400
init_type2_agents = 0.5
init_type2_holdings = 0.5
fundamental_price = dividends/discount_rate
fraction_type2 = [init_type2_agents for x in range(HISTORY)]
returns = [0 for x in range(HISTORY)]
price_deviation_from_fundamental = [init_price_dev_fundament for x in range(HISTORY)]
accumulated_profits1 = [0 for x in range(HISTORY - 1)]
accumulated_profits2 = [0 for x in range(HISTORY - 1)]
share_holdings_type1 = [(1 - init_type2_holdings) for x in range(HISTORY)]
share_holdings_type2 = [init_type2_holdings for x in range(HISTORY)]
normalized_acc_profits = [0 for x in range(HISTORY)]
# TODO make this a function?
pricing_noise = noise_std * np.random.randn(time)
price = [fundamental_price + x for x in price_deviation_from_fundamental]
pricing_noise
for t in range(4,time):
# calculate profits
profits_type1 = returns[t-1]* share_holdings_type1[t-2] - risk_adjustment * 0.5 * risk_av_variance * share_holdings_type1[t-2]**2
profits_type2 = returns[t-1]* share_holdings_type2[t-2] - risk_adjustment * 0.5 * risk_av_variance * share_holdings_type2[t-2]**2
# calculate accumulated profits
accumulated_profits1.append(profits_type1 + fitness_memory_strenght * accumulated_profits1[t-2])
accumulated_profits2.append(profits_type2 + fitness_memory_strenght * accumulated_profits2[t-2])
    # normalization for logistic choice probabilities
norm = np.exp(intensity_of_choice * accumulated_profits1[t-1]) + np.exp(intensity_of_choice * accumulated_profits2[t-1])
normalized_acc_profits.append(norm)
# basic n2tilde (before adjustment)
n2tilde = np.exp( intensity_of_choice * accumulated_profits2[t-1]) / norm
# emergency check to make sure still in range, if not set to 0.5
if np.isnan(n2tilde):
n2tilde = 0.5
# adjustment to n, see paper
fraction_type2.append(n2tilde * np.exp( -(price_deviation_from_fundamental[t-1]) ** 2 / bubble_sensitivity))
# price (dev from fundamental) forecasts are formed
type1_forecast_p = fundamentalist_adaptive_parameter * (price_deviation_from_fundamental[t-1]) # type 1 price forecast for t+1
type2_forecast_p = price_deviation_from_fundamental[t-1] + chartist_adaptive_parameter * (price_deviation_from_fundamental[t-1]-price_deviation_from_fundamental[t-2]) # type 2 price forecast for t+1
# new price for today from t+1 forecasts (note timing)
price_deviation_from_fundamental.append(1/(1+discount_rate) * (((1-fraction_type2[t])* type1_forecast_p + fraction_type2[t]*type2_forecast_p ) + pricing_noise[t]))
price.append(price_deviation_from_fundamental[t] + fundamental_price)
# returns time path
# R[t-1] = p[t+1] - pstar - (1+r)*(p[t]-pstar) + dstd*np.randn(1)
returns.append(price_deviation_from_fundamental[t] - price_deviation_from_fundamental[t-1])
# portfolio decisions
share_holdings_type1.append(( type1_forecast_p - price_deviation_from_fundamental[t])/risk_av_variance)
share_holdings_type2.append(( type2_forecast_p - price_deviation_from_fundamental[t])/risk_av_variance)
[1, 2, 3, 4][-1]
# log return
lret = np.log(price[1:time]) - np.log(price[0:time-1])
# arithmetic return
ret = np.array(price[1:time]) / np.array(price[0:time-1])-1
ghret = np.array(price[1:time]) - dividends - (1+discount_rate) * np.array(price[0:time-1])
# plot price
fig_p, ax_p = plt.subplots()
ax_p.plot(range(time), price[0:])
plt.xlabel('Time')
plt.ylabel('Price')
plt.show()
fig_r, ax_r = plt.subplots()
ax_r.plot(range(time-1), lret[0:] )
plt.xlabel('Time')
plt.ylabel('Returns')
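# Fat tails are a common stylized fact of financial returns; a quick check on the
# simulated log returns (sketch, not part of the original notebook):
from scipy.stats import kurtosis
print("excess kurtosis of log returns: {:.3f}".format(kurtosis(lret)))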
fraction_type2 = np.zeros([time, 1])
price = np.zeros([time, 1])
returns = np.zeros([time,1])
price_deviation_from_fundamental = np.zeros([time,1]) # previously x
accumulated_profits1 = np.zeros([time,1])
accumulated_profits2 = np.zeros([time, 1])
share_holdings_type1 = 0.5 * np.ones([time,1]) # previously z1
share_holdings_type2 = 0.5 * np.ones([time,1]) # previously z2
pricing_noise = noise_std * np.random.randn(time) # previously eps
normalized_acc_profits = np.zeros([time,1])
# fraction of agents
fraction_type2[0] = 0.5
fraction_type2[1] = fraction_type2[0]
fraction_type2[2] = fraction_type2[1]
fraction_type2[3] = fraction_type2[2]
# x = price - pstar
price_deviation_from_fundamental[0] = init_price_dev_fundament
price_deviation_from_fundamental[1] = price_deviation_from_fundamental[0]
price_deviation_from_fundamental[2] = price_deviation_from_fundamental[1]
price_deviation_from_fundamental[3] = price_deviation_from_fundamental[2]
p = fundamental_price + price_deviation_from_fundamental
# start at t = 5 to allow for lags
for t in range(4,time):
# update utility
# simplified equation from paper(see GHW equation(12))
# u1(t) = -0.5*(x(t-1)-v*x(t-3))^2;
# u2(t) = -0.5*(x(t-1) - x(t-3) - g*(x(t-3)-x(t-4)))^2;
# detaled one period profits using last period holdings
profits_type1 = returns[t-1]* share_holdings_type1[t-2] - risk_adjustment * 0.5 * risk_av_variance * share_holdings_type1[t-2]**2
profits_type2 = returns[t-1]* share_holdings_type2[t-2] - risk_adjustment * 0.5 * risk_av_variance * share_holdings_type2[t-2]**2
# accumulated fitness
accumulated_profits1[t-1] = profits_type1 + fitness_memory_strenght * accumulated_profits1[t-2]
    accumulated_profits2[t-1] = profits_type2 + fitness_memory_strenght * accumulated_profits2[t-2]
    # normalization for logistic choice probabilities
    norm = np.exp( intensity_of_choice * accumulated_profits1[t-1]) + np.exp( intensity_of_choice * accumulated_profits2[t-1])
normalized_acc_profits[t] = norm
# basic n2tilde (before adjustment)
n2tilde = np.exp( intensity_of_choice * accumulated_profits2[t-1]) / norm
# emergency check to make sure still in range, if not set to 0.5
if np.isnan(n2tilde):
n2tilde = 0.5
# adjustment to n, see paper
fraction_type2[t] = n2tilde * np.exp( -(price_deviation_from_fundamental[t-1]) ** 2 / bubble_sensitivity)
# x(t+1) ( p(t+1)) forecasts
type1_forecast_p = fundamentalist_adaptive_parameter * (price_deviation_from_fundamental[t-1]) # type 1 price forecast for t+1
type2_forecast_p = price_deviation_from_fundamental[t-1] + chartist_adaptive_parameter * (price_deviation_from_fundamental[t-1]-price_deviation_from_fundamental[t-2]) # type 2 price forecast for t+1
# new price for today from t+1 forecasts (note timing)
    price_deviation_from_fundamental[t] = 1/(1+discount_rate) * (((1-fraction_type2[t])* type1_forecast_p + fraction_type2[t]*type2_forecast_p ) + pricing_noise[t])
price[t] = price_deviation_from_fundamental[t] + fundamental_price
# returns time path
# R[t-1] = p[t+1] - pstar - (1+r)*(p[t]-pstar) + dstd*np.randn(1)
returns[t] = price_deviation_from_fundamental[t] - price_deviation_from_fundamental[t-1]
# portfolio decisions
share_holdings_type1[t] = ( type1_forecast_p - price_deviation_from_fundamental[t])/risk_av_variance
share_holdings_type2[t] = ( type2_forecast_p - price_deviation_from_fundamental[t])/risk_av_variance
# log return
lret = np.log(price[1:time]) - np.log(price[0:time-1])
# arithmetic return
ret = price[1:time] / price[0:time-1]-1
ghret = price[1:time] - dividends - (1+discount_rate) * price[0:time-1]
# plot price
fig_p, ax_p = plt.subplots()
ax_p.plot(range(time), p[0:])
plt.xlabel('Time')
plt.ylabel('Price')
fig_r, ax_r = plt.subplots()
ax_r.plot(range(time-1), lret[0:])
plt.xlabel('Time')
plt.ylabel('Returns')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: initialization
Step2: Sequence of events / simulation
Step3: Log returns
Step4: Sequence of events / simulation
Step5: Log returns
|
1,921
|
<ASSISTANT_TASK:>
Python Code:
#CCL cosmology
cosmo_ccl = ccl.Cosmology(Omega_c = 0.30711 - 0.048254, Omega_b = 0.048254, h = 0.677, sigma8 = 0.8822714165197718, n_s=0.96, Omega_k = 0, transfer_function='eisenstein_hu')
#ccl_cosmo_set_high_prec (cosmo_ccl)
cosmo_numcosmo, dist, ps_lin, ps_nln, hmfunc = create_nc_obj (cosmo_ccl)
psf = hmfunc.peek_psf ()
#CosmoSim_proxy model
#M_0, z_0
theta_pivot = [3e14/0.71, 0.6]
#\mu_0, a_\mu^z, a_\mu^M
theta_mu = [3.19, -0.7, 2]
#\sigma_0, a_\sigma^z, a_\sigma^M
theta_sigma = [0.33, 0.,-0.08]
#Richness object
area = (0.25)*4*np.pi / 100.0
lnRl = 1.0
lnRu = 2.0
zl = 0.25
zu = 1.0
#Numcosmo_proxy model
cluster_z = nc.ClusterRedshift.new_from_name("NcClusterRedshiftNodist{'z-min': <%20.15e>, 'z-max':<%20.15e>}" % (zl, zu))
cluster_m = nc.ClusterMass.new_from_name("NcClusterMassAscaso{'M0':<%20.15e>,'z0':<%20.15e>,'lnRichness-min':<%20.15e>, 'lnRichness-max':<%20.15e>}" % (3e14/(0.71),0.6, lnRl, lnRu))
cluster_m.param_set_by_name('mup0', 3.19)
cluster_m.param_set_by_name('mup1', 2/np.log(10))
cluster_m.param_set_by_name('mup2', -0.7/np.log(10))
cluster_m.param_set_by_name('sigmap0', 0.33)
cluster_m.param_set_by_name('sigmap1', -0.08/np.log(10))
cluster_m.param_set_by_name('sigmap2', 0/np.log(10))
#Numcosmo Cluster Abundance
#First we need to define the multiplicity function here we will use the tinker
mulf = nc.MultiplicityFuncTinker.new()
mulf.set_linear_interp (True)
mulf.set_mdef(nc.MultiplicityFuncMassDef.CRITICAL)
mulf.set_Delta(200)
#Second we need to construct a filtered power spectrum
hmf = nc.HaloMassFunction.new(dist,psf,mulf)
hmf.set_area(area)
ca = nc.ClusterAbundance.new(hmf,None)
mset = ncm.MSet.new_array([cosmo_numcosmo,cluster_m,cluster_z])
ncount = Nc.DataClusterNCount.new (ca, "NcClusterRedshiftNodist", "NcClusterMassAscaso")
ncount.catalog_load ("ncount_ascaso.fits")
cosmo_numcosmo.props.Omegac_fit = True
cosmo_numcosmo.props.w0_fit = True
cluster_m.props.mup0_fit = True
mset.prepare_fparam_map ()
ncount.set_binned (False)
dset = ncm.Dataset.new ()
dset.append_data (ncount)
lh = Ncm.Likelihood (dataset = dset)
fit = Ncm.Fit.new (Ncm.FitType.NLOPT, "ln-neldermead", lh, mset, Ncm.FitGradType.NUMDIFF_FORWARD)
fitmc = Ncm.FitMC.new (fit, Ncm.FitMCResampleType.FROM_MODEL, Ncm.FitRunMsgs.SIMPLE)
fitmc.set_nthreads (3)
fitmc.set_data_file ("ncount_ascaso_mc_unbinned.fits")
fitmc.start_run ()
fitmc.run_lre (1000, 5.0e-3)
fitmc.end_run ()
ntests = 100.0
mcat = fitmc.mcat
mcat.log_current_chain_stats ()
mcat.calc_max_ess_time (ntests, Ncm.FitRunMsgs.FULL);
mcat.calc_heidel_diag (ntests, 0.0, Ncm.FitRunMsgs.FULL);
mset.pretty_log ()
mcat.log_full_covar ()
mcat.log_current_stats ()
be, post_lnnorm_sd = mcat.get_post_lnnorm ()
lnevol, glnvol = mcat.get_post_lnvol (0.6827)
Ncm.cfg_msg_sepa ()
print ("# Bayesian evidence: % 22.15g +/- % 22.15g" % (be, post_lnnorm_sd))
print ("# 1 sigma posterior volume: % 22.15g" % lnevol)
print ("# 1 sigma posterior volume (Gaussian approximation): % 22.15g" % glnvol)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define proxy modelling
Step2: initialize the ClusterAbundance object
|
1,922
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
import toyplot
import toyplot.pdf
import toyplot.png
import toyplot.svg
print('Pandas version: ', pandas.__version__)
print('Numpy version: ', numpy.__version__)
print('Toyplot version: ', toyplot.__version__)
column_names = ['MPG',
'Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin',
'Car Name']
data = pandas.read_table('auto-mpg.data',
delim_whitespace=True,
names=column_names,
index_col=False)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-3,3,-44),
xlabel = 'Weight (lb)',
ylabel = 'Horsepower')
# Note that this data has some invalid measurements for Horsepower. Thus, we need
# to filter those rows out. That is what the [data['Horsepower'] != '?'] is for
axes.scatterplot(data['Weight'][data['Horsepower'] != '?'],
data['Horsepower'][data['Horsepower'] != '?'])
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
toyplot.pdf.render(canvas, 'XY_Scatterplot.pdf')
toyplot.svg.render(canvas, 'XY_Scatterplot.svg')
toyplot.png.render(canvas, 'XY_Scatterplot.png', scale=5)
sorted_data = data.sort_values('Weight')
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-3,3,-44),
xlabel = 'Weight (lb)',
ylabel = 'Horsepower')
axes.plot(sorted_data['Weight'][sorted_data['Horsepower'] != '?'],
sorted_data['Horsepower'][sorted_data['Horsepower'] != '?'])
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
toyplot.pdf.render(canvas, 'XY_Scatterplot_Trend_Bad.pdf')
toyplot.svg.render(canvas, 'XY_Scatterplot_Trend_Bad.svg')
toyplot.png.render(canvas, 'XY_Scatterplot_Trend_Bad.png', scale=5)
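# A less misleading trend would order by a naturally ordered variable such as 'Model Year';
# a sketch (hypothetical, not in the original notebook):
# valid = data[data['Horsepower'] != '?']
# hp_by_year = valid.groupby('Model Year')['Horsepower'].apply(lambda s: s.astype(float).mean())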
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https
Step2: Use toyplot to plot the measurements of horsepower vs weight. We should expect a general trend of higher horsepower with higher weight, with some outliers (such as for sports cars).
Step3: Repeat the plot, but with a trend line. This is a bad example, as ordering the cars by size is more arbitrary than, for example, ordering by the year they were manufactured. It makes for a misleading line.
|
1,923
|
<ASSISTANT_TASK:>
Python Code:
from poppy.creatures import PoppyHumanoid
creature = PoppyHumanoid(simulator='vrep')
creature.reset_simulation()
import pypot
creature.stop_simulation()
pypot.vrep.close_all_connections()
from poppy.creatures import PoppyHumanoid
poppy = PoppyHumanoid(simulator='vrep')
from __future__ import print_function
print("Réponse:")
print( "j'ai", len( poppy.motors ), "moteurs")
print( "ils sont tous indexés dans une ", type( poppy.motors ), "qui s'appelle poppy.motors \n\n la voici: ")
for m in poppy.motors:
print( "-------------")
print( "nom du moteur: ", m.name)
print( "position actuelle du moteur: ", m.present_position, "degrès")
# éteindre la simulation précédente...
import pypot
creature.stop_simulation()
pypot.vrep.close_all_connections()
# ...before starting a new one.
from poppy.creatures import PoppyHumanoid
poppy = PoppyHumanoid(simulator='vrep')
# Poppy nods yes
for i in range(2):
poppy.head_y.goal_position = -20
poppy.head_y.goal_position = +20
poppy.head_y.goal_position = 0
##### Nothing seems to happen... actually it does!
##### but Poppy moves too fast, let's try this:
# Poppy nods yes
import time
for i in range(2):
poppy.head_y.goal_position = -20
time.sleep(1)
poppy.head_y.goal_position = +20
time.sleep(1)
poppy.head_y.goal_position = 0
poppy.l_shoulder_x.goto_position(90,2)
poppy.l_arm_z.goto_position(90,2)
poppy.abs_z.goto_position(10,2)
poppy.l_elbow_y.goto_position(-120,2,wait=True)
for i in range(3):
poppy.l_elbow_y.goto_position(-90,0.5,wait=True)
poppy.l_elbow_y.goto_position(-120,0.5,wait=True)
poppy.l_shoulder_x.goto_position(0,2)
poppy.l_arm_z.goto_position(0,2)
poppy.abs_z.goto_position(0,2)
poppy.l_elbow_y.goto_position(0,2)
for i in range(3):
poppy.head_y.goto_position(-20,1)
poppy.head_y.goto_position(+20,1)
poppy.head_y.goto_position(0,0.5)
print( "Torso, Ergo, et toute la family")
# si une simulation est active, n'oubliez pas de la quitter
from poppy.creatures import PoppyTorso
torso = PoppyTorso(simulator='vrep')
# if a simulation is running, don't forget to stop it
from poppy.creatures import PoppyErgoJr
ergo = PoppyErgoJr(simulator='vrep')
# try your own code ;)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3 - Restart the simulation
Step2: 4 - Shut down the simulation
Step3: 5 - Motors & sensors
Step4: Explanation
Step5: Explanation
Step6: Explanation
Step7: More information about upcoming Poppy creatures in this topic
Step8: Instantiate Ergo Jr
Step9: etc
|
1,924
|
<ASSISTANT_TASK:>
Python Code:
new_data = new_data.to_crs("+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ")
new_data['logBiomass'] = new_data.apply(lambda x : np.log(x.plotBiomass),axis=1)
new_data['newLon'] = new_data.apply(lambda c : c.geometry.x, axis=1)
new_data['newLat'] = new_data.apply(lambda c : c.geometry.y, axis=1)
new_data.plot(column='SppN')
new_data['logBiomass'] = np.log(new_data.plotBiomass)
new_data['logSppn'] = np.log(new_data.SppN)
new_data.logBiomass.plot.hist()
### Now with statsmodels.api
#xx = X.SppN.values.reshape(-1,1)
#xx = sm.add_constant(xx)
#model = sm.OLS(Y.values.reshape(-1,1),xx)
import statsmodels.formula.api as smf
model = smf.ols(formula='logBiomass ~ SppN',data=new_data)
results = model.fit()
param_model = results.params
results.summary()
### Now with statsmodels.api
#xx = X.SppN.values.reshape(-1,1)
#xx = sm.add_constant(xx)
#model = sm.OLS(Y.values.reshape(-1,1),xx)
import statsmodels.formula.api as smf
model = smf.ols(formula='logBiomass ~ logSppn',data=new_data)
results = model.fit()
param_model = results.params
results.summary()
new_data['residuals1'] = results.resid
fig, axes = plt.subplots(nrows=2, ncols=2)
#plt.scatter(new_data.newLon,new_data.residuals1,subplots=True, layout=(1,2))
#plt.scatter(new_data.newLat,new_data.residuals1,subplots=True, layout=(1,2))
new_data.residuals1.hist(ax=axes[0][0])
#axes[1][0] =
plt.scatter(new_data.newLat,new_data.residuals1)
#axes[1][1] =
plt.scatter(new_data.newLon,new_data.residuals1)
plt.title("Residuals of: $log(Biomass) = Spp_richness$")
# COnsider the the following subregion
section = new_data[lambda x: (x.LON > -90) & (x.LON < -80) & (x.LAT > 30) & (x.LAT < 32) ]
section.plot(column='residuals1')
section.plot(column='plotBiomass')
plt.scatter(section.newLon,section.residuals1)
plt.scatter(section.newLat,section.residuals1)
vg = tools.Variogram(section,'residuals1')
#vg.calculate_empirical(n_bins=50)
%time vg.plot(num_iterations=90,n_bins=10,refresh=True)
# COnsider the the following subregion
section = new_data[lambda x: (x.LON > -90) & (x.LON < -88) & (x.LAT > 30) & (x.LAT < 40) ]
section.plot(column='residuals1')
section.plot(column='plotBiomass')
vg = tools.Variogram(section,'residuals1')
#vg.calculate_empirical(n_bins=50)
%time vg.plot(num_iterations=90,n_bins=30,refresh=True)
xx = pd.DataFrame({'dist' : vg.distance_coordinates.flatten() , 'y' : vg.distance_responses.flatten()})
vg = tools.Variogram(section,'residuals1',using_distance_threshold=3000)
#vg.calculate_empirical(n_bins=50)
%time vg.plot(num_iterations=80,n_bins=20,refresh=True,with_envelope=True)
len(data.lon)
#X = data[['AET','StandAge','lon','lat']]
#X = section[['SppN','lon','lat']]
#X = section[['SppN','newLon','newLat']]
X = section[['newLon','newLat']]
#Y = section['plotBiomass']
logY = section['logBiomass']
Y = section[['residuals1']]
#Y = data[['SppN']]
## First step in spatial autocorrelation
#Y = pd.DataFrame(np.zeros(len(Y)))
## Let´s take a small sample only for the spatial autocorrelation
import numpy as np
sample_size = 2000
randindx = np.random.randint(0,X.shape[0],sample_size)
nX = X.loc[randindx]
nY = Y.loc[randindx]
# Import GPFlow
import GPflow as gf
k = gf.kernels.Matern12(2, active_dims = [0,1],lengthscales=1000 )
#k = gf.kernels.Matern12(2,lengthscales=700000) + gf.kernels.Constant?
#k = gf.kernels.RBF(2,variance=2.0,lengthscales=4000) + gf.kernels.RBF(2,variance=2.0,lengthscales=700000) + gf.kernels.Constant(2,variance=1.0,active_dims=[0,1])
model = gf.gpr.GPR(section[['newLon','newLat']].as_matrix(),section.residuals1.values.reshape(-1,1),k)
%time model.optimize()
model.get_parameter_dict()
import numpy as np
Nn = 300
dsc = section
predicted_x = np.linspace(min(dsc.newLon),max(dsc.newLon),Nn)
predicted_y = np.linspace(min(dsc.newLat),max(dsc.newLat),Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_y)
## Fake richness
fake_sp_rich = np.ones(len(Xx.ravel()))
predicted_coordinates = np.vstack([ Xx.ravel(), Yy.ravel()]).transpose()
#predicted_coordinates = np.vstack([section.SppN, section.newLon,section.newLat]).transpose()
predicted_coordinates.shape
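# predict_y returns the posterior predictive mean and variance (including observation noise)
# at the supplied coordinates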
means,variances = model.predict_y(predicted_coordinates)
variances = np.array(variances)
means = np.array(means)
fig = plt.figure(figsize=(16,10), dpi= 80, facecolor='w', edgecolor='w')
#plt.pcolor(Xx,Yy,np.sqrt(variances.reshape(Nn,Nn))) #,cmap=plt.cm.Greens)
plt.pcolormesh(Xx,Yy,np.sqrt(variances.reshape(Nn,Nn)))
plt.colorbar()
plt.scatter(dsc.newLon,dsc.newLat,c=dsc.SppN,edgecolors='')
plt.title("VAriance Biomass")
plt.colorbar()
import cartopy
plt.figure(figsize=(17,11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
#ax.set_extent([-100, -60, 20, 50])
#ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)
mm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn),transform=proj )
#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')
cs = plt.contour(Xx,Yy,means.reshape(Nn,Nn),linewidths=2,colors='k',linestyles='dotted',levels=[4.0,5.0,6.0,7.0,8.0])
plt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')
#ax.scatter(new_data.lon,new_data.lat,c=new_data.error,edgecolors='',transform=proj,cmap=plt.cm.Greys,alpha=0.2)
plt.colorbar(mm)
plt.title("Predicted Species Richness")
#(x.LON > -90) & (x.LON < -80) & (x.LAT > 40) & (x.LAT < 50)
#### test to check duplicates
new_data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add log of the Biomass
Step2: Linear Regression
Step3: STOPPPP!!
Step4: Now with distance restriction (experimental!)
Step5: Model Fitting Using a GLM
Step6: Stop here! The rest is just nonsense and leftover scraps
|
1,925
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
b.get_setting()
b['setting']
b['plotting_backend@setting']
b['plotting_backend@setting'].choices
b['log_history@setting'].description
b['log_history@setting']
b.history_enabled
b.enable_history()
b['log_history@setting']
b.history_enabled
b['dict_set_all@setting']
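# with 'dict_set_all' enabled, dictionary-style assignment (e.g. b['teff@component'] = ...)
# applies set_value_all to every matching parameter instead of set_value (demonstrated below)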
b['teff@component']
b.set_value_all('teff@component', 4000)
print b['value@teff@primary@component'], b['value@teff@secondary@component']
b['dict_set_all@setting'] = True
b['teff@component'] = 8000
print b['value@teff@primary@component'], b['value@teff@secondary@component']
b['dict_set_all@setting'] = False
b['incl']
b['dict_filter@setting'] = {'context': 'component'}
b['incl']
b.filter(qualifier='incl')
b.set_value('dict_filter@setting', {})
b['plotting_backend@setting']
b['plotting_backend@setting'].choices
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Accessing Settings
Step3: or via filtering/twig access
Step4: and can be set as any other Parameter in the Bundle
Step5: Available Settings
Step6: This parameter can also be set by calling b.enable_history() or b.disable_history() and can be accessed with b.history_enabled.
Step7: dict_set_all
Step8: In our default binary there are temperature ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do
Step9: If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter
Step10: Now let's disable this so it doesn't confuse us while looking at the other options
Step11: dict_filter
Step12: In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).
Step13: Now we no longer see the constraint parameters.
Step14: Now let's reset this option... keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter
Step15: plotting_backend
|
1,926
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)
import statsmodels.formula.api as smf # package we'll be using for linear regression
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import numpy as np
plt.style.use('ggplot')
import dateutil.parser
import math
import random
import matplotlib.ticker as plticker
matplotlib.rcParams['ps.fonttype'] = 42
df = pd.read_csv("data/hanford.csv")
df.head()
df.describe()
df.median()
rang= df['Mortality'].max() - df['Mortality'].min()
rang
iqr_m = df['Mortality'].quantile(q=0.75)- df['Mortality'].quantile(q=0.25)
iqr_m
iqr_e = df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)
iqr_e
UAL_m= (iqr_m*1.5) + df['Mortality'].quantile(q=0.75)
UAL_m
UAL_e= (iqr_e*1.5) + df['Exposure'].quantile(q=0.75)
UAL_e
LAL_m= df['Mortality'].quantile(q=0.25) - (iqr_m*1.5)
LAL_m
LAL_e= df['Exposure'].quantile(q=0.25) - (iqr_e*1.5)
LAL_e
len(df[df['Mortality']> UAL_m])
len(df[df['Exposure']> UAL_e])
len(df[df['Mortality']< LAL_m])
len(df[df['Exposure'] < LAL_e])
df.corr()
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
def exposure_predict(exposure):
df = pd.read_csv("data/hanford.csv")
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
mortality = exposure * lm.params.Exposure + lm.params.Intercept
return mortality
intercept, slope = lm.params
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red")
exposure_predict(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
|
1,927
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
from cpyquickhelper.examples.vector_container_python import (
RandomTensorVectorFloat, RandomTensorVectorFloat2)
rnd = RandomTensorVectorFloat(10, 10)
result = rnd.get_tensor_vector()
print(result)
result_ref = rnd.get_tensor_vector_ref()
print(result_ref)
rnd2 = RandomTensorVectorFloat2(10, 10)
result2 = rnd2.get_tensor_vector()
print(result2)
result2_ref = rnd2.get_tensor_vector_ref()
print(result2_ref)
%timeit rnd.get_tensor_vector()
%timeit rnd.get_tensor_vector_ref()
%timeit rnd2.get_tensor_vector()
%timeit rnd2.get_tensor_vector_ref()
import itertools
from cpyquickhelper.numbers.speed_measure import measure_time
from tqdm import tqdm
import pandas
data = []
sizes = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 5000, 10000]
sizes = list(itertools.product(sizes, sizes))
for i, j in tqdm(sizes):
if j >= 1000:
if i > 1000:
continue
if i * j >= 1e6:
repeat, number = 3, 3
else:
repeat, number = 10, 10
rnd = RandomTensorVectorFloat(i, j)
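# measure_time runs the callable repeat x number times and returns a dict of timing
# statistics; its 'average' key is what the pivot table below aggregates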
obs = measure_time(lambda: rnd.get_tensor_vector(), repeat=repeat, number=number, div_by_number=True)
obs['name'] = 'capsule'
obs['n_vectors'] = i
obs['size'] = j
data.append(obs)
rnd2 = RandomTensorVectorFloat2(i, j)
obs = measure_time(lambda: rnd2.get_tensor_vector(), repeat=repeat, number=number, div_by_number=True)
obs['name'] = 'list'
obs['n_vectors'] = i
obs['size'] = j
data.append(obs)
obs = measure_time(lambda: rnd2.get_tensor_vector_ref(), repeat=repeat, number=number, div_by_number=True)
obs['name'] = 'ref'
obs['n_vectors'] = i
obs['size'] = j
data.append(obs)
df = pandas.DataFrame(data)
df.tail()
piv = pandas.pivot_table(df, index=['n_vectors', 'size'], columns=['name'], values='average')
piv['ratio'] = piv['capsule'] / piv['list']
piv.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
piv[['capsule', 'list', 'ref']].plot(logy=True, ax=ax[0], title='Capsule (OPAQUE) / list')
piv.sort_values('ratio', ascending=False)[['ratio']].plot(ax=ax[1], title='Ratio Capsule (OPAQUE) / list');
flat = piv.reset_index(drop=False)[['n_vectors', 'size', 'ratio']]
flat_piv = flat.pivot(index='n_vectors', columns='size', values='ratio')
flat_piv
import numpy
import seaborn
seaborn.heatmap(numpy.minimum(flat_piv.values, 1), cmap="YlGnBu",
xticklabels=list(flat_piv.index), yticklabels=list(flat_piv.columns));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two identical classes
Step2: Scenarii
|
1,928
|
<ASSISTANT_TASK:>
Python Code:
from flow.scenarios import MergeScenario
from flow.core.params import VehicleParams
from flow.controllers import IDMController
from flow.core.params import SumoCarFollowingParams
# create an empty vehicles object
vehicles = VehicleParams()
# add some vehicles to this object of type "human"
vehicles.add("human",
acceleration_controller=(IDMController, {}),
car_following_params=SumoCarFollowingParams(
speed_mode="obey_safe_speed",
# we use the speed mode "obey_safe_speed" for better dynamics at the merge
),
num_vehicles=20)
from flow.core.params import InFlows
inflow = InFlows()
inflow.add(veh_type="human",
edge="inflow_highway",
vehs_per_hour=2000)
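# add() needs at least a vehicle type, the entry edge, and an inflow rate (vehs_per_hour here);
# the same is done below for the merge ramp at a lower rate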
inflow.add(veh_type="human",
edge="inflow_merge",
vehs_per_hour=100)
from flow.scenarios.merge import ADDITIONAL_NET_PARAMS
from flow.core.params import NetParams
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
# make the part of the highway after the merge longer
additional_net_params['post_merge_length'] = 350
# make the number of lanes on the highway be just one
additional_net_params['highway_lanes'] = 1
net_params = NetParams(inflows=inflow, # our inflows
additional_params=additional_net_params)
from flow.core.params import SumoParams, EnvParams, InitialConfig
from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS
from flow.core.experiment import Experiment
sumo_params = SumoParams(render=True,
sim_step=0.2)
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
initial_config = InitialConfig()
scenario = MergeScenario(name="merge-example",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config)
env = AccelEnv(env_params, sumo_params, scenario)
exp = Experiment(env)
_ = exp.run(1, 10000)
inflow.add(veh_type="human",
edge="inflow_highway",
vehs_per_hour=2000)
from flow.core.experiment import Experiment
from flow.core.params import NetParams, EnvParams, InitialConfig, InFlows, \
VehicleParams, SumoParams, SumoCarFollowingParams
from flow.controllers import IDMController
from flow.scenarios import MergeScenario
from flow.scenarios.merge import ADDITIONAL_NET_PARAMS
from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS
# create a vehicle type
vehicles = VehicleParams()
vehicles.add("human",
acceleration_controller=(IDMController, {}),
car_following_params=SumoCarFollowingParams(
speed_mode="obey_safe_speed"))
# create the inflows
inflows = InFlows()
# inflow for (1)
inflows.add(veh_type="human",
edge="inflow_highway",
vehs_per_hour=10000,
depart_lane="random",
depart_speed="speedLimit",
color="white")
# inflow for (2)
inflows.add(veh_type="human",
edge="inflow_merge",
period=2,
depart_lane=0, # right lane
depart_speed=0,
color="green")
# inflow for (3)
inflows.add(veh_type="human",
edge="inflow_merge",
probability=0.1,
depart_lane=1, # left lane
depart_speed="random",
begin=60, # 1 minute
number=30,
color="red")
# modify the network accordingly to instructions
# (the available parameters can be found in flow/scenarios/merge.py)
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
additional_net_params['post_merge_length'] = 350 # this is just for visuals
additional_net_params['highway_lanes'] = 4
additional_net_params['merge_lanes'] = 2
# setup and run the simulation
net_params = NetParams(inflows=inflows,
additional_params=additional_net_params)
sim_params = SumoParams(render=True,
sim_step=0.2)
sim_params.color_vehicles = False
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
initial_config = InitialConfig()
scenario = MergeScenario(name="merge-example",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config)
env = AccelEnv(env_params, sim_params, scenario)
exp = Experiment(env)
_ = exp.run(1, 10000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A schematic of the above network is displayed in the figure below. As we can see, the edges at the start of the main highway and of the on-merge are named inflow_highway and inflow_merge respectively. These names will be important when we begin creating our inflows, as we will need to specify by which edges the vehicles should enter the network.
Step2: We have created a new type of vehicle, called human, and directly inserted 20 vehicles of this type into the network. These vehicles will already be on the network when the simulation starts, unlike the vehicles added by the inflow, which only start entering the network after the simulation begins.
Step3: In order to add new inflows of vehicles of pre-defined types onto specific edges and lanes in the network, we use the InFlows object's add method. This function requires at least the following parameters (more will be shown in section 3)
Step4: Next, we create a second inflow of vehicles on the on-merge lane at a lower rate of 100 vehicules par hour.
Step5: In the next section, we will add our inflows to our network and run a simulation to see them in action.
Step6: Finally, we create and start the simulation, following what is explained in tutorial 1.
Step7: <img src="img/merge_visual.png" width="100%">
Step8: However, this add method has a lot more parameters, which we will talk about now.
|
1,929
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
1,930
|
<ASSISTANT_TASK:>
Python Code:
randinds = np.random.permutation(len(digits.target))
# shuffle the values
from sklearn.utils import shuffle
data, targets = shuffle(digits.data, digits.target, random_state=0)
# scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data)
data_scaled = scaler.transform(data)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_scaled, targets, test_size=0.20, random_state=0)
X_train.shape, y_train.shape
from cgt.distributions import categorical
def model(X, y):
# relu(W*x + b)
np.random.seed(0)
h1 = nn.rectify(nn.Affine(64, 512, weight_init=nn.IIDGaussian(std=.1))(X))
h2 = nn.rectify(nn.Affine(512, 512, weight_init=nn.IIDGaussian(std=.1))(h1))
# softmax probabilities
probs = nn.softmax(nn.Affine(512, 10)(h2))
# our prediction is the highest probability
ypreds = cgt.argmax(probs, axis=1)
acc = cgt.cast(cgt.equal(ypreds, y), cgt.floatX).mean()
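# cross-entropy loss: negative mean log-likelihood of the true labels under the
# predicted class probabilities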
cost = -categorical.loglik(y, probs).mean()
return cost, acc
X = cgt.matrix(name='X', fixed_shape=(None, 64))
y = cgt.vector(name='y', dtype='i8')
cost, acc = model(X, y)
learning_rate = 1e-3
epochs = 100
batch_size = 64
# get all the weight parameters for our model
params = nn.get_parameters(cost)
# train via SGD, use 1e-3 as the learning rate
updates = nn.sgd(cost, params, learning_rate)
# Functions
trainf = cgt.function(inputs=[X,y], outputs=[], updates=updates)
cost_and_accf = cgt.function(inputs=[X,y], outputs=[cost,acc])
import time
for i in xrange(epochs):
t1 = time.time()
for srt in xrange(0, X_train.shape[0], batch_size):
end = batch_size+srt
trainf(X_train[srt:end], y_train[srt:end])
elapsed = time.time() - t1
costval, accval = cost_and_accf(X_test, y_test)
print("Epoch {} took {}, test cost = {}, test accuracy = {}".format(i+1, elapsed, costval, accval))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prep is done, time for the model.
Step2: We've defined the cost and accuracy functions, time to train our model.
|
1,931
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
events_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels
raw.filter(1., 30., fir_design='firwin') # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
proj=True, picks=picks, baseline=None, preload=True,
reject=dict(mag=5e-12), decim=decim, verbose='error')
clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=None,
verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
y=epochs['Left'].events[:, 2] > 2)
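# Score the decoders trained at each time point on the 'Right' epochs, i.e. test
# generalization across both time and experimental condition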
scores = time_gen.score(X=epochs['Right'].get_data(),
y=epochs['Right'].events[:, 2] > 2)
fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will train the classifier on all left visual vs auditory trials
Step2: Score on the epochs where the stimulus was presented to the right.
Step3: Plot
|
1,932
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import fredpy as fp
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
fp.api_key = '################################'
fp.api_key = fp.load_api_key('fred_api_key.txt')
u = fp.series('UNRATE')
plt.plot(u.data.index,u.data.values,'-',lw=3,alpha = 0.65)
plt.grid()
# Download quarterly real GDP data using `fredpy`. Save the data in a variable called gdp
gdp = fp.series('gdpc1')
# Note that gdp is an instance of the `fredpy.series` class
print(type(gdp))
# Print the title, the units, the frequency, the date range, and the source of the gdp data
print(gdp.title)
print(gdp.units)
print(gdp.frequency)
print(gdp.date_range)
print(gdp.source)
# Print the last 4 values of the gdp data
print(gdp.data[-4:],'\n')
# Plot real GDP data
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(gdp.data,'-',lw=3,alpha = 0.65)
ax.grid()
ax.set_title(gdp.title)
ax.set_ylabel(gdp.units)
# Restrict GDP to observations from January 1, 1990 to present
win = ['01-01-1990','01-01-2200']
gdp_win = gdp.window(win)
# Plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(gdp_win.data,'-',lw=3,alpha = 0.65)
ax.grid()
ax.set_title(gdp_win.title)
ax.set_ylabel(gdp_win.units)
# Plot recession bars
gdp_win.recessions()
# Compute and plot the (annualized) quarterly growth rate of real GDP
gdp_pc = gdp.pc(annualized=True)
# Plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(gdp_pc.data,'-',lw=3,alpha = 0.65)
ax.grid()
ax.set_title(gdp_pc.title)
ax.set_ylabel(gdp_pc.units)
# Plot recession bars
gdp_pc.recessions()
# Compute and plot the log of real GDP
gdp_log = gdp.log()
# Plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(gdp_log.data,'-',lw=3,alpha = 0.65)
ax.set_title(gdp_log.title)
ax.set_ylabel(gdp_log.units)
ax.grid()
# Download CPI and GDP deflator data
cpi = fp.series('CPIAUCSL')
deflator = fp.series('GDPDEF')
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
ax.plot(cpi.data,'-',lw=3,alpha = 0.65)
ax.grid()
ax.set_title(cpi.title.split(':')[0])
ax.set_ylabel(cpi.units)
ax = fig.add_subplot(1,2,2)
ax.plot(deflator.data,'-m',lw=3,alpha = 0.65)
ax.grid()
ax.set_title(deflator.title)
ax.set_ylabel(deflator.units)
# The CPI data are produced at a monthly frequency
print(cpi.frequency)
# Convert CPI data to quarterly frequency to conform with the GDP deflator
cpi_Q = cpi.as_frequency(freq='Q')
print(cpi_Q.frequency)
# Compute the inflation rate based on each index
cpi_pi = cpi_Q.apc()
def_pi = deflator.apc()
# Print date ranges for new inflation series
print(cpi_pi.date_range)
print(def_pi.date_range)
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(1,1,1)
ax.plot(cpi_pi.data,'-',lw=3,alpha = 0.65,label='cpi')
ax.plot(def_pi.data,'-',lw=3,alpha = 0.65,label='def')
ax.legend(loc='upper right')
ax.set_title('US inflation')
ax.set_ylabel('Percent')
ax.grid()
# Download unemployment and 3 month T-bill data
unemp = fp.series('UNRATE')
tbill_3m = fp.series('TB3MS')
# Print date ranges for series
print(unemp.date_range)
print(tbill_3m.date_range)
# Equalize the date ranges
unemp, tbill_3m = fp.window_equalize([unemp, tbill_3m])
# Print the new date ranges for series
print()
print(unemp.date_range)
print(tbill_3m.date_range)
# Download nominal GDP, the GDP deflator
gdp = fp.series('GDP')
defl = fp.series('GDPDEF')
# Make sure that all series have the same window of observation
gdp,defl = fp.window_equalize([gdp,defl])
# Deflate GDP series
gdp = gdp.divide(defl)
# Convert GDP to per capita terms
gdp = gdp.per_capita()
# Take log of GDP
gdp = gdp.log()
# Plot log data
fig = plt.figure(figsize=(6,4))
ax1 = fig.add_subplot(1,1,1)
ax1.plot(gdp.data,'-',lw=3,alpha = 0.65)
ax1.grid()
ax1.set_title('log real GDP per capita')
gdp.recessions()
fig.tight_layout()
# Compute the hpfilter
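# hp_filter returns the cyclical and trend components of the (log) series; a smoothing
# parameter of 1600 is conventional for quarterly data (assumed to be the default here)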
gdp_cycle, gdp_trend = gdp.hp_filter()
# Plot log data
fig = plt.figure(figsize=(6,8))
ax1 = fig.add_subplot(2,1,1)
ax1.plot(gdp.data,'-',lw=3,alpha = 0.7,label='actual')
ax1.plot(gdp_trend.data,'r-',lw=3,alpha = 0.65,label='HP trend')
ax1.grid()
ax1.set_title('log real GDP per capita')
gdp.recessions()
ax1.legend(loc='lower right')
fig.tight_layout()
ax1 = fig.add_subplot(2,1,2)
ax1.plot(gdp_cycle.data,'b-',lw=3,alpha = 0.65,label='HP cycle')
ax1.grid()
ax1.set_title('log real GDP per capita - dev from trend')
gdp.recessions()
ax1.legend(loc='lower right')
fig.tight_layout()
u = fp.series('LNS14000028')
p = fp.series('CPIAUCSL')
# Construct the inflation series
p = p.pc(annualized=True)
p = p.ma(length=6,center=True)
# Make sure that the inflation and unemployment series cover the same time interval
p,u = fp.window_equalize([p,u])
# Data
fig = plt.figure()
ax = fig.add_subplot(2,1,1)
ax.plot(u.data,'b-',lw=2)
ax.grid(True)
ax.set_title('Unemployment')
ax = fig.add_subplot(2,1,2)
ax.plot(p.data,'r-',lw=2)
ax.grid(True)
ax.set_title('Inflation')
fig.autofmt_xdate()
# Filter the data
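# Band-pass filter on the monthly series: low=24 and high=84 keep cycles of roughly
# 2-7 years (the usual business-cycle band); K is assumed to set the number of
# leads/lags used by the filter approximation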
p_bpcycle,p_bptrend = p.bp_filter(low=24,high=84,K=84)
u_bpcycle,u_bptrend = u.bp_filter(low=24,high=84,K=84)
# Scatter plot of BP-filtered inflation and unemployment data (Sargent's Figure 1.5)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
t = np.arange(len(u_bpcycle.data))
ax.scatter(u_bpcycle.data,p_bpcycle.data,facecolors='none',alpha=0.75,s=20,c=t, linewidths=1.5)
ax.set_xlabel('unemployment rate (%)')
ax.set_ylabel('inflation rate (%)')
ax.set_title('Inflation and unemployment: BP-filtered data')
ax.grid(True)
# HP filter
p_hpcycle,p_hptrend = p.hp_filter(lamb=129600)
u_hpcycle,u_hptrend = u.hp_filter(lamb=129600)
# Scatter plot of HP-filtered inflation and unemployment data (Sargent's Figure 1.5)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
t = np.arange(len(u_hpcycle.data))
ax.scatter(u_hpcycle.data,p_hpcycle.data,facecolors='none',alpha=0.75,s=20,c=t, linewidths=1.5)
ax.set_xlabel('unemployment rate (%)')
ax.set_ylabel('inflation rate (%)')
ax.set_title('Inflation and unemployment: HP-filtered data')
ax.grid(True)
# Get all available vintages
gdp_vintage_dates = fp.get_vintage_dates('GDPA')
print('Number of vintages available:',len(gdp_vintage_dates))
print('Oldest vintage: ',gdp_vintage_dates[0])
print('Most recent vintage: ',gdp_vintage_dates[-1])
# Download oldest available GDP data
gdp_old = fp.series('GDPA',observation_date = gdp_vintage_dates[0])
# Download most recently available GDP data
gdp_cur = fp.series('GDPA',observation_date = gdp_vintage_dates[-1])
# Equalize date ranges
gdp_old, gdp_cur = fp.window_equalize([gdp_old, gdp_cur])
# Plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(gdp_old.data,lw=3,alpha = 0.65,label=pd.to_datetime(gdp_vintage_dates)[0].strftime('Vintage: %b %Y'))
ax.plot(gdp_cur.data,lw=3,alpha = 0.65,label=pd.to_datetime(gdp_vintage_dates)[-1].strftime('Vintage: %b %Y'))
ax.set_ylabel(gdp_cur.units)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_title('US GDP')
ax.grid()
# create a Pandas DataFrame
df = pd.DataFrame({'inflation':p.data,
'unemployment':u.data},)
print(df.head())
# Export to csv
df.to_csv('data.csv')
import datetime
# Specify the API path
path = 'fred/series/observations'
# Set observation_date string as today's date
observation_date = datetime.datetime.today().strftime('%Y-%m-%d')
# Specify desired parameter values for the API querry
parameters = {'series_id':'unrate',
'observation_date':observation_date,
'file_type':'json'
}
# API request
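# fred_api_request sends the query to the FRED web service and returns the HTTP
# response object (parsed with .json() below)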
r = fp.fred_api_request(api_key=fp.api_key,path=path,parameters=parameters)
# Return results in JSON format
results = r.json()
# Load data, deal with missing values, format dates in index, and set dtype
data = pd.DataFrame(results['observations'],columns =['date','value'])
data = data.replace('.', np.nan)
data['date'] = pd.to_datetime(data['date'])
data = data.set_index('date')['value'].astype(float)
# Plot the unemployment rate
data.plot()
plt.title('Unemployment Rate')
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load API key
Step2: or by reading from a text file containing only the text of the API key in the first line
Step3: If fred_api_key.txt is not in the same directory as your program file, then you must supply the full path of the file.
Step4: More examples
Step5: Even though the CPI inflation rate is on average about 0.3% higher than the GDP deflator inflation rate, the CPI and the GDP deflator produce comparable measures of US inflation.
Step6: Filtering 1
Step7: The post-Great Recession slowdown in US real GDP growth is apparent in the figure.
Step8: Filtering 2
Step9: The choice of filtering method appears to strongly influence the results. While both filtering methods
Step10: From the available vintage dates, use the observation_date keyword in series() to download a desired vintage. For example, download and plot the oldest available and most recent US GDP data.
Step11: Exporting data sets
|
1,933
|
<ASSISTANT_TASK:>
Python Code:
grammar = """
S -> NP VP
S -> VP
NP -> DET N
VP -> V[SUBCAT=tr] NP
VP -> V[SUBCAT=intr]
DET -> "das"
N -> "Kind" | "Buch"
V[SUBCAT=tr] -> "lies"
V[SUBCAT=tr] -> "liest"
V[SUBCAT=intr] -> "schlaf"
V[SUBCAT=intr] -> "schläft"
pos_sentences = [
"das Kind schläft",
"das Kind liest das Buch",
"lies das Buch",
"schlaf"
]
neg_sentences = [
"das Kind schlaf",
"das Kind lies das Buch",
"liest das Buch",
"schläft"
]
import nltk
from IPython.display import display
import sys
def test_grammar(grammar, sentences):
cfg = nltk.grammar.FeatureGrammar.fromstring(grammar)
parser = nltk.parse.FeatureEarleyChartParser(cfg)
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
sys.stdout.flush()
results = parser.parse(sent.split())
analyzed = False
for tree in results:
display(tree)  # alternatively: tree.draw() or print(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
sys.stderr.flush()
test_grammar(grammar, pos_sentences)
test_grammar(grammar, neg_sentences)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise sheet 8
Step2: Here an attempt was made to model imperative sentences. However, this grammar still accepts many ungrammatical sentences.
Step3: Only correct syntax trees should come out here
Step4: Here only "Keine Analyse möglich" (no analysis possible) should be printed
|
1,934
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
1,935
|
<ASSISTANT_TASK:>
Python Code:
import gensim
import os
import collections
import random
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
lee_test_file = test_data_dir + os.sep + 'lee.cor'
def read_corpus(fname, tokens_only=False):
with open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
if tokens_only:
yield gensim.utils.simple_preprocess(line)
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
train_corpus[:2]
print(test_corpus[:2])
model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=10)
model.build_vocab(train_corpus)
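# Hedged aside (not in the original notebook): peek at the learned vocabulary.
# Depending on the gensim version, the word counts live on `model.vocab` or `model.wv.vocab`.
word_counts = getattr(model, 'wv', model).vocab
print(word_counts['penalty'].count)  # raw corpus frequency of the word 'penalty'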
%time model.train(train_corpus)
model.infer_vector(['only', 'you', 'can', 'prevent', 'forrest', 'fires'])
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
collections.Counter(ranks) #96% accuracy
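# Hedged aside (not in the original notebook): express the Counter above as a
# self-similarity rate, i.e. the fraction of documents ranked most similar to themselves.
print('Self-similarity rate: {:.1%}'.format(ranks.count(0) / float(len(ranks))))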
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
# Pick a random document from the train corpus and infer a vector from the model
doc_id = random.randint(0, len(train_corpus) - 1)  # randint is inclusive on both ends
# Compare and print the most/median/least similar documents from the train corpus
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)  # randint is inclusive on both ends
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is it?
Step2: Define a Function to Read and Preprocess Text
Step3: Let's take a look at the training corpus
Step4: And the testing corpus looks like this
Step5: Notice that the testing corpus is just a list of lists and does not contain any tags.
Step6: Build a Vocabulary
Step7: Essentially, the vocabulary is a dictionary (accessible via model.vocab) of all of the unique words extracted from the training corpus, along with their counts (e.g., model.vocab['penalty'].count gives the count for the word penalty).
Step8: Inferring a Vector
Step9: Assessing Model
Step10: Let's count how each document ranks with respect to the training corpus
Step11: Basically, greater than 95% of the inferred documents are found to be most similar to themselves, and about 5% of the time a document is mistakenly most similar to another document. This is great and not entirely surprising. We can take a look at an example
Step12: Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second-ranked document should be significantly lower (assuming the documents are in fact different), and the reasoning becomes obvious when we examine the text itself
Step13: Testing the Model
|
1,936
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (100, 110)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
from string import punctuation
import tensorflow as tf  # tf is used here but was only imported in a later cell
print(tf.__version__)
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
# ids start at 0 so every id indexes a valid row of the (vocab_size, embed_dim) embedding matrix
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tokenizer = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_Mark||',
'?' : '||Question_Mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'--' : '||Dash||',
'\n': '||Return||'
}
return tokenizer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
print(tf.__version__)
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None,None], name='target')
learning_rate = tf.placeholder(tf.float32,name='learning_rate')
# TODO: Implement Function
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def build_rnn_cell(size):
return tf.contrib.rnn.BasicLSTMCell(size, state_is_tuple=True)
def get_init_cell(batch_size, rnn_size, keep_prob=0.7):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
num_layers = 2
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([build_rnn_cell(rnn_size) for _ in range(num_layers)])
# cell=tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
# batch_size = tf.placeholder(tf.int32, [])
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1,1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell,inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embedded = get_embed(input_data,vocab_size, embed_dim)
outputs,final_state = build_rnn(cell, embedded)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None, weights_initializer = tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer()
)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
slice_size = batch_size*seq_length
n_batches = int(len(int_text)/slice_size)
inputs = np.array(int_text[:n_batches*slice_size])
targets = np.array(int_text[1:n_batches*slice_size + 1])
inputs = np.stack(np.split(inputs,batch_size))
targets = np.stack(np.split(targets, batch_size))
batches = []
for b in range(n_batches):
x = inputs[:,b*seq_length:(b+1)*seq_length]
y = targets[:,b*seq_length: (b+1)*seq_length]
batches.append([x,y])
batches = np.array(batches)
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
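# Hedged illustration (not part of the original project): for a small toy input,
# get_batches(list(range(1, 14)), 2, 3) returns an array of shape (2, 2, 2, 3) --
# 2 batches, each an (inputs, targets) pair of 2 sequences of length 3.
# The first batch is x = [[1, 2, 3], [7, 8, 9]] and y = [[2, 3, 4], [8, 9, 10]].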
# Number of Epochs
num_epochs = 60
# Batch Size
batch_size = 20
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
inp = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inp, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
word = np.random.choice(list(int_to_vocab.values()), p=probabilities)
return word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
1,937
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
#print(plt.style.available)
plt.style.use('presentation')
G = 6.67E-11 # Constante de gravitation universelle en m(3)*s(-2)*kg(-1)
Mt = 5.98E24 # Masse de la terre en kg
Rt = 6378E3 # Rayon de la terre en m
Wt = (2*np.pi)/(24*60*60)
print("Vitesse de rotation de la terre = ",Wt,"Radians/seconde")
print("Vitesse de rotation de la terre = ",round (Wt,7),"Radians/seconde")
def AccGravitationnelle (r):
AccGravitationnelle = (G*Mt)/(r**2)
return AccGravitationnelle
def AccCentrifuge (r):
AccCentrifuge = r * (Wt**2)
return AccCentrifuge
def AccTotale (r):
AccTotale = AccGravitationnelle(r) - AccCentrifuge(r)
return AccTotale
print("Accélération gravitationnelle à la surface de la terre:",AccGravitationnelle(Rt), "(m.s-2)")
print("Accélération gravitationnelle à la surface de la terre:",round(AccGravitationnelle(Rt),2), "(m.s-2)")
print("Accélération centrifuge à la surface de la terre:",round(AccCentrifuge(Rt),9), "(m.s-2)")
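# Derivation of the geostationary radius computed below: a point on the cable is in
# equilibrium where gravitational and centrifugal accelerations balance,
#   G*Mt/r**2 == r*Wt**2  =>  r**3 == G*Mt/Wt**2  =>  Rg == (G*Mt/Wt**2)**(1/3).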
Rg =((G*Mt)/(Wt**2))**(1/3)
Hg = (Rg - Rt)
print("Rayon de l'orbite geostationnaire:", int (Rg/1000), 'km')
print("Hauteur de l'orbite geostationnaire:", int (Hg/1000), 'km')
r = np.arange(6E6, 160E6, 1E3)
r_km = r*1E-3
plt.figure(figsize=(16, 12), dpi= 100)
plt.plot(r_km, AccGravitationnelle(r), color="cyan", label='Accélération Gravitationnelle')
plt.plot(r_km, AccCentrifuge(r), color="orange", label='Accélération Centrifuge' )
plt.plot(r_km, AccTotale(r), color="green", label='Accélération Totale' )
plt.axvline(x=Rt*1E-3, linestyle=":", color="brown")
plt.axvline(x=Rg*1E-3, linestyle=":", color="brown")
plt.xlabel("Rayon de l'orbite en km", fontsize=18)
plt.ylabel("Accélération en m.s-2", fontsize=18)
plt.axis([0, 160E3, -1, 11])
plt.grid()
plt.title("Accélérations sur le fil en fonction du rayon de l'orbite", fontsize=22)
plt.legend(loc='upper right', fontsize=16, fancybox='true')
plt.savefig('images/Accelerations.png')
plt.show()
Rf = 150000E3
r = np.arange(6E6, 160E6, 1E3)
r_km = r*1E-3
plt.figure(figsize=(16, 12), dpi= 80)
plt.plot(r_km, AccTotale(r), color="green", label='Accélération Totale' )
#Lignes verticales délimitent les trois rayons:
plt.axvline(x=Rt*1E-3, linestyle=":", color="brown")
plt.axvline(x=Rg*1E-3, linestyle=":", color="brown")
plt.axvline(x=Rf*1E-3, linestyle=":", color="brown")
plt.fill_between(r_km, AccTotale(r), 0, where=(r_km > Rt*1E-3)&(r_km < Rg*1E-3), color='orange', alpha=.4)
plt.fill_between(r_km, AccTotale(r), 0, where=(r_km > Rg*1E-3)&(r_km < Rf*1E-3), color="green", alpha=.4)
plt.xlabel("Rayon de l'orbite en km")
plt.ylabel("Accélération en m.s-2")
plt.axis([0, 160E3, -1, 11])
plt.grid()
plt.title("Accélération sur le fil en fonction du rayon de l'orbite")
plt.legend(loc='upper right')
plt.savefig('images/Equilibre.png')
plt.show()
NombreMaillons = 16
LongueurMaillon = 1E-3*(Rg-Rt)/NombreMaillons
print("Nombre de maillons:", NombreMaillons, "; Longueur d'un maillon:",round(LongueurMaillon,0), "km")
fig, ax = plt.subplots(figsize=(16, 8))
ax.set_xticks(np.arange(Rt*1E-3, Rg*1E-3, LongueurMaillon))
r = np.arange(Rt, Rg, 1E3)
r_km = r*1E-3
plt.plot(r_km, AccTotale(r), color="blue", label="Accélération")
r = np.arange(Rt, Rg, LongueurMaillon*1E3)
r_km = r*1E-3
plt.step(r_km, AccTotale(r), where='pre', color="orange", label="Approximation par défault")
plt.step(r_km, AccTotale(r), where='post', color="green", label="Approximation par excés")
plt.axvline(x=Rt*1E-3, linestyle=":", color="brown")
plt.axvline(x=Rg*1E-3, linestyle=":", color="brown")
ax.grid(markevery=LongueurMaillon)
plt.ylim(-1, 25)
plt.legend(loc='upper right')
plt.title("Expérience sur l'approximation de l'intégrale de l'accélération")
plt.savefig('images/Integrale.png')
plt.show()
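# Derivation of the primitive implemented below: the tension per unit mass at radius r is
# the integral of the net acceleration from Rt to r,
#   Integral(G*Mt/s**2 - Wt**2*s, s = Rt..r) = G*Mt*(1/Rt - 1/r) - Wt**2*(r**2 - Rt**2)/2,
# and substituting Wt**2 == G*Mt/Rg**3 gives exactly the expression returned by IntegraleAcc(r).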
def IntegraleAcc (r):
IntegraleAcc = G*Mt*( -1/r - (r**2)/(2*Rg**3) + 1/Rt + (Rt**2)/(2*Rg**3))
return IntegraleAcc
NombreMaillons = 16
print(Rt)
LongueurMaillon = (Rg-Rt)/NombreMaillons
print("Nombre de maillons:", NombreMaillons, ",Longueur d'un maillon:",round(LongueurMaillon*1E-3,0), "km")
# Maillons est la liste
Maillons_pre = np.linspace(Rt, Rg,NombreMaillons+1)
Maillons = Maillons_pre + LongueurMaillon/2
Maillons_post = Maillons_pre + LongueurMaillon
Acc_pre = AccTotale(Maillons_pre)
Acc = AccTotale(Maillons)
Acc_post = AccTotale(Maillons_post)
# Obtenir les valeurs maximum et minimum de l'accélération
AccMax = np.maximum (Acc_pre,Acc_post)
AccMin = np.minimum (Acc_pre,Acc_post)
# Représentation graphique:
r = np.arange(Rt, Rg, 1E3)
r_km = r*1E-3
plt.figure(figsize=(16, 10), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(Maillons, AccMax, color='red', label='Approximation par excès')
plt.plot(r, AccTotale(r), color='blue', label="Acceleration Totale")
plt.plot(Maillons, AccMin, color='green', label="Approximation par défaut")
plt.plot(Maillons, AccMax,'ro')
plt.plot(Maillons, Acc, 'bo')
plt.plot(Maillons, AccMin,'go')
plt.xticks(Maillons)
plt.ylim(-1, 25)
plt.grid()
plt.axvline(x=Rt, linestyle=":", color="brown")
plt.axvline(x=Rg, linestyle=":", color="brown")
plt.legend()
plt.title("Encadrer l'accélération")
plt.savefig('images/AccMaxMin.png')
plt.show()
NombreMaillons = 16
print(Rt)
LongueurMaillon = (Rg-Rt)/NombreMaillons
print("Nombre de maillons:", NombreMaillons, ",Longueur d'un maillon:",round(LongueurMaillon*1E-3,0), "km")
# Maillons est la liste
Maillons_pre = np.linspace(Rt, Rg,NombreMaillons+1)
Maillons = Maillons_pre + LongueurMaillon/2
Maillons_post = Maillons_pre + LongueurMaillon
Acc_pre = AccTotale(Maillons_pre)
Acc = AccTotale(Maillons)
Acc_post = AccTotale(Maillons_post)
# Obtenir les valeurs maximum et minimum de l'accélération
AccMax = np.maximum (Acc_pre,Acc_post)
AccMin = np.minimum (Acc_pre,Acc_post)
#Surface de chaque maillon.
Aire_max = LongueurMaillon*AccMax
Aire_min = LongueurMaillon*AccMin
Int_min = [0]*(NombreMaillons+1)
Int_max = [0]*(NombreMaillons+1)
#Calculer les approximations
for i in range(NombreMaillons+1):
Int_min[i] = sum(Aire_min[0:i] )
Int_max[i] = sum(Aire_max[0:i] )
# Représentation graphique:
r = np.arange(Rt, Rg, 1E3)
r_km = r*1E-3
plt.figure(figsize=(16, 10), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(r, IntegraleAcc(r), label="Fonction primitive de l'accelération Totale")
plt.plot(Maillons, Int_min, label='Intégrale, approximation min')
plt.plot(Maillons, Int_max, label='Intégrale, approximation max')
plt.xticks(Maillons)
plt.axvline(x=Rt, linestyle=":", color="brown")
plt.axvline(x=Rg, linestyle=":", color="brown")
plt.grid()
plt.legend(loc='lower center')
plt.title("Expérience sur l'approximation de la l'intégrale de l'accélération")
plt.savefig('images/Tension-16.png')
plt.show()
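# Derivation of Rf used below: for a cable with no counterweight, the integrated net force
# over the whole cable must vanish, G*Mt*(1/Rt - 1/Rf) == Wt**2*(Rf**2 - Rt**2)/2.
# Substituting Wt**2 == G*Mt/Rg**3 reduces this to Rf**2 + Rt*Rf - 2*Rg**3/Rt == 0,
# whose positive root is Rf == (Rt/2)*(sqrt(1 + 8*(Rg/Rt)**3) - 1), the formula quoted
# from "Physics of the Space Elevator".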
# Rf: rayon du bout final du cable, tension nulle
# Valeur théorique trouvée dans la documentation, "Physics of the Space Elevator"
Rf = (Rt/2)*( np.sqrt(1+8*(Rg/Rt)**3) - 1)
LongueurCable = Rf-Rt
print ("Rayon final: Rf=",int(Rf*1E-3),"km")
print ("Longueur totale du cable, LongueurCable =", int(LongueurCable*1E-3),"km")
r = np.arange(6E6, 160E6, 1E3)
r_km = r*1E-3
plt.figure(figsize=(16, 12), dpi= 100, facecolor='w', edgecolor='k')
#Première figure: 2 rangées, 1 colonne, figure 1
plt.subplot(211)
plt.title("Accélération Totale et son intégrale en fonction du rayon")
plt.axvline(x=Rt*1E-3, linestyle=":", color="brown")
plt.text(Rt*1E-3, -3.2, 'Rt', fontsize ='16', color="brown")
plt.axvline(x=Rg*1E-3, linestyle=":", color="brown")
plt.text(Rg*1E-3, -3.2, 'Rg', fontsize ='16', color="brown")
plt.axvline(x=Rf*1E-3, linestyle=":", color="brown")
plt.text(Rf*1E-3, -3.2, 'Rf', fontsize ='16', color="brown")
plt.ylabel("Accélération Totale (m.s-2)")
plt.plot(r_km, AccTotale(r), color="green", linewidth='1.5', label="Accélération Totale")
plt.legend()
plt.grid()
#Deuxième figure: 2 rangées, 1 colonne, figure 2
plt.subplot(212)
plt.axvline(x=Rt*1E-3, linestyle=":", color="brown")
plt.axvline(x=Rg*1E-3, linestyle=":", color="brown")
plt.axvline(x=Rf*1E-3, linestyle=":", color="brown")
plt.plot(r_km, IntegraleAcc(r), label="Integrale de l'accélération")
plt.xlabel("Rayon de l'orbite en km")
plt.ylabel("Intégrale de l'accélération Totale (m2.s-2)")
plt.legend()
plt.grid()
plt.savefig('images/acctotale-integrale.png')
plt.show()
densite_acier = 7900 # kg/m3
densite_kevlar = 1440 # kg/m3
densite_cnt = 1300 # kg/m3
Tmax_acier = 5E9 # Pascals (N/m2)
Tmax_kevlar = 3.6E9 # Pascals (N/m2)
Tmax_cnt = 130E9 # Pascals (N/m2)
r = np.arange(Rt, Rg, 1E3)
r_km = r*1E-3
plt.figure(figsize=(16, 12), dpi= 80, facecolor='w', edgecolor='k')
plt.axvline(x=Rg*1E-3, linestyle=":", color="brown")
#Acier:
plt.plot(r_km, IntegraleAcc(r)*densite_acier, color="red", label="Tension Acier")
plt.axhline(y=Tmax_acier, linestyle=":", color="red", linewidth=3, label="Résistance Acier" )
#Kevlar:
plt.plot(r_km, IntegraleAcc(r)*densite_kevlar, color="orange", label="Tension Kevlar")
plt.axhline(y=Tmax_kevlar, linestyle=":", color="orange",linewidth=3, label="Résistance Kevlar")
#Carbon nanotubes:
plt.plot(r_km, IntegraleAcc(r)*densite_cnt, color="green", label="Tension CNT")
plt.axhline(y=Tmax_cnt, linestyle=":", color="green", label="Résistance CNT")
plt.xlabel("Rayon de l'orbite en km")
plt.ylabel("Tension en Pascals (Pa), (N.m-2)")
plt.grid()
plt.title("Tension et résistance du cable")
plt.legend(loc="upper left")
plt.savefig('images/materiaux.png')
plt.show()
print("Tension maximum du cable au point Rg:" )
print("Tension Maximum par unité de masse:", int(IntegraleAcc(Rg))*1E-9,"Giga Pascals/kg")
print("Tension maximum Rg, Acier:", int(IntegraleAcc(Rg)*densite_acier*1E-9), "Giga Pascals")
print("Tension maximum Rg, Kevlar:", int(IntegraleAcc(Rg)*densite_kevlar*1E-9), "Giga Pascals")
print("Tension maximum Rg, CNT:", int(IntegraleAcc(Rg)*densite_cnt*1E-9), "Giga Pascals")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's define the constants used
Step2: 1. Orbital motion
Step3: 1.2 Acceleration functions
Step4: Example
Step5: Example
Step6: 1.3 Computing the altitude of the geostationary orbit
Step7: Graphical representation
Step8: We observe that the total force vanishes at the radius of the geostationary orbit. This green curve shows the force per unit length and per unit mass exerted on each point of the cable. The force is negative (attractive) below the geostationary orbit and positive above it.
Step9: 2.2 Modelling the cable
Step10: Solving the integral
Step11: Bracketing the acceleration
Step12: Approximating the tension
Step13: 2.3 Computing the counterweight and the cable length
Step14: Below, in two separate plots
Step15: 3. Which materials for the cable?
Step16: Computing the cable's strength
|
1,938
|
<ASSISTANT_TASK:>
Python Code:
# imports
import pandas as pd
import numpy as np
import time
import os
from tabulate import tabulate
import sys
from operator import add
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
from pyspark.sql import functions as F #https://stackoverflow.com/questions/39504950/python-pyspark-get-sum-of-a-pyspark-dataframe-column-values
from pyspark.sql.functions import monotonically_increasing_id
from DataPreperation import DataPreperation
#.config('spark.executor.cores','6') \
spark = SparkSession.builder \
.appName("App") \
.getOrCreate()
# .master("local[*]") \
# .config('spark.cores.max','16')
#.master("local") \
# .config("spark.some.config.option", "some-value") \
spark.sparkContext.setLogLevel('WARN') #Get rid of all the junk in output
Y = 'SalePrice'
ID_VAR = 'Id'
DROPS = [ID_VAR]
original_train = spark.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data_sets/kaggle_house/train.csv')
original_test = spark.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data_sets/kaggle_house/test.csv')
#add an id column for row reference
# original_train.withColumn("id", monotonically_increasing_id())
# original_test.withColumn("id", monotonically_increasing_id())
#this needs to be done for h2o glm.predict() bug (which needs same number of columns)
# test = test.withColumn(Y,test[ID_VAR])
# (train,valid) = original_train.randomSplit([0.7,0.3], seed=123)
# train.describe().show()
numerics, categoricals = DataPreperation.get_type_lists(frame=original_train,rejects=[ID_VAR,Y],frame_type='spark')
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
original_train.select('TotalBsmtSF',Y).toPandas().head()
trace = go.Scatter(
x = original_train.select('TotalBsmtSF').rdd.flatMap(list).collect(),
y = original_train.select(Y).rdd.flatMap(list).collect(),
mode = 'markers'
)
data = [trace]
# Plot and embed in ipython notebook!
iplot(data)#, filename='basic-scatter')
original_train = DataPreperation.winsorize_columns(original_train,['TotalBsmtSF'],\
winzerize_type='percentile',limits =0.1)
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
original_train.select('TotalBsmtSF',Y).toPandas().head()
trace = go.Scatter(
x = original_train.select('TotalBsmtSF').rdd.flatMap(list).collect(),
y = original_train.select(Y).rdd.flatMap(list).collect(),
mode = 'markers'
)
data = [trace]
# Plot and embed in ipython notebook!
iplot(data)#, filename='basic-scatter')
print('Column before encoding...')
print(original_train.select('RoofStyle').rdd.flatMap(list).collect()[0:49])
print()
original_train = DataPreperation.label_encoder(original_train,['RoofStyle'])
print()
numerics, categoricals = DataPreperation.get_type_lists(frame=original_train,rejects=[ID_VAR,Y],frame_type='spark')
print()
print('Column after encoding...')
print(original_train.select('RoofStyle_encoded').rdd.flatMap(list).collect()[0:49])
#Here is how to do polynomical expansion
train_corr = DataPreperation.get_top_correlations(original_train,numerics)
# https://plot.ly/python/figure-factory/table/
import plotly.figure_factory as ff
corr_df = pd.DataFrame(columns=['columns', 'correlation', 'correlation_abs'])
for idx, d in enumerate(train_corr):
corr_df.loc[idx] = [d['columns'],d['correlation'],d['correlation_abs']]
table = ff.create_table(corr_df)
iplot(table, filename='pandas_table')
for idx, row in corr_df.iterrows():
if(corr_df.loc[idx]['correlation_abs'] >0.5 and corr_df.loc[idx]['correlation_abs'] != 1): #Set a cutoff: only combine pairs with absolute correlation greater than 0.5
original_train = DataPreperation.feature_combiner(original_train,columns=corr_df.loc[idx]['columns'])
original_test = DataPreperation.feature_combiner(original_test,columns=corr_df.loc[idx]['columns'])
#show the results
table = ff.create_table(original_train.select('GarageArea','GarageCars','GarageArea|GarageCars').toPandas().sample(10))
table.layout.width=1000
iplot(table, filename='pandas_table')
original_train = DataPreperation.polynomial_expansion(original_train,['1stFlrSF'],degree=3)
original_test = DataPreperation.polynomial_expansion(original_test,['1stFlrSF'],degree=3)
#show the results
print(original_train.select(ID_VAR,'1stFlrSF','1stFlrSF_^2','1stFlrSF_^3').toPandas().sample(2))
table = ff.create_table(original_train.select(ID_VAR,'1stFlrSF','1stFlrSF_^2','1stFlrSF_^3').toPandas().sample(10))
table.layout.width=1000
iplot(table, filename='pandas_table')
(train,valid) = original_train.randomSplit([0.7,0.3], seed=123)
print("Encoding categorical variables...")
for i, var in enumerate(['MSZoning']):
total = len(categoricals)
print('Encoding: ' + var + ' (' + str(i+1) + '/' + str(total) + ') ...')
train,valid, original_test = DataPreperation.shrunken_averages_encoder(train, valid_frame = valid,test_frame=original_test,\
x=var, y=Y, lambda_=0.15, perturb_range=0.05,threshold=150,\
test=False, frame_type='spark',test_does_have_y=False,id_col=ID_VAR)
table = ff.create_table(train.select('MSZoning','MSZoning_Tencode').toPandas().sample(15))
iplot(table, filename='pandas_table')
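# Hedged aside (not from the original notebook): the general idea behind shrunken-average
# ("target") encoding is to replace each level of a categorical column by a blend of that
# level's mean target and the overall mean target, roughly
#   encoded = lambda_ * mean(SalePrice | level) + (1 - lambda_) * mean(SalePrice),
# optionally with a small random perturbation (perturb_range) to reduce overfitting.
# The exact weighting and the role of `threshold` inside
# DataPreperation.shrunken_averages_encoder may differ from this sketch.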
# original_train = spark.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data_sets/kaggle_house/train.csv')
# original_test = spark.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data_sets/kaggle_house/test.csv')
# (train,valid) = original_train.randomSplit([0.7,0.3], seed=123)
#PCA does not handle null values and there was some in test
# train.na.drop()
# valid.na.drop()
# original_test.na.drop()
# original_test.GarageArea.cast('float')
# original_test.GarageCars.cast('float')
for idx, row in corr_df.iterrows():
if(corr_df.loc[idx]['correlation_abs'] >.7 and corr_df.loc[idx]['correlation_abs'] != 1): #Set a cutoff only combine values greater then .7
print('Doing PCA for', corr_df.loc[idx]['columns'])
#The test data was messy so i couldnt include test it has 'NA' which made for errors
train,valid = DataPreperation.dimensionality_reduction(train, valid_frame = valid,test_frame=None,\
columns=corr_df.loc[idx]['columns'],n_comp=2,\
random_seed=420,decompositions_to_run=['PCA'],\
frame_type='spark',test_does_have_y=False,\
only_return_decompositions=False,id_col=ID_VAR,\
column_name=corr_df.loc[idx]['columns'][0]+'&'+corr_df.loc[idx]['columns'][1])#show the results
table = ff.create_table(train.select('GarageArea','GarageCars','GarageArea&GarageCars_pca_1','1stFlrSF&TotalBsmtSF_pca_2').toPandas()[0:10])
# table = ff.create_table(train.select('1stFlrSF','TotalBsmtSF','1stFlrSF&TotalBsmtSF_pca_1','1stFlrSF&TotalBsmtSF_pca_2').toPandas()[0:10])
table.layout.width=1000
iplot(table, filename='pandas_table')
# original_train = spark.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data_sets/kaggle_house/train.csv')
# original_test = spark.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('data_sets/kaggle_house/test.csv')
# (train,valid) = original_train.randomSplit([0.7,0.3], seed=123)
#PCA does not handle null values and there was some in test
# train.na.drop()
# valid.na.drop()
# original_test.na.drop()
# original_test.GarageArea.cast('float')
# original_test.GarageCars.cast('float')
for idx, row in corr_df.iterrows():
if(corr_df.loc[idx]['correlation_abs'] >.7 and corr_df.loc[idx]['correlation_abs'] != 1): #Set a cutoff only combine values greater then .7
print('Doing SVD for', corr_df.loc[idx]['columns'])
#The test data was messy so i couldnt include test it has 'NA' which made for errors
train,valid = DataPreperation.dimensionality_reduction(train, valid_frame = valid,test_frame=None,\
columns=corr_df.loc[idx]['columns'],n_comp=2,\
random_seed=420,decompositions_to_run=['SVD'],\
frame_type='spark',test_does_have_y=False,\
only_return_decompositions=False,id_col=ID_VAR,\
column_name=corr_df.loc[idx]['columns'][0]+'&'+corr_df.loc[idx]['columns'][1])#show the results
table = ff.create_table(train.select('GarageArea','GarageCars','GarageArea&GarageCars_svd_1','1stFlrSF&TotalBsmtSF_svd_2').toPandas()[0:10])
# table = ff.create_table(train.select('1stFlrSF','TotalBsmtSF','1stFlrSF&TotalBsmtSF_pca_1','1stFlrSF&TotalBsmtSF_pca_2').toPandas()[0:10])
table.layout.width=1000
iplot(table, filename='pandas_table')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data types
Step2: Dealing with Outliers
Step3: Winsorize for Outliers
Step4: New Chart
Step5: Label Encoding
Step6: Feature interaction
Step7: Polynomial Expansion
Step8: Perturbed Rate-by-Level with Shrunken Averages
Step9: Dimensionality Reduction PCA
Step10: Dimensionality Reduction SVD (cont.)
|
1,939
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
temp_cidade1 = np.array([33.15,32.08,32.10,33.25,33.01,33.05,32.00,31.10,32.27,33.81])
temp_cidade2 = np.array([35.17,36.23,35.22,34.33,35.78,36.31,36.03,36.23,36.35,35.25])
temp_cidade3 = np.array([22.17,23.25,24.22,22.31,23.18,23.31,24.11,23.53,24.38,21.25])
medias = [np.mean(temp_cidade1), np.mean(temp_cidade2), np.mean(temp_cidade3)] # Valores para o gráfico
nomes = ['Cidade Um', 'Cidade Dois', 'Cidade Três'] # Nomes para o gráfico
import matplotlib.pyplot as plt
fig, ax = plt.subplots() # Retorna a figura do gráfico e o objeto de elementos gráficos (axes)
ax.bar([0,1,2], medias, align='center') # Criamos um gráfico passando a posição dos elementos
ax.set_xticks([0,1,2]) # Indica a posição de cada rótulo no eixo X
ax.set_xticklabels(nomes) # Nomes das cidades
ax.set_title('Média das temperaturas') # Título do gráfico
ax.yaxis.grid(True) # Se é para mostrar a grade dos valores Y
plt.show() # Gera o gráfico
fig, ax = plt.subplots()
ax.plot(temp_cidade1)
ax.set_title('Temperaturas da Cidade 1') # Título do gráfico
ax.yaxis.grid(True)
plt.show()
fig = plt.figure(figsize=(20, 5)) # Figure width and height in inches
grade = fig.add_gridspec(1, 3) # Create a grid with 1 row and 3 columns (it could have several rows as well)
ax1 = fig.add_subplot(grade[0, 0]) # First row, first column
ax2 = fig.add_subplot(grade[0, 1]) # First row, second column
ax3 = fig.add_subplot(grade[0, 2]) # First row, third column
ax1.plot(temp_cidade1)
ax1.set_title('Temperaturas da Cidade 1') # Title of chart 1
ax1.yaxis.grid(True)
ax2.plot(temp_cidade2)
ax2.set_title('Temperaturas da Cidade 2') # Title of chart 2
ax2.yaxis.grid(True)
ax3.plot(temp_cidade3)
ax3.set_title('Temperaturas da Cidade 3') # Title of chart 3
ax3.yaxis.grid(True)
plt.show()
fig, ax = plt.subplots()
ax.plot(temp_cidade1)
ax.plot(temp_cidade2)
ax.plot(temp_cidade3)
ax.set_title('Temperaturas das Cidades 1,2 e 3') # Chart title
ax.yaxis.grid(True)
plt.show()
fig, ax = plt.subplots()
ax.plot(temp_cidade1, marker='^') # Triangle markers
ax.plot(temp_cidade2, marker='o') # Circle markers
ax.plot(temp_cidade3, marker='.') # Point markers
ax.set_title('Temperaturas das Cidades 1,2 e 3') # Chart title
ax.yaxis.grid(True)
plt.show()
fig, ax = plt.subplots()
ax.plot(temp_cidade1, color="red",markerfacecolor='pink', marker='^', linewidth=4, markersize=12, label='Cidade1')
ax.plot(temp_cidade2, color="skyblue",markerfacecolor='blue', marker='o', linewidth=4, markersize=12, label='Cidade2')
ax.plot(temp_cidade3, color="green", linewidth=4, linestyle='dashed', label='Cidade3')
ax.set_title('Temperaturas das Cidades 1,2 e 3') # Chart title
ax.yaxis.grid(True)
plt.legend()
plt.show()
import pandas as pd
dolar = pd.read_csv('../datasets/dolar.csv')
desemprego = pd.read_csv('../datasets/desemprego.csv')
dolar.head()
dolar.describe()
dolar.loc[dolar.Dolar > 1000, ['Dolar']] = dolar['Dolar'] / 1000
desemprego.head()
fig = plt.figure(figsize=(20, 5)) # Figure width and height in inches
grade = fig.add_gridspec(1, 2) # Create a grid with 1 row and 2 columns (it could have several rows as well)
ax1 = fig.add_subplot(grade[0, 0]) # First row, first column
ax2 = fig.add_subplot(grade[0, 1]) # First row, second column
ax1.plot(dolar['Periodo'],dolar['Dolar']) # Plot the dollar value by period
ax1.set_title('Evolução da cotação do Dólar')
ax1.yaxis.grid(True)
ax2.plot(desemprego['Periodo'],desemprego['Desemprego'])
ax2.set_title('Evolução da taxa de desemprego') # Evolution of the unemployment rate by period
ax2.yaxis.grid(True)
fig, ax = plt.subplots()
ax.scatter(dolar['Dolar'],desemprego['Desemprego'])
ax.set_xlabel("Valor do dólar")
ax.set_ylabel("Taxa de desemprego")
plt.show()
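# A quick numeric check of the relationship shown in the scatter plot above
# (this assumes both frames cover the same periods in the same row order).
correlacao = dolar['Dolar'].corr(desemprego['Desemprego'])
print('Pearson correlation between dollar and unemployment: {:.2f}'.format(correlacao))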
df_vendas = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))
df_vendas.head()
list(df_vendas.columns)
fig, ax = plt.subplots()
totais = df_vendas.sum()
ax.pie(totais, labels=list(df_vendas.columns),autopct='%1.1f%%')
ax.set_title('Vendas do período')
plt.show()
listexplode = [0]*len(totais) # Create a list of zeros: one for each pie slice
imax = totais.idxmax() # Get the label of the product with the highest sales
ix = list(df_vendas.columns).index(imax) # Now convert that label into a position in the list
listexplode[ix]=0.1 # Change the "explode" setting of the slice with the highest value
fig, ax = plt.subplots()
ax.pie(totais, labels=list(df_vendas.columns),autopct='%1.1f%%', explode=listexplode)
ax.set_title('Vendas do período')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's compute the average temperature of each city and use it to generate a chart
Step2: Now let's create a bar chart using the pyplot module. First, we'll show a very simple chart
Step3: Generating a line chart can help us understand how the data evolves over time. Let's generate line charts with the temperatures of the three cities.
Step4: It is very common to compare variations in data, and we can do that by drawing the charts side by side. They can be charts of the same type or of different types, and they can be laid out in several rows and columns. To do this, we build a Figure instance separately, and each Axes instance as well
Step5: Another interesting way to compare data series is to create a chart with multiple series. Let's see how to do that
Step6: Here, lines of different colors were used, but we can change the legend and the shape of the charts
Step7: But we can make them stand out more
Step8: The properties below can be used to differentiate the lines
Step9: Let's analyze this dataframe
Step10: Something is wrong! Some values are above 1,000. It must be an error in the dataset. Let's fix that
Step11: We can build line charts for each of them using plot, but in fact any type of chart can be generated from dataframes.
Step12: Well... One of the big advantages of visualization, even a simple one, is that we can see a positive correlation between the dollar exchange rate and unemployment. But be careful not to take this as a cause-and-effect relationship! There are other factors that influence both! To illustrate this, and to show how to create scatter plots, let's plot a chart with the dollar on the X axis and unemployment on the Y axis
Step13: We can see that there is indeed some apparent correlation, but at some point, after the value of R$ 3.00, the unemployment rate jumped. This shows that explanatory variables are missing from the model.
Step14: Well, let's imagine that each column represents the sales of one of the products and each row is a day. Let's generate a pie chart with the sales.
Step15: And we can "explode" one or more slices. For example, let's pull out the largest slice, the one for product "C"
|
1,940
|
<ASSISTANT_TASK:>
Python Code:
import ipyvolume
import ipyvolume as ipv
import vaex
ds = vaex.example()
N = 10000
ipv.figure()
quiver = ipv.quiver(ds.data.x[:N], ds.data.y[:N], ds.data.z[:N],
ds.data.vx[:N], ds.data.vy[:N], ds.data.vz[:N],
size=1, size_selected=5, color_selected="grey")
ipv.xyzlim(-30, 30)
ipv.show()
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
from bokeh.models import CustomJS, ColumnDataSource
import ipyvolume.bokeh
output_notebook()
data_source = ColumnDataSource(data=dict(x=ds.data.Lz[:N], y=ds.data.E[:N]))
p = figure(title="E Lz space", tools='lasso_select', width=500, height=500)
r = p.circle('x', 'y', source=data_source, color="navy", alpha=0.2)
ipyvolume.bokeh.link_data_source_selection_to_widget(data_source, quiver, 'selected')
show(p)
# put them next to each other
import ipywidgets
out = ipywidgets.Output()
with out:
show(p)
ipywidgets.HBox([out, ipv.gcc()])
from bokeh.resources import CDN
from bokeh.embed import components
script, div = components((p))
template_options = dict(extra_script_head=script + CDN.render_js() + CDN.render_css(),
body_pre="<h2>Do selections in 2d (bokeh)<h2>" + div + "<h2>And see the selection in ipyvolume<h2>")
ipyvolume.embed.embed_html("tmp/bokeh.html",
[ipv.gcc(), ipyvolume.bokeh.wmh], all_states=True,
template_options=template_options)
# uncomment the next line to open the html file
# !open tmp/bokeh.html
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load some data from vaex, but only use the first 10,000 samples, for Bokeh performance reasons.
Step2: We make a quiver plot using ipyvolume's matplotlib-style API.
Step3: Bokeh scatter part
Step4: Now try doing a selection and see how the above 3d quiver plot reflects this selection.
Step5: Embedding in HTML
|
1,941
|
<ASSISTANT_TASK:>
Python Code:
def get_words(url):
import requests
words = requests.get(url).content.decode('latin-1')
word_list = words.split('\n')
index = 0
while index < len(word_list):
word = word_list[index]
if ';' in word or not word:
word_list.pop(index)
else:
index+=1
return word_list
#Get lists of positive and negative words
p_url = 'http://ptrckprry.com/course/ssd/data/positive-words.txt'
n_url = 'http://ptrckprry.com/course/ssd/data/negative-words.txt'
positive_words = get_words(p_url)
negative_words = get_words(n_url)
with open('data/community.txt','r') as f:
community = f.read()
with open('data/le_monde.txt','r') as f:
le_monde = f.read()
from nltk import word_tokenize
cpos = cneg = lpos = lneg = 0
for word in word_tokenize(community):
if word in positive_words:
cpos+=1
if word in negative_words:
cneg+=1
for word in word_tokenize(le_monde):
if word in positive_words:
lpos+=1
if word in negative_words:
lneg+=1
print("community {0:1.2f}%\t {1:1.2f}%\t {2:1.2f}%".format(cpos/len(word_tokenize(community))*100,
cneg/len(word_tokenize(community))*100,
(cpos-cneg)/len(word_tokenize(community))*100))
print("le monde {0:1.2f}%\t {1:1.2f}%\t {2:1.2f}%".format(lpos/len(word_tokenize(le_monde))*100,
lneg/len(word_tokenize(le_monde))*100,
(lpos-lneg)/len(word_tokenize(le_monde))*100))
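# A small helper (a sketch, not part of the original analysis) that wraps the
# proportion-based score above so other texts can be scored the same way.
def proportion_sentiment(text, positive_words=positive_words, negative_words=negative_words):
    words = word_tokenize(text)
    pos = sum(1 for word in words if word in positive_words)
    neg = sum(1 for word in words if word in negative_words)
    total = len(words)
    return pos / total, neg / total, (pos - neg) / total

print(proportion_sentiment(community))
print(proportion_sentiment(le_monde))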
nrc = "data/NRC-emotion-lexicon-wordlevel-alphabetized-v0.92.txt"
count=0
emotion_dict=dict()
with open(nrc,'r') as f:
all_lines = list()
for line in f:
if count < 46:
count+=1
continue
line = line.strip().split('\t')
if int(line[2]) == 1:
if emotion_dict.get(line[0]):
emotion_dict[line[0]].append(line[1])
else:
emotion_dict[line[0]] = [line[1]]
def get_nrc_data():
nrc = "data/NRC-emotion-lexicon-wordlevel-alphabetized-v0.92.txt"
count=0
emotion_dict=dict()
with open(nrc,'r') as f:
all_lines = list()
for line in f:
if count < 46:
count+=1
continue
line = line.strip().split('\t')
if int(line[2]) == 1:
if emotion_dict.get(line[0]):
emotion_dict[line[0]].append(line[1])
else:
emotion_dict[line[0]] = [line[1]]
return emotion_dict
emotion_dict = get_nrc_data()
emotion_dict['abandoned']
CONSUMER_KEY = ""
CONSUMER_SECRET = ""
TOKEN = ""
TOKEN_SECRET = ""
with open('yelp_keys.txt','r') as f:
count = 0
for line in f:
if count == 0:
CONSUMER_KEY = line.strip()
if count == 1:
CONSUMER_SECRET = line.strip()
if count == 2:
TOKEN = line.strip()
if count == 3:
TOKEN_SECRET = line.strip()
count+=1
#We'll use the get_lat_lng function we wrote way back in week 3
def get_lat_lng(address):
url = 'https://maps.googleapis.com/maps/api/geocode/json?address='
url += address
import requests
response = requests.get(url)
if not (response.status_code == 200):
return None
data = response.json()
if not( data['status'] == 'OK'):
return None
main_result = data['results'][0]
geometry = main_result['geometry']
latitude = geometry['location']['lat']
longitude = geometry['location']['lng']
return latitude,longitude
lat,long = get_lat_lng("Columbia University")
#Now set up our search parameters
def set_search_parameters(lat,long,radius):
#See the Yelp API for more details
params = {}
params["term"] = "restaurant"
params["ll"] = "{},{}".format(str(lat),str(long))
params["radius_filter"] = str(radius) #The distance around our point in metres
params["limit"] = "10" #Limit ourselves to 10 results
return params
set_search_parameters(lat,long,200)
def get_results(params):
import rauth
consumer_key = CONSUMER_KEY
consumer_secret = CONSUMER_SECRET
token = TOKEN
token_secret = TOKEN_SECRET
session = rauth.OAuth1Session(
consumer_key = consumer_key
,consumer_secret = consumer_secret
,access_token = token
,access_token_secret = token_secret)
request = session.get("http://api.yelp.com/v2/search",params=params)
#Transforms the JSON API response into a Python dictionary
data = request.json()
session.close()
return data
#Get the results
response = get_results(set_search_parameters(get_lat_lng("Community Food and Juice")[0],get_lat_lng("Community Food and Juice")[1],200))
all_snippets = list()
for business in response['businesses']:
name = business['name']
snippet = business['snippet_text']
id = business['id']
all_snippets.append((id,name,snippet))
all_snippets
def get_snippets(response):
all_snippets = list()
for business in response['businesses']:
name = business['name']
snippet = business['snippet_text']
id = business['id']
all_snippets.append((id,name,snippet))
return all_snippets
def emotion_analyzer(text,emotion_dict=emotion_dict):
#Set up the result dictionary
emotions = {x for y in emotion_dict.values() for x in y}
emotion_count = dict()
for emotion in emotions:
emotion_count[emotion] = 0
#Analyze the text and normalize by total number of words
total_words = len(text.split())
for word in text.split():
if emotion_dict.get(word):
for emotion in emotion_dict.get(word):
emotion_count[emotion] += 1/len(text.split())
return emotion_count
print("%-12s %1s\t%1s %1s %1s %1s %1s %1s %1s %1s"%(
"restaurant","fear","trust","negative","positive","joy","disgust","anticip",
"sadness","surprise"))
for snippet in all_snippets:
text = snippet[2]
result = emotion_analyzer(text)
print("%-12s %1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f"%(
snippet[1][0:10],result['fear'],result['trust'],
result['negative'],result['positive'],result['joy'],result['disgust'],
result['anticipation'],result['sadness'],result['surprise']))
def comparative_emotion_analyzer(text_tuples):
print("%-20s %1s\t%1s %1s %1s %1s %1s %1s %1s %1s"%(
"restaurant","fear","trust","negative","positive","joy","disgust","anticip",
"sadness","surprise"))
for text_tuple in text_tuples:
text = text_tuple[2]
result = emotion_analyzer(text)
print("%-20s %1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f"%(
text_tuple[1][0:20],result['fear'],result['trust'],
result['negative'],result['positive'],result['joy'],result['disgust'],
result['anticipation'],result['sadness'],result['surprise']))
#And test it
comparative_emotion_analyzer(all_snippets)
def analyze_nearby_restaurants(address,radius):
lat,long = get_lat_lng(address)
params = set_search_parameters(lat,long,radius)
response = get_results(params)
snippets = get_snippets(response)
comparative_emotion_analyzer(snippets)
#And test it
analyze_nearby_restaurants("Community Food and Juice",200)
#Test it on some other place
analyze_nearby_restaurants("221 Baker Street",200)
all_snippets
text=''
for snippet in all_snippets:
text+=snippet[2]
text
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
%matplotlib inline
wordcloud = WordCloud(stopwords=STOPWORDS,background_color='white',width=3000,height=3000).generate(text)
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
import nltk
from nltk.corpus import PlaintextCorpusReader
community_root = "data/community"
le_monde_root = "data/le_monde"
community_files = "community.*"
le_monde_files = "le_monde.*"
heights_root = "data/heights"
heights_files = "heights.*"
amigos_root = "data/amigos"
amigos_files = "amigos.*"
community_data = PlaintextCorpusReader(community_root,community_files)
le_monde_data = PlaintextCorpusReader(le_monde_root,le_monde_files)
heights_data = PlaintextCorpusReader(heights_root,heights_files)
amigos_data = PlaintextCorpusReader(amigos_root,amigos_files)
amigos_data.fileids()
amigos_data.raw()
def comparative_emotion_analyzer(text_tuples,name_location=1,text_location=2):
print("%-20s %1s\t%1s %1s %1s %1s %1s %1s %1s %1s"%(
"restaurant","fear","trust","negative","positive","joy","disgust","anticip",
"sadness","surprise"))
for text_tuple in text_tuples:
text = text_tuple[text_location]
result = emotion_analyzer(text)
print("%-20s %1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f\t%1.2f"%(
text_tuple[name_location][0:20],result['fear'],result['trust'],
result['negative'],result['positive'],result['joy'],result['disgust'],
result['anticipation'],result['sadness'],result['surprise']))
#And test it
comparative_emotion_analyzer(all_snippets)
restaurant_data = [('community',community_data.raw()),('le monde',le_monde_data.raw())
,('heights',heights_data.raw()), ('amigos',amigos_data.raw())]
comparative_emotion_analyzer(restaurant_data,0,1)
#Construct tokens (words/sentences) from the text
text = le_monde_data.raw()
import nltk
from nltk import sent_tokenize,word_tokenize
sentences = nltk.Text(sent_tokenize(text))
print(len(sentences))
words = nltk.Text(word_tokenize(text))
print(len(words))
num_chars=len(text)
num_words=len(word_tokenize(text))
num_sentences=len(sent_tokenize(text))
vocab = {x.lower() for x in word_tokenize(text)}
print(num_chars,int(num_chars/num_words),int(num_words/num_sentences),(len(vocab)/num_words))
def get_complexity(text):
num_chars=len(text)
num_words=len(word_tokenize(text))
num_sentences=len(sent_tokenize(text))
vocab = {x.lower() for x in word_tokenize(text)}
return len(vocab),int(num_chars/num_words),int(num_words/num_sentences),len(vocab)/num_words
get_complexity(le_monde_data.raw())
for text in restaurant_data:
(vocab,word_size,sent_size,vocab_to_text) = get_complexity(text[1])
print("{0:15s}\t{1:1.2f}\t{2:1.2f}\t{3:1.2f}\t{4:1.2f}".format(text[0],vocab,word_size,sent_size,vocab_to_text))
texts = restaurant_data
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
%matplotlib inline
#Remove unwanted words
#As we look at the cloud, we can get rid of words that don't make sense by adding them to this variable
DELETE_WORDS = []
def remove_words(text_string,DELETE_WORDS=DELETE_WORDS):
for word in DELETE_WORDS:
text_string = text_string.replace(word,' ')
return text_string
#Remove short words
MIN_LENGTH = 0
def remove_short_words(text_string,min_length = MIN_LENGTH):
word_list = text_string.split()
for word in word_list:
if len(word) < min_length:
text_string = text_string.replace(' '+word+' ',' ',1)
return text_string
#Set up side by side clouds
COL_NUM = 2
ROW_NUM = 2
fig, axes = plt.subplots(ROW_NUM, COL_NUM, figsize=(12,12))
for i in range(0,len(texts)):
text_string = remove_words(texts[i][1])
text_string = remove_short_words(text_string)
    # ax = axes[i%2]  # Use this indexing if ROW_NUM == 1 (axes is then a 1-D array)
    ax = axes[i//2, i%2]  # Use this if ROW_NUM >= 2
ax.set_title(texts[i][0])
wordcloud = WordCloud(stopwords=STOPWORDS,background_color='white',width=1200,height=1000,max_words=20).generate(text_string)
ax.imshow(wordcloud)
ax.axis('off')
plt.show()
from nltk.book import *
inaugural.fileids()
inaugural.raw('1861-Lincoln.txt')
texts = [('trump',inaugural.raw('2017-Trump.txt')),
('obama',inaugural.raw('2009-Obama.txt')+inaugural.raw('2013-Obama.txt')),
('jackson',inaugural.raw('1829-Jackson.txt')+inaugural.raw('1833-Jackson.txt')),
('washington',inaugural.raw('1789-Washington.txt')+inaugural.raw('1793-Washington.txt'))]
for text in texts:
(vocab,word_size,sent_size,vocab_to_text) = get_complexity(text[1])
print("{0:15s}\t{1:1.2f}\t{2:1.2f}\t{3:1.2f}\t{4:1.2f}".format(text[0],vocab,word_size,sent_size,vocab_to_text))
from nltk.corpus import inaugural
sentence_lengths = list()
for fileid in inaugural.fileids():
sentence_lengths.append(get_complexity(' '.join(inaugural.words(fileid)))[2])
plt.plot(sentence_lengths)
text4.dispersion_plot(["government", "citizen", "freedom", "duties", "America",'independence','God','patriotism'])
from nltk.stem.porter import PorterStemmer
p_stemmer = PorterStemmer()
text = inaugural.raw()
striptext = text.replace('\n\n', ' ')
striptext = striptext.replace('\n', ' ')
sentences = sent_tokenize(striptext)
words = word_tokenize(striptext)
text = nltk.Text([p_stemmer.stem(i).lower() for i in words])
text.dispersion_plot(["govern", "citizen", "free", "america",'independ','god','patriot'])
!pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
headers = ['pos','neg','neu','compound']
texts = restaurant_data
analyzer = SentimentIntensityAnalyzer()
for i in range(len(texts)):
name = texts[i][0]
sentences = sent_tokenize(texts[i][1])
pos=compound=neu=neg=0
for sentence in sentences:
vs = analyzer.polarity_scores(sentence)
pos+=vs['pos']/(len(sentences))
compound+=vs['compound']/(len(sentences))
neu+=vs['neu']/(len(sentences))
neg+=vs['neg']/(len(sentences))
print(name,pos,neg,neu,compound)
def vader_comparison(texts):
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
headers = ['pos','neg','neu','compound']
print("Name\t",' pos\t','neg\t','neu\t','compound')
analyzer = SentimentIntensityAnalyzer()
for i in range(len(texts)):
name = texts[i][0]
sentences = sent_tokenize(texts[i][1])
pos=compound=neu=neg=0
for sentence in sentences:
vs = analyzer.polarity_scores(sentence)
pos+=vs['pos']/(len(sentences))
compound+=vs['compound']/(len(sentences))
neu+=vs['neu']/(len(sentences))
neg+=vs['neg']/(len(sentences))
print('%-10s'%name,'%1.2f\t'%pos,'%1.2f\t'%neg,'%1.2f\t'%neu,'%1.2f\t'%compound)
vader_comparison(restaurant_data)
en={}
try:
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
sentences = sent_detector.tokenize(community_data.raw().strip())
for sentence in sentences:
tokenized = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokenized)
chunked = nltk.ne_chunk(tagged)
for tree in chunked:
if hasattr(tree, 'label'):
ne = ' '.join(c[0] for c in tree.leaves())
en[ne] = [tree.label(), ' '.join(c[1] for c in tree.leaves())]
except Exception as e:
print(str(e))
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(en)
meaningful_sents = list()
i=0
for sentence in sentences:
if 'service' in sentence:
i+=1
meaningful_sents.append((i,sentence))
vader_comparison(meaningful_sents)
def get_affect(text,word,lower=False):
import nltk
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
sentences = sent_detector.tokenize(text.strip())
sentence_count = 0
running_total = 0
for sentence in sentences:
if lower: sentence = sentence.lower()
if word in sentence:
vs = analyzer.polarity_scores(sentence)
running_total += vs['compound']
sentence_count += 1
if sentence_count == 0: return 0
return running_total/sentence_count
get_affect(community_data.raw(),'service',True)
nltk.Text(community_data.words()).concordance('service',100)
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.probability import FreqDist
from nltk.corpus import stopwords
from collections import OrderedDict
import pprint
text = community_data.raw()
summary_sentences = []
candidate_sentences = {}
candidate_sentence_counts = {}
striptext = text.replace('\n\n', ' ')
striptext = striptext.replace('\n', ' ')
words = word_tokenize(striptext)
lowercase_words = [word.lower() for word in words
if word not in stopwords.words() and word.isalpha()]
word_frequencies = FreqDist(lowercase_words)
most_frequent_words = FreqDist(lowercase_words).most_common(20)
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(most_frequent_words)
sentences = sent_tokenize(striptext)
for sentence in sentences:
candidate_sentences[sentence] = sentence.lower()
candidate_sentences
for long, short in candidate_sentences.items():
count = 0
for freq_word, frequency_score in most_frequent_words:
if freq_word in short:
count += frequency_score
candidate_sentence_counts[long] = count
sorted_sentences = OrderedDict(sorted(
candidate_sentence_counts.items(),
    key = lambda x: x[1],
reverse = True)[:4])
pp.pprint(sorted_sentences)
def build_naive_summary(text):
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.probability import FreqDist
from nltk.corpus import stopwords
from collections import OrderedDict
summary_sentences = []
candidate_sentences = {}
candidate_sentence_counts = {}
striptext = text.replace('\n\n', ' ')
striptext = striptext.replace('\n', ' ')
words = word_tokenize(striptext)
lowercase_words = [word.lower() for word in words
if word not in stopwords.words() and word.isalpha()]
word_frequencies = FreqDist(lowercase_words)
most_frequent_words = FreqDist(lowercase_words).most_common(20)
sentences = sent_tokenize(striptext)
for sentence in sentences:
candidate_sentences[sentence] = sentence.lower()
for long, short in candidate_sentences.items():
count = 0
for freq_word, frequency_score in most_frequent_words:
if freq_word in short:
count += frequency_score
candidate_sentence_counts[long] = count
sorted_sentences = OrderedDict(sorted(
candidate_sentence_counts.items(),
key = lambda x: x[1],
reverse = True)[:4])
return sorted_sentences
summary = '\n'.join(build_naive_summary(community_data.raw()))
print(summary)
summary = '\n'.join(build_naive_summary(le_monde_data.raw()))
print(summary)
build_naive_summary(inaugural.raw('1789-Washington.txt'))
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
%matplotlib inline
import nltk
from nltk.corpus import PlaintextCorpusReader
from nltk import sent_tokenize,word_tokenize
from nltk.book import *
import nltk
from nltk.corpus import PlaintextCorpusReader
community_root = "data/community"
le_monde_root = "data/le_monde"
community_files = "community.*"
le_monde_files = "le_monde.*"
heights_root = "data/heights"
heights_files = "heights.*"
amigos_root = "data/amigos"
amigos_files = "amigos.*"
community_data = PlaintextCorpusReader(community_root,community_files)
le_monde_data = PlaintextCorpusReader(le_monde_root,le_monde_files)
heights_data = PlaintextCorpusReader(heights_root,heights_files)
amigos_data = PlaintextCorpusReader(amigos_root,amigos_files)
type(community_data)
text = community_data.raw()
summary_sentences = []
candidate_sentences = {}
candidate_sentence_counts = {}
striptext = text.replace('\n\n', ' ')
striptext = striptext.replace('\n', ' ')
import gensim.summarization
#!pip install gensim
import gensim.summarization
summary = gensim.summarization.summarize(striptext, word_count=100)
print(summary)
print(gensim.summarization.keywords(striptext,words=10))
summary = '\n'.join(build_naive_summary(community_data.raw()))
print(summary)
text = le_monde_data.raw()
summary_sentences = []
candidate_sentences = {}
candidate_sentence_counts = {}
striptext = text.replace('\n\n', ' ')
striptext = striptext.replace('\n', ' ')
summary = gensim.summarization.summarize(striptext, word_count=100)
print(summary)
#print(gensim.summarization.keywords(striptext,words=10))
from gensim import corpora
from gensim.models.ldamodel import LdaModel
from gensim.parsing.preprocessing import STOPWORDS
import pprint
text = PlaintextCorpusReader("data/","Nikon_coolpix_4300.txt").raw()
striptext = text.replace('\n\n', ' ')
striptext = striptext.replace('\n', ' ')
sentences = sent_tokenize(striptext)
#words = word_tokenize(striptext)
#tokenize each sentence into word tokens
texts = [[word for word in sentence.lower().split()
if word not in STOPWORDS and word.isalnum()]
for sentence in sentences]
len(texts)
print(text)
text
dictionary = corpora.Dictionary(texts) #(word_id,frequency) pairs
corpus = [dictionary.doc2bow(text) for text in texts] #(word_id,freq) pairs by sentence
#print(dictionary.token2id)
#print(dictionary.keys())
#print(corpus[9])
#print(texts[9])
#print(dictionary[73])
#dictionary[4]
#Set parameters
num_topics = 5 #The number of topics that should be generated
passes = 10
lda = LdaModel(corpus,
id2word=dictionary,
num_topics=num_topics,
passes=10)
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(lda.print_topics(num_words=3))
from operator import itemgetter
lda.get_document_topics(corpus[0],minimum_probability=0.05,per_word_topics=False)
sorted(lda.get_document_topics(corpus[0],minimum_probability=0,per_word_topics=False),key=itemgetter(1),reverse=True)
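# A short sketch: label the first few review sentences with their single most
# probable topic, reusing get_document_topics and itemgetter from above.
for sent_id, bow in enumerate(corpus[:5]):
    topic_id, proba = max(lda.get_document_topics(bow, minimum_probability=0), key=itemgetter(1))
    print('Sentence {} -> topic {} ({:.2f})'.format(sent_id, topic_id, proba))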
def draw_wordcloud(lda,topicnum,min_size=0,STOPWORDS=[]):
word_list=[]
prob_total = 0
for word,prob in lda.show_topic(topicnum,topn=50):
prob_total +=prob
for word,prob in lda.show_topic(topicnum,topn=50):
if word in STOPWORDS or len(word) < min_size:
continue
freq = int(prob/prob_total*1000)
alist=[word]
word_list.extend(alist*freq)
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
%matplotlib inline
text = ' '.join(word_list)
wordcloud = WordCloud(stopwords=STOPWORDS,background_color='white',width=3000,height=3000).generate(' '.join(word_list))
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
draw_wordcloud(lda,2)
REMOVE_WORDS = {'shall','generally','spirit','country','people','nation','nations','great','better'}
#Create a word dictionary (id, word)
texts = [[word for word in sentence.lower().split()
if word not in STOPWORDS and word not in REMOVE_WORDS and word.isalnum()]
for sentence in sentences]
dictionary = corpora.Dictionary(texts)
#Create a corpus of documents
text_list = list()
for fileid in inaugural.fileids():
text = inaugural.words(fileid)
doc=list()
for word in text:
if word in STOPWORDS or word in REMOVE_WORDS or not word.isalpha() or len(word) <5:
continue
doc.append(word)
text_list.append(doc)
by_address_corpus = [dictionary.doc2bow(text) for text in text_list]
lda = LdaModel(by_address_corpus,
id2word=dictionary,
num_topics=20,
passes=10)
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(lda.print_topics(num_words=10))
len(by_address_corpus)
from operator import itemgetter
sorted(lda.get_document_topics(by_address_corpus[0],minimum_probability=0,per_word_topics=False),key=itemgetter(1),reverse=True)
draw_wordcloud(lda,18)
print(lda.show_topic(12,topn=5))
print(lda.show_topic(18,topn=5))
doc_list = [community_data,le_monde_data,amigos_data,heights_data]
all_text = community_data.raw() + le_monde_data.raw() + amigos_data.raw() + heights_data.raw()
documents = [doc.raw() for doc in doc_list]
texts = [[word for word in document.lower().split()
if word not in STOPWORDS and word.isalnum()]
for document in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
from gensim.similarities.docsim import Similarity
from gensim import corpora, models, similarities
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
doc = """
Many, many years ago, I used to frequent this place for their amazing french toast.
It's been a while since then and I've been hesitant to review a place I haven't been to in 7-8 years...
but I passed by French Roast and, feeling nostalgic, decided to go back.
It was a great decision.
Their Bloody Mary is fantastic and includes bacon (which was perfectly cooked!!), olives,
cucumber, and celery. The Irish coffee is also excellent, even without the cream which is what I ordered.
Great food, great drinks, a great ambiance that is casual yet familiar like a tiny little French cafe.
I highly recommend coming here, and will be back whenever I'm in the area next.
Juan, the bartender, is great!! One of the best in any brunch spot in the city, by far.
"""
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow]
index = similarities.MatrixSimilarity(lsi[corpus])
sims = index[vec_lsi]
sims = sorted(enumerate(sims), key=lambda item: -item[1])
sims
doc = """
I went to Mexican Festival Restaurant for Cinco De Mayo because I had been there years
prior and had such a good experience. This time wasn't so good. The food was just
mediocre and it wasn't hot when it was brought to our table. They brought my friends food out
10 minutes before everyone else and it took forever to get drinks. We let it slide because the place was
packed with people and it was Cinco De Mayo. Also, the margaritas we had were slamming! Pure tequila.
But then things took a turn for the worst. As I went to get something out of my purse which was on
the back of my chair, I looked down and saw a huge water bug. I had to warn the lady next to me because
it was so close to her chair. We called the waitress over and someone came with a broom and a dustpan and
swept it away like it was an everyday experience. No one seemed phased.
Even though our waitress was very nice, I do not think we will be returning to Mexican Festival again.
It seems the restaurant is a shadow of its former self.
"""
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow]
index = similarities.MatrixSimilarity(lsi[corpus])
sims = index[vec_lsi]
sims = sorted(enumerate(sims), key=lambda item: -item[1])
sims
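# The same query steps, wrapped into a small helper (a sketch) so that any new review
# text can be ranked against the four restaurant corpora without repeating the boilerplate.
def rank_similarities(query_text, lsi=lsi, dictionary=dictionary, corpus=corpus):
    query_bow = dictionary.doc2bow(query_text.lower().split())
    query_lsi = lsi[query_bow]
    sim_index = similarities.MatrixSimilarity(lsi[corpus])
    return sorted(enumerate(sim_index[query_lsi]), key=lambda item: -item[1])

rank_similarities(doc)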
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h4>Read the text being analyzed and count the proportion of positive and negative words in the text</h4>
Step2: <h4>Compute sentiment by looking at the proportion of positive and negative words in the text</h4>
Step3: <h2>Simple sentiment analysis using NRC data</h2>
Step4: <h4>Functionalize this</h4>
Step5: <h2>Analyzing yelp reviews</h2>
Step6: <h2>I've saved my keys in a file and will use those. Don't run the next cell!</h2>
Step7: <h4>We need to do a few things</h4>
Step8: <h4>Write the function that queries yelp. We'll use rauth library to handle authentication</h4>
Step9: <h4>Extract snippets</h4>
Step10: <h4>Functionalize this</h4>
Step11: <h2>A function that analyzes emotions</h2>
Step12: <h4>Now we can analyze the emotional content of the review snippets</h4>
Step13: <h4>Let's functionalize this</h4>
Step14: <h4>And let's functionalize the yelp stuff as well</h4>
Step15: <h2>Simple analysis
Step16: <h2>Let's do a detailed comparison of local restaurants</h2>
Step17: <h4>We need to modify comparative_emotion_analyzer to tell it where the restaurant name and the text are in the tuple</h4>
Step18: <h2>Simple Analysis
Step19: <h4>Functionalize this</h4>
Step20: <h4>We could do a word cloud comparison</h4>
Step21: <h3>Comparing complexity of restaurant reviews won't get us anything useful</h3>
Step22: <h1>Often, a comparative analysis helps us understand text better</h1>
Step23: <h4>Let's look at the complexity of the speeches by four presidents</h4>
Step24: <h2>Analysis over time</h2>
Step25: <h1>dispersion plots</h1>
Step26: <h4>We may want to use word stems rather than the part-of-speech form</h4>
Step27: <h2>Weighted word analysis using Vader</h2>
Step28: <h4>And functionalize this as well</h4>
Step29: <h2>Named Entities</h2>
Step30: <h4>Assuming we've done a good job of identifying named entities, we can get an affect score on entities</h4>
Step31: <h4>We could also develop an affect calculator for common terms in our domain (e.g., food items)</h4>
Step32: <h4>The nltk function concordance returns text fragments around a word</h4>
Step33: <h2>Text summarization</h2>
Step34: <h4>Then prep the text. Get rid of end-of-line chars</h4>
Step35: <h4>Construct a list of words after getting rid of unimportant ones and numbers</h4>
Step36: <h4>Construct word frequencies and choose the most common n (20)</h4>
Step37: <h4>lowercase the sentences</h4>
Step38: <h4>Packaging all this into a function</h4>
Step39: <h4>We can summarize George Washington's first inaugural speech</h4>
Step40: <h3>gensim
Step41: <h1>Topic modeling</h1>
Step42: <h4>Prepare the text</h4>
Step43: <h4>Create a (word,frequency) dictionary for each word in the text</h4>
Step44: <h4>Do the LDA</h4>
Step45: <h4>See results</h4>
Step46: <h2>Matching topics to documents</h2>
Step47: <h3>Making sense of the topics</h3>
Step48: <h4>Roughly,</h4>
Step49: <h2>Create the model</h2>
Step50: <h2>We can now compare presidential addresses by topic</h2>
Step53: <h1>Similarity</h1>
|
1,942
|
<ASSISTANT_TASK:>
Python Code:
# Load image
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
# Create kernel
kernel = np.array([[0, -1, 0],
[-1, 5,-1],
[0, -1, 0]])
# Sharpen image
image_sharp = cv2.filter2D(image, -1, kernel)
# Show image
plt.imshow(image_sharp, cmap='gray'), plt.axis("off")
plt.show()
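# The same filter2D call accepts any convolution kernel; for example, a 5x5 averaging
# (blurring) kernel instead of the sharpening kernel used above.
kernel_blur = np.ones((5, 5)) / 25.0
image_blur = cv2.filter2D(image, -1, kernel_blur)
plt.imshow(image_blur, cmap='gray'), plt.axis("off")
plt.show()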
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Image As Greyscale
Step2: Sharpen Image
Step3: View Image
|
1,943
|
<ASSISTANT_TASK:>
Python Code:
import subprocess
completed = subprocess.run(['ls', '-l'])
completed
completed = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
completed
import subprocess
try:
completed = subprocess.run(
'echo to stdout; echo to stderr 1>&2; exit 1',
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
except subprocess.CalledProcessError as err:
print('ERROR:', err)
else:
print('returncode:', completed.returncode)
print('stdout is {!r}'.format(completed.stdout))
print('stderr is {!r}'.format(completed.stderr))
import subprocess
completed = subprocess.run('echo $HOME', shell=True, stdout=subprocess.PIPE)
completed
import subprocess
try:
completed = subprocess.run('echo $HOME', stdout=subprocess.PIPE)
except:
print("Get Error if don't execute on shell")
import subprocess
print("read:")
proc = subprocess.Popen(['echo', '"to stdout"'],
stdout = subprocess.PIPE)
stdout_value = proc.communicate()[0].decode("utf-8")
print('stdout', repr(stdout_value))
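# A minimal sketch of two-way interaction: write to the child's stdin and read its
# stdout in one step with communicate(). (Assumes a POSIX 'cat' binary is available.)
print("write and read:")
proc = subprocess.Popen(
    ['cat', '-'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
stdout_value = proc.communicate('through stdin to stdout'.encode('utf-8'))[0]
print('pass through:', repr(stdout_value.decode('utf-8')))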
import subprocess
cat = subprocess.Popen(
['cat', 'index.rst'],
stdout=subprocess.PIPE,
)
grep = subprocess.Popen(
['grep', '.. literalinclude::'],
stdin=cat.stdout,
stdout=subprocess.PIPE,
)
cut = subprocess.Popen(
['cut', '-f', '3', '-d:'],
stdin=grep.stdout,
stdout=subprocess.PIPE,
)
end_of_pipe = cut.stdout
print('Included files:')
for line in end_of_pipe:
print(line.decode('utf-8').strip())
# %load signal_child.py
import os
import signal
import time
import sys
pid = os.getpid()
received = False
def signal_usr1(signum, frame):
"Callback invoked when a signal is received"
global received
received = True
print('CHILD {:>6}: Received USR1'.format(pid))
sys.stdout.flush()
print('CHILD {:>6}: Setting up signal handler'.format(pid))
sys.stdout.flush()
signal.signal(signal.SIGUSR1, signal_usr1)
print('CHILD {:>6}: Pausing to wait for signal'.format(pid))
sys.stdout.flush()
time.sleep(3)
if not received:
print('CHILD {:>6}: Never received signal'.format(pid))
# %load signal_parent.py
import os
import signal
import subprocess
import time
import sys
proc = subprocess.Popen(['python3', 'signal_child.py'])
print('PARENT : Pausing before sending signal...')
sys.stdout.flush()
time.sleep(1)
print('PARENT : Signaling child')
sys.stdout.flush()
os.kill(proc.pid, signal.SIGUSR1)
!python signal_parent.py
import os
import signal
import subprocess
import tempfile
import time
import sys
script = '''#!/bin/sh
echo "Shell script in process $$"
set -x
python3 signal_child.py
'''
script_file = tempfile.NamedTemporaryFile('wt')
script_file.write(script)
script_file.flush()
proc = subprocess.Popen(['sh', script_file.name])
print('PARENT : Pausing before signaling {}...'.format(
proc.pid))
sys.stdout.flush()
time.sleep(1)
print('PARENT : Signaling child {}'.format(proc.pid))
sys.stdout.flush()
os.kill(proc.pid, signal.SIGUSR1)
time.sleep(3)
import os
import signal
import subprocess
import tempfile
import time
import sys
def show_setting_prgrp():
print('Calling os.setpgrp() from {}'.format(os.getpid()))
os.setpgrp()
    print('Process group is now {}'.format(os.getpgrp()))
sys.stdout.flush()
script = '''#!/bin/sh
echo "Shell script in process $$"
set -x
python3 signal_child.py
'''
script_file = tempfile.NamedTemporaryFile('wt')
script_file.write(script)
script_file.flush()
proc = subprocess.Popen(
['sh', script_file.name],
preexec_fn=show_setting_prgrp,
)
print('PARENT : Pausing before signaling {}...'.format(
proc.pid))
sys.stdout.flush()
time.sleep(1)
print('PARENT : Signaling process group {}'.format(
proc.pid))
sys.stdout.flush()
os.killpg(proc.pid, signal.SIGUSR1)
time.sleep(3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Capturing Output
Step2: Suppressing Output
Step3: Execute on shell
Step4: If you don't run this command through a shell, it raises an error, because the whole string 'echo $HOME' is treated as a single program name rather than being interpreted by a shell
Step5: Working with Pipes Directly
Step6: Connecting Segments of a Pipe
Step7: Interacting with Another Command
Step8: Process Groups / Session
Step9: The pid used to send the signal does not match the pid of the child of the shell script waiting for the signal, because in this example there are three separate processes interacting
|
1,944
|
<ASSISTANT_TASK:>
Python Code:
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 50
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])
# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, noise*0.1],
[0.02, 600, noise*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Perform sampling using MCMC, with a single chain
x0 = real_parameters * 1.1
mcmc = pints.MCMCController(log_posterior, 1, [x0])
mcmc.set_max_iterations(6000)
mcmc.set_log_to_screen(False)
print('Running...')
chains = mcmc.run()
print('Done!')
# Select chain 0 and discard warm-up
chain = chains[0]
chain = chain[3000:]
import pints.plot
# Plot the 1d histogram of each parameter
pints.plot.histogram([chain])
plt.show()
# Plot the 1d histogram of each parameter
fig, axes = pints.plot.histogram([chain])
# Customise the plots
parameter_names = [r'$r$', r'$k$', r'$\sigma$']
for i, ax in enumerate(axes):
# (1) Add parameter name
ax.set_xlabel(parameter_names[i])
# (2i) Add mean
ax.axvline(np.mean(chain[:, i]), color='k', label='Mean')
# (2ii) Add 95% credible interval
ax.axvline(np.percentile(chain[:, i], 2.5), color='C1', label='95% credible interval')
ax.axvline(np.percentile(chain[:, i], 97.5), color='C1')
axes[0].legend()
# (3) Update figure size
fig.set_size_inches(14, 9)
plt.show()
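# A plain-numpy summary of the same quantities drawn on the plots above:
# posterior mean and central 95% credible interval for each parameter.
for i, name in enumerate(parameter_names):
    mean = np.mean(chain[:, i])
    lower, upper = np.percentile(chain[:, i], [2.5, 97.5])
    print('{}: mean {:.4g}, 95% CI [{:.4g}, {:.4g}]'.format(name, mean, lower, upper))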
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting Pints' standard 1d histograms
Step2: Customise the plots
|
1,945
|
<ASSISTANT_TASK:>
Python Code:
import logging
import time
from contextlib import contextmanager
import os
from multiprocessing import Process
import psutil
import numpy as np
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
import gensim.downloader
from gensim import matutils, utils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, TfidfModel, LsiModel
from gensim.models.basemodel import BaseTopicModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
newsgroups = gensim.downloader.load('20-newsgroups')
categories = [
'alt.atheism',
'comp.graphics',
'rec.motorcycles',
'talk.politics.mideast',
'sci.space'
]
categories = {name: idx for idx, name in enumerate(categories)}
random_state = RandomState(42)
trainset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'train'
])
random_state.shuffle(trainset)
testset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'test'
])
random_state.shuffle(testset)
train_documents = [preprocess_string(doc['data']) for doc in trainset]
test_documents = [preprocess_string(doc['data']) for doc in testset]
dictionary = Dictionary(train_documents)
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=20000) # filter out too in/frequent tokens
tfidf = TfidfModel(dictionary=dictionary)
train_corpus = [
dictionary.doc2bow(document)
for document
in train_documents
]
test_corpus = [
dictionary.doc2bow(document)
for document
in test_documents
]
train_corpus_tfidf = list(tfidf[train_corpus])
test_corpus_tfidf = list(tfidf[test_corpus])
%%time
nmf = GensimNmf(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=1000,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
kappa=1,
)
W = nmf.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus_tfidf,
num_terms=W.shape[0],
)
if isinstance(nmf, SklearnNmf):
H = nmf.transform(dense_test_corpus.T).T
else:
H = np.zeros((nmf.num_topics, len(test_corpus_tfidf)))
for bow_id, bow in enumerate(test_corpus_tfidf):
for topic_id, word_count in nmf[bow]:
H[topic_id, bow_id] = word_count
np.linalg.norm(W.dot(H))
np.linalg.norm(dense_test_corpus)
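# Relative reconstruction error on the held-out tfidf corpus (the ratio of the
# two norms computed above); closer to 0 means a better factorization.
np.linalg.norm(W.dot(H) - dense_test_corpus) / np.linalg.norm(dense_test_corpus)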
nmf.show_topics()
CoherenceModel(
model=nmf,
corpus=test_corpus_tfidf,
coherence='u_mass'
).get_coherence()
print(testset[0]['data'])
print('=' * 100)
print("Topics: {}".format(nmf[test_corpus[0]]))
word = dictionary[0]
print("Word: {}".format(word))
print("Topics: {}".format(nmf.get_term_topics(word)))
def density(matrix):
return (matrix > 0).mean()
print("Density: {}".format(density(nmf._W)))
print("Density: {}".format(density(nmf._h)))
fixed_params = dict(
chunksize=1000,
num_topics=5,
id2word=dictionary,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
)
@contextmanager
def measure_ram(output, tick=5):
def _measure_ram(pid, output, tick=tick):
py = psutil.Process(pid)
with open(output, 'w') as outfile:
while True:
memory = py.memory_info().rss
outfile.write("{}\n".format(memory))
outfile.flush()
time.sleep(tick)
pid = os.getpid()
p = Process(target=_measure_ram, args=(pid, output, tick))
p.start()
yield
p.terminate()
def get_train_time_and_ram(func, name, tick=5):
memprof_filename = "{}.memprof".format(name)
start = time.time()
with measure_ram(memprof_filename, tick=tick):
result = func()
elapsed_time = pd.to_timedelta(time.time() - start, unit='s').round('ms')
memprof_df = pd.read_csv(memprof_filename, squeeze=True)
mean_ram = "{} MB".format(
int(memprof_df.mean() // 2 ** 20),
)
max_ram = "{} MB".format(int(memprof_df.max() // 2 ** 20))
return elapsed_time, mean_ram, max_ram, result
def get_f1(model, train_corpus, X_test, y_train, y_test):
if isinstance(model, SklearnNmf):
dense_train_corpus = matutils.corpus2dense(
train_corpus,
num_terms=model.components_.shape[1],
)
X_train = model.transform(dense_train_corpus.T)
else:
X_train = np.zeros((len(train_corpus), model.num_topics))
for bow_id, bow in enumerate(train_corpus):
for topic_id, word_count in model[bow]:
X_train[bow_id, topic_id] = word_count
log_reg = LogisticRegressionCV(multi_class='multinomial', cv=5)
log_reg.fit(X_train, y_train)
pred_labels = log_reg.predict(X_test)
return f1_score(y_test, pred_labels, average='micro')
def get_sklearn_topics(model, top_n=5):
topic_probas = model.components_.T
topic_probas = topic_probas / topic_probas.sum(axis=0)
sparsity = np.zeros(topic_probas.shape[1])
for row in topic_probas:
sparsity += (row == 0)
sparsity /= topic_probas.shape[1]
topic_probas = topic_probas[:, sparsity.argsort()[::-1]][:, :top_n]
token_indices = topic_probas.argsort(axis=0)[:-11:-1, :]
topic_probas.sort(axis=0)
topic_probas = topic_probas[:-11:-1, :]
topics = []
for topic_idx in range(topic_probas.shape[1]):
tokens = [
model.id2word[token_idx]
for token_idx
in token_indices[:, topic_idx]
]
topic = (
'{}*"{}"'.format(round(proba, 3), token)
for proba, token
in zip(topic_probas[:, topic_idx], tokens)
)
topic = " + ".join(topic)
topics.append((topic_idx, topic))
return topics
def get_metrics(model, test_corpus, train_corpus=None, y_train=None, y_test=None, dictionary=None):
if isinstance(model, SklearnNmf):
model.get_topics = lambda: model.components_
model.show_topics = lambda top_n: get_sklearn_topics(model, top_n)
model.id2word = dictionary
W = model.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus,
num_terms=W.shape[0],
)
if isinstance(model, SklearnNmf):
H = model.transform(dense_test_corpus.T).T
else:
H = np.zeros((model.num_topics, len(test_corpus)))
for bow_id, bow in enumerate(test_corpus):
for topic_id, word_count in model[bow]:
H[topic_id, bow_id] = word_count
l2_norm = None
if not isinstance(model, LdaModel):
pred_factors = W.dot(H)
l2_norm = np.linalg.norm(pred_factors - dense_test_corpus)
l2_norm = round(l2_norm, 4)
f1 = None
if train_corpus and y_train and y_test:
f1 = get_f1(model, train_corpus, H.T, y_train, y_test)
f1 = round(f1, 4)
model.normalize = True
coherence = CoherenceModel(
model=model,
corpus=test_corpus,
coherence='u_mass'
).get_coherence()
coherence = round(coherence, 4)
topics = model.show_topics(5)
model.normalize = False
return dict(
coherence=coherence,
l2_norm=l2_norm,
f1=f1,
topics=topics,
)
tm_metrics = pd.DataFrame(columns=['model', 'train_time', 'coherence', 'l2_norm', 'f1', 'topics'])
y_train = [doc['target'] for doc in trainset]
y_test = [doc['target'] for doc in testset]
# LDA metrics
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(
corpus=train_corpus,
**fixed_params,
),
'lda',
0.1,
)
row.update(get_metrics(
lda, test_corpus, train_corpus, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# LSI metrics
row = {}
row['model'] = 'lsi'
row['train_time'], row['mean_ram'], row['max_ram'], lsi = get_train_time_and_ram(
lambda: LsiModel(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=2000,
),
'lsi',
0.1,
)
row.update(get_metrics(
lsi, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Sklearn NMF metrics
row = {}
row['model'] = 'sklearn_nmf'
train_csc_corpus_tfidf = matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: SklearnNmf(n_components=5, random_state=42).fit(train_csc_corpus_tfidf),
'sklearn_nmf',
0.1,
)
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test, dictionary,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Gensim NMF metrics
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], gensim_nmf = get_train_time_and_ram(
lambda: GensimNmf(
normalize=False,
corpus=train_corpus_tfidf,
**fixed_params
),
'gensim_nmf',
0.1,
)
row.update(get_metrics(
gensim_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
tm_metrics.replace(np.nan, '-', inplace=True)
tm_metrics.drop('topics', axis=1)
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
# Re-import modules from scratch, so that this Section doesn't rely on any previous cells.
import itertools
import json
import logging
import time
import os
from smart_open import smart_open
import psutil
import numpy as np
import scipy.sparse
from contextlib import contextmanager
from multiprocessing import Process
from tqdm import tqdm, tqdm_notebook
import joblib
import pandas as pd
from sklearn.decomposition.nmf import NMF as SklearnNmf
import gensim.downloader
from gensim import matutils
from gensim.corpora import MmCorpus, Dictionary
from gensim.models import LdaModel, LdaMulticore, CoherenceModel, LsiModel, TfidfModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.utils import simple_preprocess
tqdm.pandas()
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
data = gensim.downloader.load("wiki-english-20171001")
article = next(iter(data))
print("Article: %r\n" % article['title'])
for section_title, section_text in zip(article['section_titles'], article['section_texts']):
print("Section title: %r" % section_title)
print("Section text: %s…\n" % section_text[:100].replace('\n', ' ').strip())
def wikidump2tokens(articles):
    """Stream through the Wikipedia dump, yielding a list of tokens for each article."""
for article in articles:
article_section_texts = [
" ".join([title, text])
for title, text
in zip(article['section_titles'], article['section_texts'])
]
article_tokens = simple_preprocess(" ".join(article_section_texts))
yield article_tokens
if os.path.exists('wiki.dict'):
# If we already stored the Dictionary in a previous run, simply load it, to save time.
dictionary = Dictionary.load('wiki.dict')
else:
dictionary = Dictionary(wikidump2tokens(data))
# Keep only the 30,000 most frequent vocabulary terms, after filtering away terms
# that are too frequent/too infrequent.
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=30000)
dictionary.save('wiki.dict')
vector_stream = (dictionary.doc2bow(article) for article in wikidump2tokens(data))
class RandomSplitCorpus(MmCorpus):
    """Use the fact that MmCorpus supports random indexing, and create a streamed
    corpus in shuffled order, including a train/test split for evaluation.
    """
def __init__(self, random_seed=42, testset=False, testsize=1000, *args, **kwargs):
super().__init__(*args, **kwargs)
random_state = np.random.RandomState(random_seed)
self.indices = random_state.permutation(range(self.num_docs))
test_nnz = sum(len(self[doc_idx]) for doc_idx in self.indices[:testsize])
if testset:
self.indices = self.indices[:testsize]
self.num_docs = testsize
self.num_nnz = test_nnz
else:
self.indices = self.indices[testsize:]
self.num_docs -= testsize
self.num_nnz -= test_nnz
def __iter__(self):
for doc_id in self.indices:
yield self[doc_id]
if not os.path.exists('wiki.mm'):
MmCorpus.serialize('wiki.mm', vector_stream, progress_cnt=100000)
if not os.path.exists('wiki_tfidf.mm'):
    # Build the TF-IDF model from the Wikipedia dictionary; tfidf is not defined
    # earlier in this self-contained section, so create it here before use.
    tfidf = TfidfModel(dictionary=dictionary)
    MmCorpus.serialize('wiki_tfidf.mm', tfidf[MmCorpus('wiki.mm')], progress_cnt=100000)
# Load back the vectors as two lazily-streamed train/test iterables.
train_corpus = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki.mm',
)
test_corpus = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki.mm',
)
train_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki_tfidf.mm',
)
test_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki_tfidf.mm',
)
if not os.path.exists('wiki_train_csr.npz'):
scipy.sparse.save_npz(
'wiki_train_csr.npz',
matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T,
)
tm_metrics = pd.DataFrame(columns=[
'model', 'train_time', 'mean_ram', 'max_ram', 'coherence', 'l2_norm', 'topics',
])
params = dict(
chunksize=2000,
num_topics=50,
id2word=dictionary,
passes=1,
eval_every=10,
minimum_probability=0,
random_state=42,
)
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], nmf = get_train_time_and_ram(
lambda: GensimNmf(normalize=False, corpus=train_corpus_tfidf, **params),
'gensim_nmf',
1,
)
print(row)
nmf.save('gensim_nmf.model')
nmf = GensimNmf.load('gensim_nmf.model')
row.update(get_metrics(nmf, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
row = {}
row['model'] = 'lsi'
row['train_time'], row['mean_ram'], row['max_ram'], lsi = get_train_time_and_ram(
lambda: LsiModel(
corpus=train_corpus_tfidf,
chunksize=2000,
num_topics=50,
id2word=dictionary,
),
'lsi',
1,
)
print(row)
lsi.save('lsi.model')
lsi = LsiModel.load('lsi.model')
row.update(get_metrics(lsi, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(corpus=train_corpus, **params),
'lda',
1,
)
print(row)
lda.save('lda.model')
lda = LdaModel.load('lda.model')
row.update(get_metrics(lda, test_corpus))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
row = {}
row['model'] = 'sklearn_nmf'
sklearn_nmf = SklearnNmf(n_components=50, tol=1e-2, random_state=42)
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: sklearn_nmf.fit(scipy.sparse.load_npz('wiki_train_csr.npz')),
'sklearn_nmf',
10,
)
print(row)
joblib.dump(sklearn_nmf, 'sklearn_nmf.joblib')
sklearn_nmf = joblib.load('sklearn_nmf.joblib')
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, dictionary=dictionary,
))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
tm_metrics.replace(np.nan, '-', inplace=True)
tm_metrics.drop(['topics', 'f1'], axis=1)
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
import logging
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import ParameterGrid
import gensim.downloader
from gensim import matutils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, LdaMulticore
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from sklearn.base import BaseEstimator, TransformerMixin
import scipy.sparse as sparse
class NmfWrapper(BaseEstimator, TransformerMixin):
def __init__(self, bow_matrix, **kwargs):
self.corpus = sparse.csc.csc_matrix(bow_matrix)
self.nmf = GensimNmf(**kwargs)
def fit(self, X):
self.nmf.update(self.corpus)
@property
def components_(self):
return self.nmf.get_topics()
gensim.models.nmf.logger.propagate = False
============================
Faces dataset decompositions
============================
This example applies to :ref:`olivetti_faces` different unsupervised
matrix decomposition (dimension reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
:ref:`decompositions`) .
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF (Sklearn)',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Non-negative components - NMF (Gensim)',
NmfWrapper(
bow_matrix=faces.T,
chunksize=3,
eval_every=400,
passes=2,
id2word={idx: idx for idx in range(faces.shape[1])},
num_topics=n_components,
minimum_probability=0,
random_state=42,
),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time.time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time.time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator e.g its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset preparation
Step2: Create a train/test split
Step3: We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has major impact on any subsequent ML training.
Step4: Dictionary compilation
Step5: Create training corpus
Step6: Here we simply stored the bag-of-words vectors into a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python
Step7: View the learned topics
Step8: Evaluation measure
Step9: Topic inference on new documents
Step10: Word topic inference
Step11: Internal NMF state
Step12: Term-topic matrix of shape (words, topics).
Step13: Topic-document matrix for the last batch of shape (topics, batch)
Step14: 3. Benchmarks
Step15: Run the models
Step16: Benchmark results
Step17: Main insights
Step18: Subjectively, Gensim and Sklearn NMFs are on par with each other, while LDA and LSI look a bit worse.
Step19: Load the Wikipedia dump
Step20: Print the titles and sections of the first Wikipedia article, as a little sanity check
Step22: Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article
Step23: Create a word-to-id mapping, in order to vectorize texts. Makes a full pass over the Wikipedia corpus, takes ~3.5 hours
Step24: Store preprocessed Wikipedia as bag-of-words sparse matrix in MatrixMarket format
Step26: For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to wiki.mm file in MatrixMarket format. The reason is, we'll be re-using the vectorized articles multiple times, for different models for our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
Step27: Save preprocessed Wikipedia in scipy.sparse format
Step28: Metrics
Step29: Define common parameters, to be shared by all evaluated models
Step30: Train Gensim NMF model and record its metrics
Step31: Train Gensim LSI model and record its metrics
Step32: Train Gensim LDA and record its metrics
Step33: Train Sklearn NMF and record its metrics
Step34: Wikipedia results
Step35: Insights
Step36: It seems all four models successfully learned useful topics from the Wikipedia corpus.
Step38: Modified face decomposition notebook
|
1,946
|
<ASSISTANT_TASK:>
Python Code:
mod = 1000000007
arr =[[ None for i in range(1001 ) ] for j in range(1001 ) ]
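# Preprocess() fills arr with binomial coefficients C(i, j) mod 1e9+7 (Pascal's triangle),
# powmod() is fast modular exponentiation, and CountSubset() uses both: it starts from
# 2^(n-1) subsets and adds correction terms for repeated values in the sorted array val.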
def Preprocess() :
arr[0 ][0 ] = 1
for i in range(1 , 1001 ) :
arr[i ][0 ] = 1
for j in range(1 , i ) :
arr[i ][j ] =(arr[i - 1 ][j - 1 ] + arr[i - 1 ][j ] ) % mod
arr[i ][i ] = 1
def powmod(a , n ) :
if not n :
return 1
pt = powmod(a , n // 2 )
pt =(pt * pt ) % mod
if n % 2 :
return(pt * a ) % mod
else :
return pt
def CountSubset(val , n ) :
ans = powmod(2 , n - 1 )
val . sort()
for i in range(0 , n ) :
j = i + 1
while j < n and val[j ] == val[i ] :
r = n - 1 - j
l = i
ans =(ans + arr[l + r ][l ] ) % mod
j += 1
return ans
if __name__ == "__main__":
Preprocess()
val =[2 , 3 , 2 ]
n = len(val )
print(CountSubset(val , n ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,947
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
data_path = bst_phantom_elekta.data_path(verbose=True)
subject = 'sample'
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG1933', 'MEG2421']
raw.plot_psd(tmax=30., average=False)
raw.plot(events=events)
tmin, tmax = -0.1, 0.1
bmax = -0.05 # Avoid capture filter ringing into baseline
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, bmax),
preload=False)
epochs['1'].average().plot(time_unit='s')
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
mne.viz.plot_alignment(epochs.info, subject=subject, show_axes=True,
bem=sphere, dig=True, surfaces='head')
# here we can get away with using method='oas' for speed (faster than "shrunk")
# but in general "shrunk" is usually better
cov = mne.compute_covariance(epochs, tmax=bmax)
mne.viz.plot_evoked_white(epochs['1'].average(), cov)
data = []
t_peak = 0.036 # true for Elekta phantom
for ii in event_id:
# Avoid the first and last trials -- can contain dipole-switching artifacts
evoked = epochs[str(ii)][1:-1].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs
dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=1)
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts.clear()
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %0.1f mm' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1))))
print(u'mean(angle error) = %0.1f°' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel(u'Angle error (°)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
actual_amp = np.ones(len(dip)) # misc amp to create Dipole instance
actual_gof = np.ones(len(dip)) # misc GOF to create Dipole instance
dip_true = \
mne.Dipole(dip.times, actual_pos, actual_amp, actual_ori, actual_gof)
fig = mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces='inner_skull',
coord_frame='head', meg='helmet', show_axes=True)
# Plot the position and the orientation of the actual dipole
fig = mne.viz.plot_dipole_locations(dipoles=dip_true, mode='arrow',
subject=subject, color=(0., 0., 0.),
fig=fig)
# Plot the position and the orientation of the estimated dipole
fig = mne.viz.plot_dipole_locations(dipoles=dip, mode='arrow', subject=subject,
color=(0.2, 1., 0.5), fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=70, elevation=80, distance=0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
Step2: Data channel array consisted of 204 MEG planar gradiometers,
Step3: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
Step4: Our phantom produces sinusoidal bursts at 20 Hz
Step5: Now we epoch our data, average it, and look at the first dipole response.
Step6: Let's use a sphere head geometry model <eeg_sphere_model>
Step7: Let's do some dipole fits. We first compute the noise covariance,
Step8: Do a quick visualization of how much variance we explained, putting the
Step9: Now we can compare to the actual locations, taking the difference in mm
Step10: Let's plot the positions and the orientations of the actual and the estimated
|
1,948
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10, activation='softmax')
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
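# @tf.function traces the Python functions below into TensorFlow graphs,
# which speeds up repeated execution of the training and test steps.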
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
# training=True is only needed if there are layers that behave
# differently during training versus inference (e.g. Dropout).
predictions = model(images, training=True)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
@tf.function
def test_step(images, labels):
# training=True is only needed if there are layers that behave
# differently during training versus inference (e.g. Dropout).
predictions = model(images, training=False)
t_loss = loss_object(labels, predictions)
test_loss(t_loss)
test_accuracy(labels, predictions)
EPOCHS = 5
for epoch in range(EPOCHS):
# Reset the metrics at the start of the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
for images, labels in train_ds:
train_step(images, labels)
for test_images, test_labels in test_ds:
test_step(test_images, test_labels)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print(template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow 2 quickstart for experts
Step2: Load and prepare the [MNIST dataset] (http
Step3: Use tf.data to batch and shuffle the dataset
Step4: Build the tf.keras model using the Keras [model subclassing API] (https
Step5: Choose an optimizer and a loss function for training
Step6: Select metrics to measure the model's loss and accuracy. These metrics accumulate values across epochs and then print the overall result.
Step7: Use tf.GradientTape to train the model
Step8: Test the model
|
1,949
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
# Load data
iris = datasets.load_iris()
X = iris.data
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create agglomerative clustering object
clt = AgglomerativeClustering(linkage='complete',
affinity='euclidean',
n_clusters=3)
# Train model
model = clt.fit(X_std)
# Show cluster membership
model.labels_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Data
Step2: Standardize Features
Step3: Conduct Agglomerative Clustering
Step4: Show Cluster Membership
|
1,950
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import netCDF4
import matplotlib.pyplot as plt
dataurl = "http://thredds.socib.es/thredds/dodsC/mooring/conductivity_and_temperature_recorder/buoy_canaldeibiza-scb_sbe37006/L1/dep0003_buoy-canaldeibiza_scb-sbe37006_L1_latest.nc"
with netCDF4.Dataset(dataurl) as ds:
temperature_values = ds.variables['WTR_TEM_SBE37'][:]
time_values = ds.variables['time'][:]
temperature_units = ds.variables['WTR_TEM_SBE37'].units
time_units = ds.variables['time'].units
with netCDF4.Dataset(dataurl) as ds:
print ds.variables.keys()
plt.plot(time_values, temperature_values)
plt.xlabel(time_units, fontsize=20)
plt.ylabel(temperature_units, fontsize=20)
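# netCDF4.num2date converts the numeric time values into datetime objects using the
# CF-style units string (here, seconds since 1970-01-01), which makes the x-axis readable.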
time2 = netCDF4.num2date(time_values, time_units)
print time2[0:5]
plt.plot(time2, temperature_values)
plt.ylabel(temperature_units, fontsize=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use data from a mooring located in the Ibiza Channel.
Step2: Read the file
Step3: The variable storing the temperature is not called temp or TEMP, but WTR_TEM_SBE37. The variable names can be obtained by checking the link on the thredds server or by running
Step4: In another example we will see how to do it in a more clever way.
Step5: With the time units in seconds since 1st January 1970, it is not easy to see which period the curve represents.<br/>
|
1,951
|
<ASSISTANT_TASK:>
Python Code:
# Librerias utilizadas
import pandas as pd
import sys
import urllib
import os
import numpy as np
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
# LIGAS PARA DESCARGA DE ARCHIVOS
# Las ligas para descarga tienen una raiz URL común que cambia
# dependiendo del indicador y estado que se busque descargar
url = r'http://www.beta.inegi.org.mx/contenidos/Proyectos/enchogares/especiales/intercensal/2015/tabulados/'
indicador = r'14_vivienda_'
raiz = url+indicador
links = {
'01' : raiz+'ags.xls',
'02' : raiz+'bc.xls',
'03' : raiz+'bcs.xls',
'04' : raiz+'cam.xls',
'05' : raiz+'coah.xls',
'06' : raiz+'col.xls',
'07' : raiz+'chis.xls',
'08' : raiz+'chih.xls',
'09' : raiz+'cdmx.xls',
'10' : raiz+'dgo.xls',
'11' : raiz+'gto.xls',
'12' : raiz+'gro.xls',
'13' : raiz+'hgo.xls',
'14' : raiz+'jal.xls',
'15' : raiz+'mex.xls',
'16' : raiz+'mich.xls',
'17' : raiz+'mor.xls',
'18' : raiz+'nay.xls',
'19' : raiz+'nl.xls',
'20' : raiz+'oax.xls',
'21' : raiz+'pue.xls',
'22' : raiz+'qro.xls',
'23' : raiz+'qroo.xls',
'24' : raiz+'slp.xls',
'25' : raiz+'sin.xls',
'26' : raiz+'son.xls',
'27' : raiz+'tab.xls',
'28' : raiz+'tamps.xls',
'29' : raiz+'tlax.xls',
'30' : raiz+'ver.xls',
'31' : raiz+'yuc.xls',
'32' : raiz+'zac.xls'
}
print(links['09'])
# Descarga de archivos a carpeta local
destino = r'D:\PCCS\00_RawData\01_CSV\Intercensal2015\estatal\14. Vivienda'
archivos = {} # Diccionario para guardar memoria de descarga
for k,v in links.items():
archivo_local = destino + r'\{}.xls'.format(k)
if os.path.isfile(archivo_local):
print('Ya existe el archivo: {}'.format(archivo_local))
archivos[k] = archivo_local
else:
print('Descargando {} ... ... ... ... ... '.format(archivo_local))
urllib.request.urlretrieve(v, archivo_local) #
archivos[k] = archivo_local
print('se descargó {}'.format(archivo_local))
pd.options.display.max_colwidth = 150
df = pd.read_excel(archivos['01'],
sheetname = 'Índice',
skiprows = 6,
usecols = ['Tabulado', 'Título'],
dtype = {'Tabulado' : 'str'},
).set_index('Tabulado')
df
# Function to extract data from a standard-format sheet
# The function expects the following values:
# --- entidad: [str] 2-digit state geostatistical code
# --- ruta: [str] path to the Excel file containing the information
# --- hoja: [str] number of the sheet within the Excel file to be processed
# --- colnames: [list] names for the data columns (the columns in the files of this
# dataset have to be named manually because of how the headers are laid out
# in the source files)
# --- skip: [int] the number of rows in the sheet that the script has to skip before
# reaching the header row
def cargahoja(entidad, ruta, hoja, colnames, skip):
# Abre el archivo de excel
raw_data = pd.read_excel(ruta,
sheetname=hoja,
skiprows=skip).dropna()
# renombra las columnas
raw_data.columns = colnames
# Obten Unicamente las filas con valores estimativos
raw_data = raw_data[raw_data['Estimador'] == 'Valor']
# Crea la columna CVE_MUN
raw_data['CVE_ENT'] = entidad
raw_data['ID_MUN'] = raw_data.Municipio.str.split(' ', n=1).apply(lambda x: x[0])
raw_data['CVE_MUN'] = raw_data['CVE_ENT'].map(str) + raw_data['ID_MUN']
# Borra columnas con informacion irrelevante o duplicada
del (raw_data['CVE_ENT'])
del (raw_data['ID_MUN'])
del (raw_data['Entidad federativa'])
del (raw_data['Estimador'])
raw_data.set_index('CVE_MUN', inplace=True)
return raw_data
# correr funcion sobre todos los archivos
colnames = ['Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Pisos_Tierra',
'Pisos_Cemento o firme',
'Pisos_Mosaico, madera u otro recubrimiento',
'Pisos_No especificado']
DatosPiso = {}
for k,v in archivos.items():
print('Procesando {}'.format(v))
hoja = cargahoja(k, v, '02', colnames, 7)
DatosPiso[k] = hoja
PisosDF = pd.DataFrame()
for k,v in DatosPiso.items():
PisosDF = PisosDF.append(v)
PisosDF = PisosDF[PisosDF['Municipio'] != 'Total']
PisosDF.describe()
# Se define un diccionario con la siguiente sintaxis: 'NUMERO DE HOJA' : [LISTA DE COLUMNAS]
dicthojas = {
'08' : [ # Combustible utilizado para cocinar
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Cocina_con_Lena o carbon',
'Cocina_con_Gas',
'Cocina_con_Electricidad',
'Cocina_con_Otro_Combustible',
'Cocina_con_Los_ocupantes_no_cocinan',
'Cocina_con_no_especificado'
],
'09' : [ # Utilizan leña o carbón para cocinar y distribucion porcentual segun disponibilidad de estufa o fogon
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas en las que sus ocupantes utilizan leña o carbon para cocinar',
'Dispone_de_estufa_o_fogon_con_chimenea',
'No dispone_de_estufa_o_fogon_con_chimenea',
'Estufa_o_fogon_no_especificado'
],
'16' : [ # Viviendas con electricidad
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Disponen_de_electricidad',
'No_disponen_de_electricidad',
'No_especificado_de_electricidad'
],
'19' : [ # Forma de eliminación de residuos
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Entregan_residuos_a_servicio_publico_de_recoleccion',
'Tiran_residuos_en_basurero_publico_colocan_en_contenedor_o_deposito',
'Queman_residuos',
'Entierran_residuos_o_tiran_en_otro_lugar',
'Eliminacion_de_residuos_no_especificado',
],
'20' : [ # Viviendas que entregan sus residuos al servicio publico y distribucion porcentual por condición de separacion
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas en las que entregan los residuos al servicio publico',
'Separan_organicos_inorganicos',
'No_separan_organicos_inorganicos',
'Separan_residuos_No_especificado'
],
'21' : [ # Separación y reutilización de residuos
'Entidad federativa',
'Municipio',
'Forma de reutilizacion de residuos',
'Estimador',
'Viviendas particulares habitadas',
'Reutilizan_residuos',
'No_reutilizan_residuos',
'No_especificado_reutilizan_residuos',
],
'23' : [ # Disponibilidad y tipo de equipamiento
'Entidad federativa',
'Municipio',
'Tipo de equipamiento',
'Estimador',
'Viviendas particulares habitadas',
'Dispone_de_Equipamiento',
'No_dispone_de_Equipamiento',
'No_especificado_dispone_de_Equipamiento'
],
'24' : [ # Disponibilidad de agua entubada según disponibilidad y acceso
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Entubada_Total',
'Entubada_Dentro_de_la_vivienda',
'Entubada_Fuera_de_la_vivienda,_pero_dentro_del_terreno',
'Acarreo_Total',
'Acarreo_De_llave_comunitaria',
'Acarreo_De_otra_vivienda',
'Acarreo_De_una_pipa',
'Acarreo_De_un_pozo',
'Acarreo_De_un_río_arroyo_o_lago',
'Acarreo_De_la_recolección_de_lluvia',
'Acarreo_Fuente_No_especificada',
'Entubada_o_Acarreo_No_especificado'
],
'25' : [ # Disponibilidad de agua entubada según fuente de abastecimiento
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares que disponen de agua entubada',
'Agua_entubada_de_Servicio_Publico',
'Agua_entubada_de_Pozo_comunitario',
'Agua_entubada_de_Pozo_particular',
'Agua_entubada_de_Pipa',
'Agua_entubada_de_Otra_Vivienda',
'Agua_entubada_de_Otro_lugar',
'Agua_entubada_de_No_especificado'
],
'26' : [ # Disponibilidad de drenaje y lugar de desalojo
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Drenaje_Total',
'Drenaje_desaloja_a_Red_publica',
'Drenaje_desaloja_a_Fosa_Septica_o_Tanque_Septico',
'Drenaje_desaloja_a_Barranca_o_Grieta',
'Drenaje_desaloja_a_Rio_lago_o_mar',
'No_Dispone_de_drenaje',
'Dispone_drenaje_No_especificado',
]
}
skiprows = {
'02' : 7, # Tipo de piso
'08' : 7, # Combustible utilizado para cocinar
'09' : 7, # Utilizan leña o carbón para cocinar y distribucion porcentual segun disponibilidad de estufa o fogon
'16' : 7, # disponibilidad de energía eléctrica
'19' : 7, # Forma de eliminación de residuos
'20' : 8, # Viviendas que entregan sus residuos al servicio publico y distribucion porcentual por condición de separacion
'21' : 7, # Separación y reutilización de residuos
'23' : 7, # Disponibilidad y tipo de equipamiento
'24' : 8, # Disponibilidad de agua entubada según disponibilidad y acceso
'25' : 7, # Disponibilidad de agua entubada según fuente de abastecimiento
'26' : 8, # Disponibilidad de drenaje y lugar de desalojo
}
HojasDatos = {}
for estado, archivo in archivos.items():
print('Procesando {}'.format(archivo))
hojas = {}
for hoja, columnas in dicthojas.items():
print('---Procesando hoja {}'.format(hoja))
dataset = cargahoja(estado, archivo, hoja, columnas, skiprows[hoja])
if hoja not in HojasDatos.keys():
HojasDatos[hoja] = {}
HojasDatos[hoja][estado] = dataset
# Procesado de diccionarios para obtener datasets estándar
DSstandar = {}
for hoja, estado in HojasDatos.items():
print('Procesando hoja {}'.format(hoja))
tempDS = pd.DataFrame()
for cve_edo, datos in estado.items():
tempDS = tempDS.append(datos)
print('---Se agregó CVE_EDO {} a dataframe estandar'.format(cve_edo))
DSstandar[hoja] = tempDS
for hoja in DSstandar.keys():
temphoja = DSstandar[hoja]
temphoja = temphoja[temphoja['Municipio'] != 'Total']
DSstandar[hoja] = temphoja
# Function to extract the notes from a sheet
# It expects the following inputs:
# --- ruta: [str] path to the source dataset's data file
# --- skip: [int] the number of rows in the sheet that the script has to skip before
# reaching the header row
def getnotes(ruta, skip):
tempDF = pd.read_excel(ruta, sheetname=hoja, skiprows=skip) # Carga el dataframe de manera temporal
c1 = tempDF['Unnamed: 0'].dropna() # Carga únicamente la columna 1, que contiene las notas, sin valores NaN
c1.index = range(len(c1)) # Reindexa la serie para compensar los NaN eliminados en el comando anterior
indice = c1[c1.str.contains('Nota')].index[0] # Encuentra el renglon donde inician las notas
rows = range(indice, len(c1)) # Crea una lista de los renglones que contienen notas
templist = c1.loc[rows].tolist() # Crea una lista con las notas
notas = []
for i in templist:
notas.append(i.replace('\xa0', ' ')) # Guarda cada nota y reemplaza caracteres especiales por espacios simples
return notas
listanotas = {}
for archivo, ruta in archivos.items():
print('Procesando {} desde {}'.format(archivo, ruta))
for hoja in skiprows.keys(): # Los keys del diccionario 'skiprows' son una lista de las hojas a procesar
if hoja not in listanotas.keys():
listanotas[hoja] = {}
listanotas[hoja][archivo] = getnotes(ruta, skiprows[hoja])
notasunicas = [] # Inicia con una lista vacía
for hoja, dict in listanotas.items(): # Itera sobre el diccionario con todas las notas
for estado, notas in dict.items(): # Itera sobre el diccionario de estados de cada hoja
for nota in notas: # Itera sobre la lista de notas que tiene cada estado
if nota not in notasunicas: # Si la nota no existe en la lista:
print('Estado: {} / Hoja {} / : Nota: {}'.format(estado, hoja, nota)) # Imprime la nota y donde se encontró
notasunicas.append(nota) # Agrega la nota al diccionario
for nota in notasunicas:
print(nota)
# Creacion de metadatos comunes
metadatos = {
'Nombre del Dataset': 'Encuesta Intercensal 2015 - Tabulados de Vivienda',
'Descripcion del dataset': np.nan,
'Disponibilidad Temporal': '2015',
'Periodo de actualizacion': 'No Determinada',
'Nivel de Desagregacion': 'Municipal',
'Notas': 'Los límites de confianza se calculan al 90 por ciento.' \
'\n1 Excluye las siguientes clases de vivienda: locales no construidos para habitación, viviendas móviles y refugios.' \
'\n* Municipio censado.' \
'\n** Municipio con muestra insuficiente.',
'Fuente': 'INEGI (Microdatos)',
'URL_Fuente': 'http://www.beta.inegi.org.mx/proyectos/enchogares/especiales/intercensal/',
'Dataset base': np.nan,
}
DSstandar['02'] = PisosDF
# Script for writing standard datasets.
# The function expects the following values:
# --- hoja: (str) sheet number
# --- dataset: (Pandas DataFrame) data carried by the sheet
# --- metadatos: (dict) metadata common to all sheets
# --- desc_hoja: (str) description of the sheet's contents
def escribedataset(hoja, dataset, metadatos, desc_hoja):
# Compilación de la información
datasetbaseurl = r'https://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/EI2015'
directoriolocal = r'D:\PCCS\01_Dmine\Datasets\EI2015'
archivo = hoja + '.xlsx'
tempmeta = metadatos
tempmeta['Descripcion del dataset'] = desc_hoja
tempmeta['Dataset base'] = '"' + archivo + '" disponible en \n' + datasetbaseurl
tempmeta = pd.DataFrame.from_dict(tempmeta, orient='index')
tempmeta.columns = ['Descripcion']
tempmeta = tempmeta.rename_axis('Metadato')
# Escritura de dataset estándar
destino = directoriolocal + '\\' + archivo
writer = pd.ExcelWriter(destino)
tempmeta.to_excel(writer, sheet_name ='METADATOS')
dataset.to_excel(writer, sheet_name = hoja)
writer.save()
print('Se guardó: "{}" en \n{}'.format(desc_hoja, destino))
for hoja, dataset in DSstandar.items():
print('Procesando hoja '+hoja)
escribedataset(hoja, dataset, metadatos, df.loc[hoja][0])
for hoja in DSstandar.keys():
print('**{}.xlsx**|{}'.format(hoja, df.loc[hoja][0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data is downloaded from INEGI's Beta site. The 2015 Intercensal Survey (Encuesta Intercensal 2015) data can be found at http
Step2: The links are stored in a Python dictionary where key = 'State Geostatistical Code' and value = 'download link'. For example, '09' is the geostatistical code for Mexico City. If we ask the links dictionary for the value of key '09', it returns the link to download the housing indicators for Mexico City, as shown below
Step3: With the dictionary of links it is now possible to download the files into a local folder so they can be processed.
Step4: Every file has the same structure and contains the 2015 housing data collected in the intercensal survey. The first sheet, 'Índice', includes a list of the sheets and data contained in each workbook. This index will be taken as the reference for the data mining
Step5: The 'Tabulado' column contains the sheet name, while 'Título' describes its data. The following sheets will be used to build the PCCS parameters
Step6: 2. Run the function over all the Excel files to extract the data from sheet 02
Step7: The data was stored as a Python dictionary; it has to be converted into a single DataFrame before the final cleanup.
Step8: 3. Final cleanup of the 'Pisos' DataFrame
Step9: SHEETS 08, 09, 16, 19, 20, 21, 23, 24, 25, 26
Step10: --- 2.2. Besides defining the columns, it is necessary to define how many rows the script has to skip before finding the headers. These rows are defined below in a dictionary
Step11: 2. Run the function over all the Excel files
Step12: The resulting dictionary contains the data of each sheet grouped by state. However, the dictionary structure still has to be processed to obtain standard dataframes.
Step13: 3. Final cleanup of the DataFrames
Step14: Extraction of the notes specific to each sheet
Step15: 2. Run the function over each sheet and extract the notes
Step16: Once all the notes have been extracted from the sheets, an iterative script is run to check in which notes the nomenclature varies. This script works as follows
Step17: Thanks to this script we can see that the nomenclature is standard across all sheets, and it can be added as a note in the metadata of every standard dataset.
Step18: The dataset processed at the beginning of this study (corresponding to sheet 02) is added, together with the other datasets, to the dictionary of standard datasets.
Step19: The function for writing the standard datasets is the following
Step20: Once the dataset-writing function is defined, it is run iteratively over the data
Step21: At the end of the process, 10 standard datasets were generated
|
1,952
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed for this example (the original notebook most likely relied on "%pylab inline")
from pylab import *
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
R=5.
def z(x,y):
return sqrt(x**2+y**2+R**2.)
x = linspace(-10,10,100) #Definiendo el dominio en x
y = linspace(-10,10,100) #Definiendo el dominio en y
X, Y = meshgrid(x, y) #Formando la grilla x,y
fig = figure(figsize=(6,6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, z(X,Y), cmap=cm.jet,rstride=3, cstride=3, alpha=1)
ax.set_xlabel('$x$',fontsize=18)
ax.set_ylabel('$y$',fontsize=18)
ax.set_zlabel('$z(x,y)$',fontsize=18)
#show()
fig = figure(figsize=(6,6))
ax = fig.gca(projection='3d')
surf1 = ax.plot_surface(X, Y, z(X,Y), cmap=cm.jet,rstride=3, cstride=3, alpha=1)
surf2 = ax.plot_surface(X, Y, -z(X,Y), cmap=cm.jet,rstride=3, cstride=3, alpha=1)
ax.set_xlabel('$x$',fontsize=18)
ax.set_ylabel('$y$',fontsize=18)
ax.set_zlabel('$z(x,y)$',fontsize=18)
#show()
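# Next, parametrize the surface of revolution with r(z) = sqrt(z^2 + R^2):
# x^2 + y^2 - z^2 = R^2 is a hyperboloid of one sheet, which degenerates to a cone when R = 0.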
def r(z):
return sqrt(z**2+R**2.)
z = linspace(-15,15,100)
ph = linspace(0,2.*pi,100)
Z,PHI = meshgrid(z,ph)
X=r(Z)*cos(PHI) #Definiendo dominio en x
Y=r(Z)*sin(PHI) #Definiendo dominio en y
fig = figure(figsize=(6,6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap=cm.jet,rstride=3, cstride=3, alpha=1)
ax.set_xlabel('$x$',fontsize=18)
ax.set_ylabel('$y$',fontsize=18)
ax.set_zlabel('$z(x,y)$',fontsize=18)
#show()
R=0
X=r(Z)*cos(PHI) #Definiendo dominio en x
Y=r(Z)*sin(PHI) #Definiendo dominio en y
fig = figure(figsize=(6,6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap=cm.jet,rstride=3, cstride=3, alpha=1)
ax.set_xlabel('$x$',fontsize=18)
ax.set_ylabel('$y$',fontsize=18)
ax.set_zlabel('$z(x,y)$',fontsize=18)
#show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Everything looks a bit better using polar coordinates in the xy plane
Step2: Cone (R=0)
|
1,953
|
<ASSISTANT_TASK:>
Python Code:
import numpy as NUM
import pylab as PYLAB
import arcpy as ARCPY
import numpy as NUM
import SSDataObject as SSDO
import scipy as SCIPY
import pandas as PANDAS
inputFC = r'../data/CA_Polygons.shp'
ssdo = SSDO.SSDataObject(inputFC)
ssdo.obtainData(ssdo.oidName, ['PCR2008', 'POPDEN08', 'PERCNOHS', 'MAJORO'])
ids = [ssdo.order2Master[i] for i in range(ssdo.numObs)]
convertDictDF = {}
for fieldName in ssdo.fields.keys():
convertDictDF[fieldName] = ssdo.fields[fieldName].data
df = PANDAS.DataFrame(convertDictDF, index = ids)
print(df[0:5])
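# The rpy2 IPython magics below push the pandas DataFrame into R ("%R -i df"),
# fit a logistic regression with rms::lrm, and pull the coefficients and p-values back with "-o".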
%load_ext rpy2.ipython
#%reload_ext rpy2.ipython
%R -i df
%R library(rms)
%R logit = lrm(MAJORO ~ PCR2008 + POPDEN08 + PERCNOHS, data = df, x = TRUE, y = TRUE)
%R z_scores = logit$coefficients / sqrt(diag(logit$var))
%R -o logit_coef logit_coef = logit$coefficients
%R -o p_values p_values = pnorm(abs(z_scores), lower.tail = FALSE) * 2.0
print("Coefficients")
py_coef = NUM.array(logit_coef)
print(py_coef)
print("p_values")
py_pvalues = NUM.array(p_values)
print(py_pvalues)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize Data Object, Select Fields and Obtain Data
Step2: Make Use of PANDAS Data Frame
Step3: Push PANDAS Data Frame to R Data Frame - Use the -i flag
Step4: Analyze in R
Step5: Pull Results Back to Python - Use the -o flag
|
1,954
|
<ASSISTANT_TASK:>
Python Code:
# Import the libraries used
import pandas as pd
import numpy as np
import pickle
import functools
from tqdm import tqdm
import keras.models as km
import keras.layers as kl
N = 100000
DATA_DIR = ""
X = np.load(DATA_DIR+"data/description_coque.npy")[:N]
print(X.shape)
print(X[:3])
Nd=197
chars = list(functools.reduce(lambda x,y : x.union(y), [set(x) for x in X], set()))
print("Vocabulaire : " + str(chars))
chars.extend(["start","end"])
Nv = len(chars)
print("Taille du vocabulaire : %d" %Nv)
int_to_char = {i:c for i,c in enumerate(chars)}
char_to_int = {c:i for i,c in int_to_char.items()}
I_START = char_to_int["start"]
I_END = char_to_int["end"]
def encode_input_output_sequence(x, length_sequence, size_vocab, char_to_int_dic, i_start, i_end):
n = x.shape[0]
x_vec = np.zeros((n,length_sequence, size_vocab))
y_vec = np.zeros((n,length_sequence, size_vocab))
x_vec[:,0,i_start] = 1
y_vec[:,-1,i_end] = 1
for ix,x in tqdm(enumerate(x)):
for ic,c in enumerate(x):
c_int = char_to_int_dic[c]
x_vec[ix,ic+1,c_int]=1
y_vec[:,:-1,:] = x_vec[:,1:,:]
return x_vec, y_vec
X_vec, Y_vec = encode_input_output_sequence(X[:N], Nd+1, Nv, char_to_int,I_START,I_END)
X_vec.shape
# %load solution/3_1.py
nb_hidden = 32
epochs = 20
batch_size= 128
model = km.Sequential()
model.add(kl.LSTM(nb_hidden, input_shape=(None, Nv), return_sequences=True))
model.add(kl.TimeDistributed(kl.Dense(Nv)))
model.add(kl.Activation('softmax'))
model.summary()
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
model.fit(X_vec, Y_vec, epochs=epochs, batch_size=batch_size)
model.save("data/generate_model.h5")
x_pred = np.zeros((1, Nd+1, Nv))
print("step 0")
x_pred[0,0,I_START] =1
x_pred_str = decode_sequence(x_pred[0], int_to_char)
print(x_pred_str)
for i in range(Nd):
ix = np.argmax(model.predict(x_pred[:,:i+1,:])[0][-1,:])
x_pred[0,i+1,ix] = 1
x_pred_str=decode_sequence(x_pred[0], int_to_char)
print(x_pred_str)
#%load solution/3_2.py
#%load solution/3_3.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Downloading the data
Step2: Exercise Check that all the sequences are indeed of length 197.
Step3: Formatting the data
Step4: We add to this vocabulary two markers that locate the start and the end of each description
Step5: Creating the dictionaries
Step6: Encoding the descriptions
Step7: Exercise Recover the original sentence from the encoded test sentence shown below. Check that x and y are indeed the same descriptions, only shifted by one index
Step8: Training
Step9: Q Why is categorical_crossentropy used as the loss function?
Step10: Q How is this generation produced?
Step11: Exercise Perform a generation while adding randomness. You can, for example, make each letter be drawn according to a multinomial distribution.
|
1,955
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# The Dataset comes from:
# https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
# Load up the data.
with open('../Datasets/optdigits.tes', 'r') as f: testing = pd.read_csv(f)
with open('../Datasets/optdigits.tra', 'r') as f: training = pd.read_csv(f)
# The number of samples between training and testing can vary
# But the number of features better remain the same!
n_features = testing.shape[1]
X_test = testing.iloc[:,:n_features-1]
X_train = training.iloc[:,:n_features-1]
y_test = testing.iloc[:,n_features-1:].values.ravel()
y_train = training.iloc[:,n_features-1:].values.ravel()
print (n_features)
import matplotlib.pyplot as plt
# The 'targets' or labels are stored in y. The 'samples' or data is stored in X
print ("Peeking the data...")
fig = plt.figure()
cnt = 0
for col in range(5):
for row in range(10):
plt.subplot(5, 10, cnt + 1)
plt.imshow(X_train.iloc[cnt,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
plt.axis('off')
cnt += 1
fig.set_tight_layout(True)
plt.show()
from sklearn import svm # Library for Support Vector Machines
#
# Create and train an SVM classifier.
print ("Training SV Classifier...")
svc = svm.SVC(kernel='linear')
svc.fit(X_train, y_train)
#
# Print out the TRUE value of the 1000th digit in the test set
# By TRUE value, we mean, the actual provided label for that sample
#
true_1000th_test_value = y_test[999]
print ("1000th test label is: ", true_1000th_test_value)
#
# Predict the value of the 1000th digit in the test set.
# Was the model's prediction correct?
#
guess_1000th_test_value = svc.predict(X_test[999:1000])
print ("1000th test prediction is: ", guess_1000th_test_value)
#
# Use IMSHOW to display the 1000th test image
#
#
plt.imshow(X_test.iloc[999,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest');
# Visual Confirmation of accuracy
fig = plt.figure()
# Make some guesses
y_guess = svc.predict(X_test)
num_rows = 10
num_cols = 5
index = 0
for col in range(num_cols):
for row in range(num_rows):
plt.subplot(num_cols, num_rows, index + 1)
# 8x8 is the size of the image, 64 pixels
plt.imshow(X_test.iloc[index,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
# Green = Guessed right
# Red = Fail!
fontcolor = 'g' if y_test[index] == y_guess[index] else 'r'
plt.title('Label: %i' % y_guess[index], fontsize=7, color=fontcolor)
plt.axis('off')
index += 1
fig.set_tight_layout(True)
plt.show()
# Calculate the score of the SVC against the testing data
print ("Scoring SVM Classifier...")
#
score = svc.score(X_test, y_test)
print ("Score: ", score)
#
# We start with the POLY kernel
svc = svm.SVC(kernel='poly', C=1.0, gamma=0.001)
svc.fit(X_train, y_train)
# Calculate the score of the SVC against the testing data
print ("Scoring SV poly Classifier...")
score = svc.score(X_test, y_test)
print ("Score: ", score)
#
# change SVC's kernel to 'rbf'
svc = svm.SVC(kernel='rbf', C=1.0, gamma=0.001)
svc.fit(X_train, y_train)
# Calculate the score of SVC against the testing data
print ("Scoring SVM rbf Classifier...")
score = svc.score(X_test, y_test)
print ("Score: ", score)
X = pd.read_csv("../Datasets/parkinsons.data")
X.drop(['name'], axis=1, inplace=True) # drop name column
y = X.status.copy() # copy “y” values out from status
X.drop(['status'], axis=1, inplace=True) # drop status column
# Perform a train/test split. 30% test group size, with a random_state equal to 7.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=7)
from sklearn import preprocessing
# tried with different scaler, standard is the best
scaler = preprocessing.StandardScaler() # best score was 0.932203389831
#scaler = preprocessing.MinMaxScaler() # best score was 0.881355932203
#scaler = preprocessing.MaxAbsScaler() # best score was 0.881355932203
#scaler = preprocessing.Normalizer() # best score was 0.796610169492
#scaler = preprocessing.KernelCenterer() # best score was 0.915254237288
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
from sklearn.decomposition import PCA
from sklearn import manifold
usePCA = False # change this to use PCA as dimensionality reducer
if usePCA:
reducer = PCA(n_components=7).fit(X_train)
else:
reducer = manifold.Isomap(n_neighbors=3, n_components=6).fit(X_train)
X_train = reducer.transform(X_train)
X_test = reducer.transform(X_test)
import numpy as np
# a naive, best-parameter search using nested for-loops.
best_score = 0
for c in np.arange(0.05,2,0.05):
for gamma in np.arange(0.001, 0.1, 0.001):
svc = svm.SVC(kernel='rbf', C=c, gamma=gamma)
svc.fit(X_train, y_train)
score = svc.score(X_test, y_test)
if score > best_score:
best_score = score
#print ("New best score:", score, "using C= ", c, "and gamma = ", gamma)
print(f"New best score: {score:.3f} using C= {c:.2f} and gamma = {gamma:.3f}")
#
# INFO: Parameters can be adjusted here
C = 1
kernel = 'linear'
iterations = 100
#
# INFO: You can set this to false if you want to
# draw the full square matrix
FAST_DRAW = True
#
# Load up the wheat dataset into dataframe 'X'
#
df = pd.read_csv("../Datasets/wheat.data", index_col='id')
# INFO: An easy way to show which rows have nans in them
print (df[pd.isnull(df).any(axis=1)])
#
# Go ahead and drop any row with a nan
#
df.dropna(axis=0, inplace=True)
#
# INFO: you might try setting the nan values to the
# mean value of that column, the mean should only be calculated for
# the specific class rather than across all classes, now that you
# have the labels
#
# Copy the labels out of the dset into variable 'y' then Remove
# them from X. Encode the labels -- canadian:0, kama:1, and rosa:2
#
labels = df.wheat_type.copy() # copy “y” values out
df.drop(['wheat_type'], axis=1, inplace=True) # drop output column
labels = labels.map({'canadian':0, 'kama':1, 'rosa':2})
#
# Split data into test / train sets
#
X_train, X_test, y_train, y_test = train_test_split(df, labels, test_size=0.3,
random_state=7)
import matplotlib as mpl
import matplotlib.pyplot as plt
def drawPlots(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
# If this line throws an error, use plt.style.use('ggplot') instead
mpl.style.use('ggplot') # Look Pretty
padding = 3
resolution = 0.5
max_2d_score = 0
score = 0
y_colors = ['#ff0000', '#00ff00', '#0000ff']
my_cmap = mpl.colors.ListedColormap(['#ffaaaa', '#aaffaa', '#aaaaff'])
colors = [y_colors[i] for i in y_train]
num_columns = len(X_train.columns)
fig = plt.figure()
fig.canvas.set_window_title(wintitle)
cnt = 0
for col in range(num_columns):
for row in range(num_columns):
# Easy out
if FAST_DRAW and col > row:
cnt += 1
continue
ax = plt.subplot(num_columns, num_columns, cnt + 1)
plt.xticks(())
plt.yticks(())
# Intersection:
if col == row:
plt.text(0.5, 0.5, X_train.columns[row], verticalalignment='center', horizontalalignment='center', fontsize=12)
cnt += 1
continue
# Only select two features to display, then train the model
X_train_bag = X_train.iloc[:, [row,col]]
X_test_bag = X_test.iloc[:, [row,col]]
model.fit(X_train_bag, y_train)
# Create a mesh to plot in
x_min, x_max = X_train_bag.iloc[:, 0].min() - padding, X_train_bag.iloc[:, 0].max() + padding
y_min, y_max = X_train_bag.iloc[:, 1].min() - padding, X_train_bag.iloc[:, 1].max() + padding
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# Plot Boundaries
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Prepare the contour
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=my_cmap, alpha=0.8)
plt.scatter(X_train_bag.iloc[:, 0], X_train_bag.iloc[:, 1], c=colors, alpha=0.5)
score = round(model.score(X_test_bag, y_test) * 100, 3)
#plt.text(0.5, 0, "Score: {0}".format(score), transform = ax.transAxes, horizontalalignment='center', fontsize=8)
plt.text(0.5, 0, f"Score: {score}", transform = ax.transAxes, horizontalalignment='center', fontsize=8)
max_2d_score = score if score > max_2d_score else max_2d_score
cnt += 1
print ("Max 2D Score: ", max_2d_score)
fig.set_tight_layout(True)
import time
def benchmark(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
print ('\n\n' + wintitle + ' Results')
# the only purpose of doing many iterations was to get a more accurate
# count of the time it took for each classifier
s = time.time()
for i in range(iterations):
#
# train the classifier on the training data / labels:
#
model.fit(X_train, y_train)
#print ("{0} Iterations Training Time: ".format(iterations), time.time() - s)
print(f"{iterations} Iterations Training Time: {time.time() - s:.3f}")
scoreBch = 0
s = time.time()
for i in range(iterations):
#
# score the classifier on the testing data / labels:
#
scoreBch = model.score(X_test, y_test)
#print ("{0} Iterations Scoring Time: ".format(iterations), time.time() - s)
print(f"{iterations} Iterations Scoring Time: {time.time() - s:.3f}")
print ("High-Dimensionality Score: ", round((scoreBch*100), 3))
#
# Create an KNeighbors classifier
#
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
benchmark(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
drawPlots(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
#
# Create an SVM classifier
# Use a linear kernel, and set the C value to C (see initial parameters)
#
from sklearn.svm import SVC
svc = SVC(kernel='linear', C=C)
benchmark(svc, X_train, X_test, y_train, y_test, 'SVC')
drawPlots(svc, X_train, X_test, y_train, y_test, 'SVC')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's have a look at these bitmaps of handwritten digits
Step2: Train the SVM Classifier
Step3: Checkpoint
Step4: The model's prediction was correct.
Step5: visual confirmation of accuracy
Step6: Score
Step7: Not bad, the model was correct more than 96% of the time!
Step8: That is slightly better, but we can try a different, more performant kernel
Step9: Now it's better than the UPS' score!
Step10: We can apply different scaler for the pre-processing.
Step11: Same for the dimensionality reduction
Step12: Train the SVM classifier.
Step13: Best score was with C=0.85 and gamma=0.088
Step14: Read the data
Step15: Data pre-processing
Step16: Split into training and testing data sets
Step17: Utility function
Step18: Utility function
Step19: Train the Knn classifier
Step20: And get its benchmark
Step21: Train the SVM Classifier
|
1,956
|
<ASSISTANT_TASK:>
Python Code:
from fastai.collab import *
from fastai.tabular import *
user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
data = CollabDataBunch.from_df(ratings, seed=42)
y_range = [0,5.5]
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
path=Config.data_path()/'ml-100k'
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
data = CollabDataBunch.from_df(rating_movie, seed=42, valid_pct=0.1, item_name=title)
data.show_batch()
y_range = [0,5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
learn.save('dotprod')
learn.load('dotprod');
learn.model
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
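# learn.bias returns one learned scalar per movie: an offset independent of the
# user/movie embedding factors, so strongly negative values flag titles disliked across users.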
mean_ratings = rating_movie.groupby(title)['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Collaborative filtering example
Step2: That's all we need to create and train a model
Step3: Movielens 100k
Step4: Here are some benchmarks on the same dataset for the popular Librec system for collaborative filtering. They show best results based on an RMSE of 0.91, which corresponds to an MSE of 0.91**2 = 0.83.
Step5: Movie bias
Step6: Movie weights
|
1,957
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.8
b['ecc'] = 0.1
b['irrad_method'] = 'none'
b.add_dataset('orb', times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
b.set_value('incl@orbit', 90)
b.run_compute(model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(model='run_with_incl_80')
afig, mplfig = b.plot(show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight_marker='s', highlight_color='g', highlight_ms=20, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight=False, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0.5, uncover=True, show=True)
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(show=True)
afig, mplfig = b.plot(component='primary', kind='orb', model='run_with_incl_80', show=True)
afig, mplfig = b.plot('primary@orb@run_with_incl_80', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', y='vus', show=True)
b['orb01@primary@run_with_incl_80'].qualifiers
afig, mplfig = b['lc01@dataset'].plot(x='phases', z=0, show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xunit='AU', yunit='AU', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlabel='X POS', ylabel='Z POS', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlim=(-2,2), show=True)
afig, mplfig = b['lc01@dataset'].plot(yerror='sigmas', show=True)
afig, mplfig = b['lc01@dataset'].plot(yerror=None, show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(c='r', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', cmap='spring', show=True)
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', draw_sidebars=True, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True)
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(label='primary')
afig, mplfig = b['secondary@orb@run_with_incl_80'].plot(label='secondary', legend=True, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True, legend_kwargs={'loc': 'center', 'facecolor': 'r'})
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(linestyle=':', s=0.1, show=True)
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0, projection='3d', show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This first line is only necessary for ipython noteboooks - it allows the plots to be shown on this page instead of in interactive mode
Step2: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step3: And we'll attach some dummy datasets. See Datasets for more details.
Step4: And run the forward models. See Computing Observables for more details.
Step5: Showing and Saving
Step6: Any call to plot returns 2 objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
Step7: Time (highlight and uncover)
Step8: To change the style of the "highlighted" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.
Step9: To disable highlighting, simply send highlight=False
Step10: Uncover
Step11: Selecting Datasets
Step12: Selecting Arrays
Step13: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
Step14: For more information on each of the available arrays, see the relevant tutorial on that dataset method
Step15: Units
Step16: WARNING
Step17: Axes Limits
Step18: Errorbars
Step19: To disable the errorbars, simply set yerror=None.
Step20: Colors
Step21: In addition, you can point to an array in the dataset to use as color.
Step22: Choosing colors works slightly differently for meshes (ie you can set fc for facecolor and ec for edgecolor). For more details, see the tutorial on the MESH dataset.
Step23: Adding a Colorbar
Step24: Labels and Legends
Step25: The legend labels are generated automatically, but can be overriden by passing a string to the label keyword.
Step26: To override the position or styling of the legend, you can pass valid options to legend_kwargs which will be passed on to plt.legend
Step27: Other Plotting Options
Step28: 3D Axes
|
1,958
|
<ASSISTANT_TASK:>
Python Code:
import os
import logging
from datetime import datetime
import numpy as np
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow.keras as keras
from google.cloud import aiplatform as vertex_ai
from google.cloud.aiplatform import hyperparameter_tuning as hp_tuning
from src.common import features, datasource_utils
from src.model_training import data, model, defaults, trainer, exporter
from src.preprocessing import etl
logging.getLogger().setLevel(logging.INFO)
tf.get_logger().setLevel('INFO')
print(f"TensorFlow: {tf.__version__}")
print(f"TensorFlow Transform: {tft.__version__}")
PROJECT = '[your-project-id]' # Change to your project id.
REGION = 'us-central1' # Change to your region.
BUCKET = '[your-bucket-name]' # Change to your bucket name.
SERVICE_ACCOUNT = "[your-service-account]"
if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = shell_output[0]
if SERVICE_ACCOUNT == "" or SERVICE_ACCOUNT is None or SERVICE_ACCOUNT == "[your-service-account]":
# Get your GCP service account from gcloud
shell_output = !gcloud config list --format 'value(core.account)' 2>/dev/null
SERVICE_ACCOUNT = shell_output[0]
if BUCKET == "" or BUCKET is None or BUCKET == "[your-bucket-name]":
# Default the bucket name to the GCP project id
BUCKET = PROJECT
# Try to create the bucket if it doesn't exist
! gsutil mb -l $REGION gs://$BUCKET
print("")
PARENT = f"projects/{PROJECT}/locations/{REGION}"
print("Project ID:", PROJECT)
print("Region:", REGION)
print("Bucket name:", BUCKET)
print("Service Account:", SERVICE_ACCOUNT)
print("Vertex API Parent URI:", PARENT)
VERSION = 'v01'
DATASET_DISPLAY_NAME = 'chicago-taxi-tips'
MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}'
WORKSPACE = f'gs://{BUCKET}/{DATASET_DISPLAY_NAME}'
EXPERIMENT_ARTIFACTS_DIR = os.path.join(WORKSPACE, 'experiments')
RAW_SCHEMA_LOCATION = 'src/raw_schema/schema.pbtxt'
TENSORBOARD_DISPLAY_NAME = f'tb-{DATASET_DISPLAY_NAME}'
EXPERIMENT_NAME = f'{MODEL_DISPLAY_NAME}'
tensorboard_resource = vertex_ai.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME)
tensorboard_resource_name = tensorboard_resource.gca_resource.name
print("TensorBoard resource name:", tensorboard_resource_name)
REMOVE_EXPERIMENT_ARTIFACTS = False
if tf.io.gfile.exists(EXPERIMENT_ARTIFACTS_DIR) and REMOVE_EXPERIMENT_ARTIFACTS:
print("Removing previous experiment artifacts...")
tf.io.gfile.rmtree(EXPERIMENT_ARTIFACTS_DIR)
if not tf.io.gfile.exists(EXPERIMENT_ARTIFACTS_DIR):
print("Creating new experiment artifacts directory...")
tf.io.gfile.mkdir(EXPERIMENT_ARTIFACTS_DIR)
print("Workspace is ready.")
print("Experiment directory:", EXPERIMENT_ARTIFACTS_DIR)
vertex_ai.init(
project=PROJECT,
location=REGION,
staging_bucket=BUCKET,
experiment=EXPERIMENT_NAME
)
run_id = f"run-local-{datetime.now().strftime('%Y%m%d%H%M%S')}"
vertex_ai.start_run(run_id)
EXPERIMENT_RUN_DIR = os.path.join(EXPERIMENT_ARTIFACTS_DIR, EXPERIMENT_NAME, run_id)
print("Experiment run directory:", EXPERIMENT_RUN_DIR)
EXPORTED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'exported_data')
TRANSFORMED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'transformed_data')
TRANSFORM_ARTIFACTS_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'transform_artifacts')
ML_USE = 'UNASSIGNED'
LIMIT = 5120
raw_data_query = datasource_utils.get_training_source_query(
project=PROJECT,
region=REGION,
dataset_display_name=DATASET_DISPLAY_NAME,
ml_use=ML_USE,
limit=LIMIT
)
print(raw_data_query)
args = {
'runner': 'DirectRunner',
'raw_data_query': raw_data_query,
'write_raw_data': True,
'exported_data_prefix': EXPORTED_DATA_PREFIX,
'transformed_data_prefix': TRANSFORMED_DATA_PREFIX,
'transform_artifact_dir': TRANSFORM_ARTIFACTS_DIR,
'temporary_dir': os.path.join(WORKSPACE, 'tmp'),
'gcs_location': f'gs://{BUCKET}/bq_tmp',
'project': PROJECT
}
vertex_ai.log_params(args)
print("Data preprocessing started...")
etl.run_transform_pipeline(args)
print("Data preprocessing completed.")
!gsutil ls {EXPERIMENT_RUN_DIR}
LOG_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'logs')
EXPORT_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'model')
tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR)
transform_feature_spec = tft_output.transformed_feature_spec()
transform_feature_spec
train_data_file_pattern = os.path.join(TRANSFORMED_DATA_PREFIX,'train/data-*.gz')
eval_data_file_pattern = os.path.join(TRANSFORMED_DATA_PREFIX,'eval/data-*.gz')
for input_features, target in data.get_dataset(
train_data_file_pattern, transform_feature_spec, batch_size=3).take(1):
for key in input_features:
print(f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}")
print(f"target: {target.numpy().tolist()}")
hyperparams = {
"hidden_units": [64, 32]
}
hyperparams = defaults.update_hyperparams(hyperparams)
hyperparams
classifier = model.create_binary_classifier(tft_output, hyperparams)
classifier.summary()
keras.utils.plot_model(
classifier,
show_shapes=True,
show_dtype=True
)
classifier(input_features)
logging.getLogger().setLevel(logging.INFO)
hyperparams["learning_rate"] = 0.001
hyperparams["num_epochs"] = 5
hyperparams["batch_size"] = 512
vertex_ai.log_params(hyperparams)
classifier = trainer.train(
train_data_dir=train_data_file_pattern,
eval_data_dir=eval_data_file_pattern,
tft_output_dir=TRANSFORM_ARTIFACTS_DIR,
hyperparams=hyperparams,
log_dir=LOG_DIR,
)
val_loss, val_accuracy = trainer.evaluate(
model=classifier,
data_dir=eval_data_file_pattern,
raw_schema_location=RAW_SCHEMA_LOCATION,
tft_output_dir=TRANSFORM_ARTIFACTS_DIR,
hyperparams=hyperparams,
)
vertex_ai.log_metrics(
{"val_loss": val_loss, "val_accuracy": val_accuracy})
!tb-gcp-uploader --tensorboard_resource_name={tensorboard_resource_name} \
--logdir={LOG_DIR} \
--experiment_name={EXPERIMENT_NAME} --one_shot=True
saved_model_dir = os.path.join(EXPORT_DIR)
exporter.export_serving_model(
classifier=classifier,
serving_model_dir=saved_model_dir,
raw_schema_location=RAW_SCHEMA_LOCATION,
tft_output_dir=TRANSFORM_ARTIFACTS_DIR,
)
!saved_model_cli show --dir={saved_model_dir} --tag_set=serve --signature_def=serving_tf_example
!saved_model_cli show --dir={saved_model_dir} --tag_set=serve --signature_def=serving_default
serving_model = tf.saved_model.load(saved_model_dir)
print("Saved model is loaded.")
# Test the serving_tf_example with TF Examples
file_names = tf.data.TFRecordDataset.list_files(EXPORTED_DATA_PREFIX + '/data-*.tfrecord')
for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1):
predictions = serving_model.signatures['serving_tf_example'](batch)
for key in predictions:
print(f"{key}: {predictions[key]}")
# Test the serving_default with feature dictionary
import tensorflow_data_validation as tfdv
from tensorflow_transform.tf_metadata import schema_utils
raw_schema = tfdv.load_schema_text(RAW_SCHEMA_LOCATION)
raw_feature_spec = schema_utils.schema_as_feature_spec(raw_schema).feature_spec
instance = {
"dropoff_grid": "POINT(-87.6 41.9)",
"euclidean": 2064.2696,
"loc_cross": "",
"payment_type": "Credit Card",
"pickup_grid": "POINT(-87.6 41.9)",
"trip_miles": 1.37,
"trip_day": 12,
"trip_hour": 6,
"trip_month": 2,
"trip_day_of_week": 4,
"trip_seconds": 555,
}
for feature_name in instance:
dtype = raw_feature_spec[feature_name].dtype
instance[feature_name] = tf.constant([[instance[feature_name]]], dtype)
predictions = serving_model.signatures['serving_default'](**instance)
for key in predictions:
print(f"{key}: {predictions[key].numpy()}")
vertex_ai.init(
project=PROJECT,
staging_bucket=BUCKET,
experiment=EXPERIMENT_NAME)
run_id = f"run-gcp-{datetime.now().strftime('%Y%m%d%H%M%S')}"
vertex_ai.start_run(run_id)
EXPERIMENT_RUN_DIR = os.path.join(EXPERIMENT_ARTIFACTS_DIR, EXPERIMENT_NAME, run_id)
print("Experiment run directory:", EXPERIMENT_RUN_DIR)
EXPORTED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'exported_data')
TRANSFORMED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'transformed_data')
TRANSFORM_ARTIFACTS_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'transform_artifacts')
ML_USE = 'UNASSIGNED'
LIMIT = 1000000
raw_data_query = datasource_utils.get_training_source_query(
project=PROJECT,
region=REGION,
dataset_display_name=DATASET_DISPLAY_NAME,
ml_use=ML_USE,
limit=LIMIT
)
etl_job_name = f"etl-{MODEL_DISPLAY_NAME}-{run_id}"
args = {
'job_name': etl_job_name,
'runner': 'DataflowRunner',
'raw_data_query': raw_data_query,
'exported_data_prefix': EXPORTED_DATA_PREFIX,
'transformed_data_prefix': TRANSFORMED_DATA_PREFIX,
'transform_artifact_dir': TRANSFORM_ARTIFACTS_DIR,
'write_raw_data': False,
'temporary_dir': os.path.join(WORKSPACE, 'tmp'),
'gcs_location': os.path.join(WORKSPACE, 'bq_tmp'),
'project': PROJECT,
'region': REGION,
'setup_file': './setup.py'
}
vertex_ai.log_params(args)
logging.getLogger().setLevel(logging.ERROR)
print("Data preprocessing started...")
etl.run_transform_pipeline(args)
print("Data preprocessing completed.")
!gsutil ls {EXPERIMENT_RUN_DIR}
LOG_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'logs')
EXPORT_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'model')
!python -m src.model_training.task \
--model-dir={EXPORT_DIR} \
--log-dir={LOG_DIR} \
--train-data-dir={TRANSFORMED_DATA_PREFIX}/train/* \
--eval-data-dir={TRANSFORMED_DATA_PREFIX}/eval/* \
--tft-output-dir={TRANSFORM_ARTIFACTS_DIR} \
--num-epochs=3 \
--hidden-units=32,32 \
--experiment-name={EXPERIMENT_NAME} \
--run-name={run_id} \
--project={PROJECT} \
--region={REGION} \
--staging-bucket={BUCKET}
TRAINER_PACKAGE_DIR = os.path.join(WORKSPACE, 'trainer_packages')
TRAINER_PACKAGE_NAME = f'{MODEL_DISPLAY_NAME}_trainer'
print("Trainer package upload location:", TRAINER_PACKAGE_DIR)
!rm -r src/__pycache__/
!rm -r src/.ipynb_checkpoints/
!rm -r src/raw_schema/.ipynb_checkpoints/
!rm -f {TRAINER_PACKAGE_NAME}.tar {TRAINER_PACKAGE_NAME}.tar.gz
!mkdir {TRAINER_PACKAGE_NAME}
!cp setup.py {TRAINER_PACKAGE_NAME}/
!cp -r src {TRAINER_PACKAGE_NAME}/
!tar cvf {TRAINER_PACKAGE_NAME}.tar {TRAINER_PACKAGE_NAME}
!gzip {TRAINER_PACKAGE_NAME}.tar
!gsutil cp {TRAINER_PACKAGE_NAME}.tar.gz {TRAINER_PACKAGE_DIR}/
!rm -r {TRAINER_PACKAGE_NAME}
!rm -r {TRAINER_PACKAGE_NAME}.tar.gz
TRAIN_RUNTIME = 'tf-cpu.2-5'
TRAIN_IMAGE = f"us-docker.pkg.dev/vertex-ai/training/{TRAIN_RUNTIME}:latest"
print("Training image:", TRAIN_IMAGE)
num_epochs = 10
learning_rate = 0.001
hidden_units = "64,64"
trainer_args = [
f'--train-data-dir={TRANSFORMED_DATA_PREFIX + "/train/*"}',
f'--eval-data-dir={TRANSFORMED_DATA_PREFIX + "/eval/*"}',
f'--tft-output-dir={TRANSFORM_ARTIFACTS_DIR}',
f'--num-epochs={num_epochs}',
f'--learning-rate={learning_rate}',
f'--hidden-units={hidden_units}',
f'--project={PROJECT}',
f'--region={REGION}',
f'--staging-bucket={BUCKET}',
f'--experiment-name={EXPERIMENT_NAME}'
]
package_uri = os.path.join(TRAINER_PACKAGE_DIR, f'{TRAINER_PACKAGE_NAME}.tar.gz')
worker_pool_specs = [
{
"replica_count": 1,
"machine_spec": {
"machine_type": 'n1-standard-4',
"accelerator_count": 0
},
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [package_uri],
"python_module": "src.model_training.task",
"args": trainer_args,
}
}
]
print("Submitting a custom training job...")
training_job_display_name = f"{TRAINER_PACKAGE_NAME}_{run_id}"
training_job = vertex_ai.CustomJob(
display_name=training_job_display_name,
worker_pool_specs=worker_pool_specs,
base_output_dir=EXPERIMENT_RUN_DIR,
)
training_job.run(
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
sync=True
)
!gsutil ls {EXPORT_DIR}
explanation_config = features.generate_explanation_config()
explanation_config
SERVING_RUNTIME='tf2-cpu.2-5'
SERVING_IMAGE = f"us-docker.pkg.dev/vertex-ai/prediction/{SERVING_RUNTIME}:latest"
print("Serving image:", SERVING_IMAGE)
explanation_metadata = vertex_ai.explain.ExplanationMetadata(
inputs=explanation_config["inputs"],
outputs=explanation_config["outputs"],
)
explanation_parameters = vertex_ai.explain.ExplanationParameters(
explanation_config["params"]
)
vertex_model = vertex_ai.Model.upload(
display_name=MODEL_DISPLAY_NAME,
artifact_uri=EXPORT_DIR,
serving_container_image_uri=SERVING_IMAGE,
parameters_schema_uri=None,
instance_schema_uri=None,
explanation_metadata=explanation_metadata,
explanation_parameters=explanation_parameters,
labels={
'dataset_name': DATASET_DISPLAY_NAME,
'experiment': run_id
}
)
vertex_model.gca_resource
experiment_df = vertex_ai.get_experiment_df()
experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME]
experiment_df.T
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/vertex-ai/locations{REGION}/experiments/{EXPERIMENT_NAME}/metrics?project={PROJECT}"
)
metric_spec = {
'ACCURACY': 'maximize'
}
parameter_spec = {
'learning-rate': hp_tuning.DoubleParameterSpec(min=0.0001, max=0.01, scale='log'),
'hidden-units': hp_tuning.CategoricalParameterSpec(values=["32,32", "64,64", "128,128"])
}
tuning_job_display_name = f"hpt_{TRAINER_PACKAGE_NAME}_{run_id}"
hp_tuning_job = vertex_ai.HyperparameterTuningJob(
display_name=tuning_job_display_name,
custom_job=training_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=4,
parallel_trial_count=2,
search_algorithm=None # Bayesian optimization.
)
print("Submitting a hyperparameter tunning job...")
hp_tuning_job.run(
service_account=SERVICE_ACCOUNT,
tensorboard=tensorboard_resource_name,
restart_job_on_worker_restart=False,
sync=True,
)
hp_tuning_job.trials
best_trial = sorted(
hp_tuning_job.trials,
key=lambda trial: trial.final_measurement.metrics[0].value,
reverse=True
)[0]
print("Best trial ID:", best_trial.id)
print("Validation Accuracy:", best_trial.final_measurement.metrics[0].value)
print("Hyperparameter Values:")
for parameter in best_trial.parameters:
print(f" - {parameter.parameter_id}:{parameter.value}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Google Cloud project
Step2: Set configurations
Step3: Create Vertex TensorBoard instance
Step4: Initialize workspace
Step5: Initialize Vertex AI experiment
Step6: 1. Preprocess the data using Apache Beam
Step7: Get Source Query from Managed Dataset
Step8: Test Data Preprocessing Locally
Step9: 2. Train a custom model locally using a Keras
Step10: Read transformed data
Step11: Create hyperparameters
Step12: Create and test model inputs and outputs
Step13: Train the model locally.
Step14: Export the trained model
Step15: Inspect model serving signatures
Step16: Test the exported SavedModel
Step17: Start a new Vertex AI experiment run
Step18: 3. Submit a Data Processing Job to Dataflow
Step19: 4. Submit a Custom Training Job to Vertex AI
Step20: Test the training task locally
Step21: Prepare training package
Step22: Prepare the training job
Step23: Submit the training job
Step24: 5. Upload exported model to Vertex AI Models
Step25: Generate the Explanation metadata
Step26: Upload model
Step27: 6. Extract experiment run parameters
Step28: 7. Submit a Hyperparameter Tuning Job to Vertex AI
Step29: Submit the hyperparameter tuning job
Step30: Retrieve trial results
|
1,959
|
<ASSISTANT_TASK:>
Python Code:
import collections
Person = collections.namedtuple('Person', 'name age')
bob = Person(name='Bob', age=30)
print('\nRepresentation:', bob)
jane = Person(name='Jane', age=29)
print('\nField by name:', jane.name)
print('\nFields by index:')
for p in [bob, jane]:
print('{} is {} years old'.format(*p))
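# Because a namedtuple hashes like a plain tuple, instances can serve as dictionary keys;
# a minimal illustration (the ages mapping below is made up for this example):
ages = {bob: 30, jane: 29}
print('\nUsed as dict key:', ages[Person(name='Bob', age=30)])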
import collections
Person = collections.namedtuple('Person', 'name age')
pat = Person(name='Pat', age=12)
print('\nRepresentation:', pat)
pat.age = 21
import collections
try:
collections.namedtuple('Person', 'name class age')
except ValueError as err:
print(err)
try:
collections.namedtuple('Person', 'name age age')
except ValueError as err:
print(err)
import collections
with_class = collections.namedtuple(
'Person', 'name class age',
rename=True)
print(with_class._fields)
two_ages = collections.namedtuple(
'Person', 'name age age',
rename=True)
print(two_ages._fields)
import collections
Person = collections.namedtuple('Person', 'name age')
bob = Person(name='Bob', age=30)
print('Representation:', bob)
print('Fields:', bob._fields)
import collections
Person = collections.namedtuple('Person', 'name age')
bob = Person(name='Bob', age=30)
print('Representation:', bob)
print('As Dictionary:', bob._asdict())
import collections
Person = collections.namedtuple('Person', 'name age')
bob = Person(name='Bob', age=30)
print('\nBefore:', bob)
bob2 = bob._replace(name='Robert')
print('After:', bob2)
print('Same?:', bob is bob2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just like a regular tuple, a namedtuple is immutable. This restriction allows tuple instances to have a consistent hash value, which makes it possible to use them as keys in dictionaries and to be included in sets.
Step2: Invalid Field Names
Step3: In situations where a namedtuple is created based on values outside the control of the program (such as to represent the rows returned by a database query, where the schema is not known in advance), the rename option should be set to True so the invalid fields are renamed.
Step4: Special Attributes
Step5: namedtuple instances can be converted to OrderedDict instances using _asdict().
Step6: The _replace() method builds a new instance, replacing the values of some fields in the process.
|
1,960
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
df1 = pd.read_csv('../data/df1',index_col=0)
df2 = pd.read_csv('../data/df2')
df1['A'].hist()
import matplotlib.pyplot as plt
plt.style.use('ggplot')
df1['A'].hist()
plt.style.use('bmh')
df1['A'].hist()
plt.style.use('dark_background')
df1['A'].hist()
plt.style.use('fivethirtyeight')
df1['A'].hist()
plt.style.use('ggplot')
df2.plot.area(alpha=0.4)
df2.head()
df2.plot.bar()
df2.plot.bar(stacked=True)
df1['A'].plot.hist(bins=50)
df1.plot.line(x=df1.index,y='B',figsize=(12,3),lw=1)
df1.plot.scatter(x='A',y='B')
df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm')
df1.plot.scatter(x='A',y='B',s=df1['C']*100)
df2.plot.box() # You can also pass the by= argument to group the data
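# A hedged sketch of the by= grouping mentioned above: DataFrame.boxplot accepts a grouping
# column, so here the 'A' values are boxed by the sign of 'B' (the grouping is invented
# purely for illustration).
df1.assign(B_positive=df1['B'] > 0).boxplot(column='A', by='B_positive')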
df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])
df.plot.hexbin(x='a',y='b',gridsize=25,cmap='Oranges')
df2['a'].plot.kde()
df2.plot.density()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data
Step2: Style sheets
Step3: Using styles
Step4: Now your plot will be displayed as follows
Step5: For now we will use the ggplot style
Step6: Bar plots
Step7: Histograms
Step8: Line plots
Step9: Scatter plots
Step10: You can use c to change the color to display and cmap to change the range of colors displayed
Step11: You can also use s to set the point size from another column. The s parameter must be an array, not just the name of the column
Step12: Box plots
Step13: Hexagonal bin plot
Step14: Kernel Density Estimation (KDE)
|
1,961
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
train_data,test_data = sales.random_split(.8,seed=0)
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
def simple_linear_regression(input_feature, output):
Xi = input_feature
Yi = output
N = len(Xi)
# compute the mean of input_feature and output
Ymean = Yi.mean()
Xmean = Xi.mean()
# compute the product of the output and the input_feature and its mean
SumYiXi = (Yi * Xi).sum()
YiXiByN = (Yi.sum() * Xi.sum()) / N
# compute the squared value of the input_feature and its mean
XiSq = (Xi * Xi).sum()
XiXiByN = (Xi.sum() * Xi.sum()) / N
# use the formula for the slope
slope = (SumYiXi - YiXiByN) / (XiSq - XiXiByN)
# use the formula for the intercept
intercept = Ymean - (slope * Xmean)
return (intercept, slope)
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = intercept + (slope * input_feature)
return predicted_values
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predicted_values = intercept + (slope * input_feature)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = output - predicted_values
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept)/slope
return estimated_feature
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
# Compute RSS when using bedrooms on TEST data:
sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Bedrooms is : ' + str(rss_prices_on_bedrooms)
# Compute RSS when using squarefeet on TEST data:
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load house sales data
Step2: Split data into training and testing
Step3: Useful SFrame summary functions
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Step9: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Step10: Residual Sum of Squares
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Step13: Predict the squarefeet given price
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Step15: New Model
Step16: Test your Linear Regression Algorithm
|
1,962
|
<ASSISTANT_TASK:>
Python Code:
from fretbursts import *
sns = init_notebook()
filename = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(filename)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7)
d.burst_search()
ph = d.get_ph_times() # all the recorded photons
ph_dd = d.get_ph_times(ph_sel=Ph_sel(Dex='Dem')) # donor excitation, donor emission
ph_d = d.get_ph_times(ph_sel=Ph_sel(Dex='DAem')) # donor excitation, donor+acceptor emission
ph_aa = d.get_ph_times(ph_sel=Ph_sel(Aex='Aem')) # acceptor excitation, acceptor emission
mask_dd = d.get_ph_mask(ph_sel=Ph_sel(Dex='Dem')) # donor excitation, donor emission
mask_d = d.get_ph_mask(ph_sel=Ph_sel(Dex='DAem')) # donor excitation, donor+acceptor emission
mask_aa = d.get_ph_mask(ph_sel=Ph_sel(Aex='Aem')) # acceptor excitation, acceptor emission
ph.size, mask_dd.size, mask_d.size, mask_aa.size
mask_d.sum()
ph[mask_d]
bursts = d.mburst[0]
nd = d.nd[0]
na = d.na[0]
naa = d.naa[0]
E = d.E[0]
S = d.S[0]
bursts
firstburst = bursts[0]
firstburst
bursts.istart
firstburst.istart
ph[firstburst.istart], firstburst.start
ph[firstburst.istop], firstburst.stop
d.burst_search(computefret=False)
d.calc_fret(count_ph=True, corrections=False)
ds = d.select_bursts(select_bursts.size, th1=30, computefret=False)
nd = ds.nd[0] # Donor-detector counts during donor excitation
na = ds.na[0] # Acceptor-detector counts during donor excitation
naa = ds.naa[0] # Acceptor-detector counts during acceptor excitation
E = ds.E[0] # FRET efficiency or Proximity Ratio
S = ds.S[0] # Stoichiometry, as defined in µs-ALEX experiments
nd
from fretbursts.phtools.burstsearch import Burst, Bursts
times = d.ph_times_m[0] # timestamps array
ds_fused = ds.fuse_bursts(ms=0)
bursts = ds_fused.mburst[0]
print('\nNumber of bursts:', bursts.num_bursts)
time_bin = 0.5e-3 # 0.5 ms
time_bin_clk = time_bin / ds.clk_p
sub_bursts_list = []
for burst in bursts:
# Compute binning of current bursts
bins = np.arange(burst.start, burst.stop + time_bin_clk, time_bin_clk)
counts, _ = np.histogram(times[burst.istart:burst.istop+1], bins)
# From `counts` in each bin, find start-stop times and indexes (sub-burst).
# Note that start and stop are the min and max timestamps in the bin,
# therefore they are not on the bin edges. Also the burst width is not
# exactly equal to the bin width.
sub_bursts_l = []
sub_start = burst.start
sub_istart = burst.istart
for count in counts:
# Let's skip bins with 0 photons
if count == 0:
continue
sub_istop = sub_istart + count - 1
sub_bursts_l.append(Burst(istart=sub_istart, istop=sub_istop,
start=sub_start, stop=times[sub_istop]))
sub_istart += count
sub_start = times[sub_istart]
sub_bursts = Bursts.from_list(sub_bursts_l)
assert sub_bursts.num_bursts > 0
assert sub_bursts.width.max() < time_bin_clk
sub_bursts_list.append(sub_bursts)
len(sub_bursts_list)
ds_fused.num_bursts
print('Sub-bursts from burst 0:')
sub_bursts_list[0]
iburst = 10
print('Sub-bursts from burst %d:' % iburst)
sub_bursts_list[iburst]
bursts = sub_bursts_list[0]
bursts
mask_dd = d.get_ph_mask(ph_sel=Ph_sel(Dex='Dem')) # donor excitation, donor emission
mask_ad = d.get_ph_mask(ph_sel=Ph_sel(Dex='Aem')) # donor excitation, acceptor emission
mask_aa = d.get_ph_mask(ph_sel=Ph_sel(Aex='Aem')) # acceptor excitation, acceptor emission
from fretbursts.phtools.burstsearch import count_ph_in_bursts
counts_dd = count_ph_in_bursts(bursts, mask_dd)
counts_dd
counts_ad = count_ph_in_bursts(bursts, mask_ad)
counts_aa = count_ph_in_bursts(bursts, mask_aa)
counts_ad / (counts_dd + counts_ad)
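# A minimal sketch extending the proximity-ratio computation to the sub-bursts of every
# burst (variable names are ours; time bins that happen to contain only
# acceptor-excitation photons will yield NaN/inf values here).
E_sub_list = []
for sub_bursts_i in sub_bursts_list:
    dd = count_ph_in_bursts(sub_bursts_i, mask_dd)
    ad = count_ph_in_bursts(sub_bursts_i, mask_ad)
    E_sub_list.append(ad / (dd + ad))
len(E_sub_list)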
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the timestamps
Step2: This are streams of all timestamps (both inside and outside the bursts).
Step3: Masks are arrays of booleans (True or False values) which are True
Step4: Masks can be used to count photons in one stream
Step5: and to obtain the timestamps for one stream
Step6: Note that the arrays ph[mask_d] and ph_d are equal. This is an important point to understand.
Step7: All previous variables are numpy arrays, except for bursts which is
Step8: Indexing bursts we can access a single burst
Step9: The first two "columns" (both in bursts or firstburst) are the index of
Step10: Note that ph[firstburst.istart] is equal to firstburst.start
Step11: The same holds for stop
Step12: Note that bursts is a Bursts object (plural, a bursts-set)
Step13: Note that if you select bursts, you also need to use computefret=False
Step14: Note that the burst counts are integer values, confirming that the background
Step15: We start fusing bursts with separation <= 0 milliseconds,
Step16: Now we can slice each burst using a constant time bin
Step17: The list sub_bursts_list has one set of sub-bursts per each original burst
Step18: Each set of sub-bursts is a usual Bursts object
Step19: Photon counts in custom bursts
Step20: How do we count the donor and acceptor photons in these bursts?
Step21: Next, we use the function count_ph_in_bursts
Step22: counts_dd contains the raw counts in each burst (in bursts)
Step23: With these values, we can compute, for example, the uncorrected proximity ratio for each sub-burst
|
1,963
|
<ASSISTANT_TASK:>
Python Code:
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import json
from collections import defaultdict
from datetime import datetime, date
from random import randint
from networkx.readwrite.json_graph import node_link_data
%matplotlib inline
G = nx.read_gpickle('20150902_all_ird Final Graph.pkl')
G.nodes(data=True)[0]
pH1N1s = [n for n, d in G.nodes(data=True) \
if d['reassortant'] \
and d['subtype'] == 'H1N1' \
and d['collection_date'].year >= 2009 \
and d['host_species'] in ['Human', 'Swine'] \
and len(G.predecessors(n)) > 0]
len(pH1N1s)
pH1N1s[0:5]
def get_predecessors(nodes, num_degrees):
Gets the predecessors of the nodes, up to num_degrees specified.
assert isinstance(num_degrees, int), "num_degrees must be an integer."
ancestors = defaultdict(list) # a dictionary of number of degrees up and a list of nodes.
degree = 0
while degree <= num_degrees:
degree += 1
if degree == 1:
for n in nodes:
ancestors[degree].extend(G.predecessors(n))
else:
for n in ancestors[degree - 1]:
ancestors[degree].extend(G.predecessors(n))
return ancestors
ancestors = get_predecessors(pH1N1s, 3)
ancestors_subtypes = defaultdict(set)
for deg, parents in ancestors.items():
for parent in parents:
ancestors_subtypes[deg].add(G.node[parent]['subtype'])
ancestors_subtypes
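# A quick summary sketch: number of ancestor nodes collected at each degree of separation
# (duplicates are possible because different strains can share predecessors).
{deg: len(parents) for deg, parents in ancestors.items()}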
def collate_nodes_of_interest(nodes, ancestors_dict):
Given a starting list of nodes and a dictionary of its ancestors and their degrees of separation
from the starting list of nodes, return a subgraph comprising of those nodes.
nodes_of_interest = []
nodes_of_interest.extend(nodes)
for k in ancestors_dict.keys():
nodes_of_interest.extend(ancestors[k])
G_sub = G.subgraph(nodes_of_interest)
return G_sub
G_sub = collate_nodes_of_interest(pH1N1s, ancestors,)
def serialize_and_write_to_disk(graph, handle):
Correctly serializes the datetime objects in a graph's edges.
Then, write the graph to disk.
# Serialize timestamp for JSON compatibility
date_handler = lambda obj: (
obj.isoformat()
if isinstance(obj, datetime)
or isinstance(obj, date)
else None
)
for n, d in graph.nodes(data=True):
graph.node[n]['collection_date'] = date_handler(graph.node[n]['collection_date'])
# Serialize the data to disk as a JSON file
data = node_link_data(graph)
s = json.dumps(data)
with open(handle, 'w+') as f:
f.write(s)
serialize_and_write_to_disk(G_sub, 'supp_data/viz/H1N1_graph.json')
h7n9s = [n for n, d in G.nodes(data=True) \
if d['subtype'] == 'H7N9' \
and d['host_species'] == 'Human' \
and d['collection_date'].year == 2013]
ancestors = get_predecessors(h7n9s, 3)
G_sub = collate_nodes_of_interest(h7n9s, ancestors,)
serialize_and_write_to_disk(G_sub, 'supp_data/viz/H7N9_graph.json')
# Visualize the data
# First, start the HTPP server
! python -m http.server 8002
# Next, load "localhost:8002/supp_data/viz/h1n1.html"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: 2009 H1N1 lineage trace
Step5: 2013 H7N9 lineage trace
|
1,964
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# In Jupyter, all commands starting with ! are mapped as SHELL commands
!head stockholm_td_adj.dat
np.genfromtxt?
st_temperatures = np.genfromtxt('stockholm_td_adj.dat',
skip_header=1)
st_temperatures.shape
st_temperatures[:10, ]
st_temperatures.dtype
## Calculate which and how many years we have in our data
years = np.unique(st_temperatures[:, 0]).astype(np.int)
years, len(years)
years.min(), years.max()
!head stockholm_td_adj.dat
mask_year = st_temperatures[:, 0] == 1984
mask_feb = st_temperatures[:, 1] == 2
mask_feb.shape
mask_year.dtype
type(mask_year)
## Calculate the mean temperature of mid-days on February in 1984
feb_noon_temps = st_temperatures[(mask_year & mask_feb), 4]
type(feb_noon_temps)
feb_noon_temps.dtype
feb_noon_temps.mean()
## ....
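# One possible continuation (a sketch): mean noon temperature (column 4) for each month,
# averaged over all years in the record.
monthly_noon_means = [st_temperatures[st_temperatures[:, 1] == m, 4].mean() for m in range(1, 13)]
monthly_noon_means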
np.save("st_temperatures.npy", st_temperatures)
T = np.load("st_temperatures.npy")
print(T.shape, T.dtype)
from numpy import matrix
a = np.arange(0, 5)
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
a
A
M = matrix(A)
v = matrix(a).T # make it a column vector
a
M * M
A @ A # @ operator equivalent to np.dot(A, A)
# Element wise multiplication in NumPy
A * A
M * v
A * a
# inner product
v.T * v
# with matrix objects, standard matrix algebra applies
v + M*v
v_incompat = matrix(list(range(1, 7))).T
M.shape, v_incompat.shape
M * v_incompat
A = np.random.rand(10000, 300, 50) # note: this may take a while
A
from scipy import io as spio
spio.savemat('numpy_to.mat', {'A': A}, oned_as='row') # savemat expects a dictionary
data_dictionary = spio.loadmat('numpy_to.mat')
list(data_dictionary.keys())
data_dictionary['A']
A_load = data_dictionary['A']
np.all(A == A_load)
type(A_load)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comma-separated values (CSV)
Step2: DIY
Step3: Numpy's native file format
Step4: See also
Step5: NumPy for Matlab Users (really?)
Step6: If we try to add, subtract or multiply objects with incompatible shapes we get an error
Step7: See also the related functions
Step8: Introducing SciPy (ecosystem)
Step9: NumPy $\mapsto$ MATLAB
Step10: MATLAB $\mapsto$ NumPy
|
1,965
|
<ASSISTANT_TASK:>
Python Code:
!wget -O - 'http://www.cs.nyu.edu/~roweis/data/nips12raw_str602.tgz' > /tmp/nips12raw_str602.tgz
import tarfile
filename = '/tmp/nips12raw_str602.tgz'
tar = tarfile.open(filename, 'r:gz')
for item in tar:
tar.extract(item, path='/tmp')
import os, re
from smart_open import smart_open
# Folder containing all NIPS papers.
data_dir = '/tmp/nipstxt/' # Set this path to the data on your machine.
# Folders containin individual NIPS papers.
yrs = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
dirs = ['nips' + yr for yr in yrs]
# Get all document texts and their corresponding IDs.
docs = []
doc_ids = []
for yr_dir in dirs:
files = os.listdir(data_dir + yr_dir) # List of filenames.
for filen in files:
# Get document ID.
(idx1, idx2) = re.search('[0-9]+', filen).span() # Matches the indexes of the start and end of the ID.
doc_ids.append(yr_dir[4:] + '_' + str(int(filen[idx1:idx2])))
# Read document text.
# Note: ignoring characters that cause encoding errors.
with smart_open(data_dir + yr_dir + '/' + filen, 'rb', encoding='utf-8') as fid:
txt = fid.read()
# Replace any whitespace (newline, tabs, etc.) by a single space.
txt = re.sub('\s', ' ', txt)
docs.append(txt)
from smart_open import smart_open
filenames = [data_dir + 'idx/a' + yr + '.txt' for yr in yrs] # Using the years defined in previous cell.
# Get all author names and their corresponding document IDs.
author2doc = dict()
i = 0
for yr in yrs:
# The files "a00.txt" and so on contain the author-document mappings.
filename = data_dir + 'idx/a' + yr + '.txt'
for line in smart_open(filename, 'rb', errors='ignore', encoding='utf-8'):
# Each line corresponds to one author.
contents = re.split(',', line)
author_name = (contents[1] + contents[0]).strip()
# Remove any whitespace to reduce redundant author names.
author_name = re.sub('\s', '', author_name)
# Get document IDs for author.
ids = [c.strip() for c in contents[2:]]
if not author2doc.get(author_name):
# This is a new author.
author2doc[author_name] = []
i += 1
# Add document IDs to author.
author2doc[author_name].extend([yr + '_' + id for id in ids])
# Use an integer ID in author2doc, instead of the IDs provided in the NIPS dataset.
# Mapping from ID of document in NIPS datast, to an integer ID.
doc_id_dict = dict(zip(doc_ids, range(len(doc_ids))))
# Replace NIPS IDs by integer IDs.
for a, a_doc_ids in author2doc.items():
for i, doc_id in enumerate(a_doc_ids):
author2doc[a][i] = doc_id_dict[doc_id]
import spacy
nlp = spacy.load('en')
%%time
processed_docs = []
for doc in nlp.pipe(docs, n_threads=4, batch_size=100):
# Process document using Spacy NLP pipeline.
ents = doc.ents # Named entities.
# Keep only words (no numbers, no punctuation).
# Lemmatize tokens, remove punctuation and remove stopwords.
doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop]
# Remove common words from a stopword list.
#doc = [token for token in doc if token not in STOPWORDS]
# Add named entities, but only if they are a compound of more than word.
doc.extend([str(entity) for entity in ents if len(entity) > 1])
processed_docs.append(doc)
docs = processed_docs
del processed_docs
# Compute bigrams.
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
for token in bigram[docs[idx]]:
if '_' in token:
# Token is a bigram, add to document.
docs[idx].append(token)
# Create a dictionary representation of the documents, and filter out frequent and rare words.
from gensim.corpora import Dictionary
dictionary = Dictionary(docs)
# Remove rare and common tokens.
# Filter out words that occur too frequently or too rarely.
max_freq = 0.5
min_wordcount = 20
dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq)
_ = dictionary[0] # This sort of "initializes" dictionary.id2token.
# Vectorize data.
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
print('Number of authors: %d' % len(author2doc))
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
from gensim.models import AuthorTopicModel
%time model = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \
author2doc=author2doc, chunksize=2000, passes=1, eval_every=0, \
iterations=1, random_state=1)
%%time
model_list = []
for i in range(5):
model = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \
author2doc=author2doc, chunksize=2000, passes=100, gamma_threshold=1e-10, \
eval_every=0, iterations=1, random_state=i)
top_topics = model.top_topics(corpus)
tc = sum([t[1] for t in top_topics])
model_list.append((model, tc))
model, tc = max(model_list, key=lambda x: x[1])
print('Topic coherence: %.3e' %tc)
# Save model.
model.save('/tmp/model.atmodel')
# Load model.
model = AuthorTopicModel.load('/tmp/model.atmodel')
model.show_topic(0)
topic_labels = ['Circuits', 'Neuroscience', 'Numerical optimization', 'Object recognition', \
'Math/general', 'Robotics', 'Character recognition', \
'Reinforcement learning', 'Speech recognition', 'Bayesian modelling']
for topic in model.show_topics(num_topics=10):
print('Label: ' + topic_labels[topic[0]])
words = ''
for word, prob in model.show_topic(topic[0]):
words += word + ' '
print('Words: ' + words)
print()
model['YannLeCun']
from pprint import pprint
def show_author(name):
print('\n%s' % name)
print('Docs:', model.author2doc[name])
print('Topics:')
pprint([(topic_labels[topic[0]], topic[1]) for topic in model[name]])
show_author('YannLeCun')
show_author('GeoffreyE.Hinton')
show_author('TerrenceJ.Sejnowski')
show_author('ChristofKoch')
from gensim.models import atmodel
doc2author = atmodel.construct_doc2author(model.corpus, model.author2doc)
# Compute the per-word bound.
# Number of words in corpus.
corpus_words = sum(cnt for document in model.corpus for _, cnt in document)
# Compute bound and divide by number of words.
perwordbound = model.bound(model.corpus, author2doc=model.author2doc, \
doc2author=model.doc2author) / corpus_words
print(perwordbound)
%time top_topics = model.top_topics(model.corpus)
%%time
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
smallest_author = 0 # Ignore authors with documents less than this.
authors = [model.author2id[a] for a in model.author2id.keys() if len(model.author2doc[a]) >= smallest_author]
_ = tsne.fit_transform(model.state.gamma[authors, :]) # Result stored in tsne.embedding_
# Tell Bokeh to display plots inside the notebook.
from bokeh.io import output_notebook
output_notebook()
from bokeh.models import HoverTool
from bokeh.plotting import figure, show, ColumnDataSource
x = tsne.embedding_[:, 0]
y = tsne.embedding_[:, 1]
author_names = [model.id2author[a] for a in authors]
# Radius of each point corresponds to the number of documents attributed to that author.
scale = 0.1
author_sizes = [len(model.author2doc[a]) for a in author_names]
radii = [size * scale for size in author_sizes]
source = ColumnDataSource(
data=dict(
x=x,
y=y,
author_names=author_names,
author_sizes=author_sizes,
radii=radii,
)
)
# Add author names and sizes to mouse-over info.
hover = HoverTool(
tooltips=[
("author", "@author_names"),
("size", "@author_sizes"),
]
)
p = figure(tools=[hover, 'crosshair,pan,wheel_zoom,box_zoom,reset,save,lasso_select'])
p.scatter('x', 'y', radius='radii', source=source, fill_alpha=0.6, line_color=None)
show(p)
from gensim.similarities import MatrixSimilarity
# Generate a similarity object for the transformed corpus.
index = MatrixSimilarity(model[list(model.id2author.values())])
# Get similarities to some author.
author_name = 'YannLeCun'
sims = index[model[author_name]]
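# A short sketch of inspecting the cosine-similarity result: rank the scores and map the
# positions back to author names (this assumes the author ids are the consecutive integers
# 0..N-1, in the same order used to build the index above).
top5 = sorted(enumerate(sims), key=lambda x: x[1], reverse=True)[:5]
[(model.id2author[i], float(s)) for i, s in top5]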
# Make a function that returns similarities based on the Hellinger distance.
from gensim import matutils
import pandas as pd
# Make a list of all the author-topic distributions.
author_vecs = [model.get_author_topics(author) for author in model.id2author.values()]
def similarity(vec1, vec2):
'''Get similarity between two vectors'''
dist = matutils.hellinger(matutils.sparse2full(vec1, model.num_topics), \
matutils.sparse2full(vec2, model.num_topics))
sim = 1.0 / (1.0 + dist)
return sim
def get_sims(vec):
'''Get similarity of vector to all authors.'''
sims = [similarity(vec, vec2) for vec2 in author_vecs]
return sims
def get_table(name, top_n=10, smallest_author=1):
'''
Get table with similarities, author names, and author sizes.
Return `top_n` authors as a dataframe.
'''
# Get similarities.
sims = get_sims(model.get_author_topics(name))
# Arrange author names, similarities, and author sizes in a list of tuples.
table = []
for elem in enumerate(sims):
author_name = model.id2author[elem[0]]
sim = elem[1]
author_size = len(model.author2doc[author_name])
if author_size >= smallest_author:
table.append((author_name, sim, author_size))
# Make dataframe and retrieve top authors.
df = pd.DataFrame(table, columns=['Author', 'Score', 'Size'])
df = df.sort_values('Score', ascending=False)[:top_n]
return df
get_table('YannLeCun')
get_table('JamesM.Bower', smallest_author=3)
%time model_ser = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \
author2doc=author2doc, random_state=1, serialized=True, \
serialization_path='/tmp/model_serialization.mm')
# Delete the file, once you're done using it.
import os
os.remove('/tmp/model_serialization.mm')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following sections we will load the data, pre-process it, train the model, and explore the results using some of the implementation's functionality. Feel free to skip the loading and pre-processing for now, if you are familiar with the process.
Step2: Construct a mapping from author names to document IDs.
Step3: Pre-processing text
Step4: In the code below, Spacy takes care of tokenization, removing non-alphabetic characters, removal of stopwords, lemmatization and named entity recognition.
Step5: Below, we use a Gensim model to add bigrams. Note that this achieves the same goal as named entity recognition, that is, finding adjacent words that have some particular significance.
Step6: Now we are ready to construct a dictionary, as our vocabulary is finalized. We then remove common words (occurring $> 50\%$ of the time), and rare words (occur $< 20$ times in total).
Step7: We produce the vectorized representation of the documents, to supply the author-topic model with, by computing the bag-of-words.
Step8: Let's inspect the dimensionality of our data.
Step9: Train and use model
Step10: If you believe your model hasn't converged, you can continue training using model.update(). If you have additional documents and/or authors call model.update(corpus, author2doc).
Step11: Choose the model with the highest topic coherence.
Step12: We save the model, to avoid having to train it again, and also show how to load it again.
Step13: Explore author-topic representation
Step14: Below, we have given each topic a label based on what each topic seems to be about intuitively.
Step15: Rather than just calling model.show_topics(num_topics=10), we format the output a bit so it is easier to get an overview.
Step16: These topics are by no means perfect. They have problems such as chained topics, intruded words, random topics, and unbalanced topics (see Mimno and co-authors 2011). They will do for the purposes of this tutorial, however.
Step17: Let's print the top topics of some authors. First, we make a function to help us do this more easily.
Step18: Below, we print some high profile researchers and inspect them. Three of these, Yann LeCun, Geoffrey E. Hinton and Christof Koch, are spot on.
Step19: Simple model evaluation methods
Step20: Now let's evaluate the per-word bound.
Step21: We can evaluate the quality of the topics by computing the topic coherence, as in the LDA class. Use this to e.g. find out which of the topics are poor quality, or as a metric for model selection.
Step22: Plotting the authors
Step23: We are now ready to make the plot.
Step24: The circles in the plot above are individual authors, and their sizes represent the number of documents attributed to the corresponding author. Hovering your mouse over the circles will tell you the name of the authors and their sizes. Large clusters of authors tend to reflect some overlap in interest.
Step25: This framework, however, uses the cosine distance, whereas we want to use the Hellinger distance. The Hellinger distance is a natural way of measuring the distance (i.e. dis-similarity) between two probability distributions. Its discrete version is defined as $H(p, q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^{K} (\sqrt{p_i} - \sqrt{q_i})^2}$.
Step26: Now we can find the most similar authors to some particular author. We use the Pandas library to print the results in a nice looking tables.
Step27: As before, we can specify the minimum author size.
Step28: Serialized corpora
|
1,966
|
<ASSISTANT_TASK:>
Python Code:
text = Als der Abend herbeikam und die Freunde in einer weitumherschauenden Laube saßen, trat eine ansehnliche Figur auf die Schwelle, welche unser Freund sogleich für den Barbier von heute früh erkannte. Auf einen tiefen, stummen Bückling des Mannes erwiderte Lenardo: Ihr kommt, wie immer, sehr gelegen und werdet nicht säumen, uns mit Eurem Talent zu erfreuen. — Ich kann Ihnen wohl, fuhr er zu Wilhelmen gewendet fort, Einiges von der Gesellschaft erzählen, deren Band zu sein ich mich rühmen darf. Niemand tritt in unsern Kreis, als wer gewisse Talente aufzuweisen hat, die zum Nutzen oder Vergnügen einer jeden Gesellschaft dienen würden. Dieser Mann ist ein derber Wundarzt, der in bedenklichen Fällen, wo Entschluß und körperliche Kraft gefordert wird, seinem Meister trefflich an der Seite zu stehen bereit ist. Was er als Bartkünstler leistet, davon können Sie ihm selbst ein Zeugniß geben. Hiedurch ist er uns eben so nöthig als willkommen. Da nun aber diese Beschäftigung gewöhnlich eine große und oft lästige Geschwätzigkeit mit sich führt, so hat er sich zu eigner Bildung eine Bedingung gefallen lassen, wie denn Jeder, der unter uns leben will, sich von einer gewissen Seite bedingen muß, wenn ihm nach anderen Seiten hin die größere Freiheit gewährt ist. Dieser also hat nun auf die Sprache Verzicht gethan, insofern etwas Gewöhnliches oder Zufälliges durch sie ausgedrückt wird; daraus aber hat sich ihm ein anderes Redetalent entwickelt, welches absichtlich, klug und erfreulich wirkt, die Gabe des Erzählens nämlich. Sein Leben ist reich an wunderlichen Erfahrungen, die er sonst zu ungelegener Zeit schwätzend zersplitterte, nun aber durch Schweigen genöthigt im stillen Sinne wiederholt und ordnet. Hiermit verbindet sich denn die Einbildungskraft und verleiht dem Geschehenen Leben und Bewegung. Mit besonderer Kunst und Geschicklichkeit weiß er wahrhafte Märchen und märchenhafte Geschichten zu erzählen, wodurch er oft zur schicklichen Stunde uns gar sehr ergötzt, wenn ihm die Zunge durch mich gelös't wird; wie ich denn gegenwärtig thue, und ihm zugleich das Lob ertheile, daß er sich in geraumer Zeit, seitdem ich ihn kenne, noch niemals wiederholt hat. Nun hoff' ich, daß er auch diesmal, unserm theuren Gast zu Lieb' und Ehren, sich besonders hervorthun werde.
Ueber das Gesicht des Rothmantels verbreitete sich eine geistreiche Heiterkeit, und er fing ungesäumt folgendermaßen zu sprechen an:
Hochverehrte Herren! da mir bekannt ist, daß Sie vorläufige Reden und Einleitungen nicht besonders lieben, so will ich ohne weiteres versichern, daß ich diesmal vorzüglich gut zu bestehen hoffe. Von mir sind zwar schon gar manche wahrhafte Geschichten zu hoher und allseitiger Zufriedenheit ausgegangen, heute aber darf ich sagen, daß ich eine zu erzählen habe, welche die bisherigen weit übertrifft, und die, wiewohl sie mir schon vor einigen Jahren begegnet ist, mich noch immer in der Erinnerung unruhig macht, ja sogar eine endliche Entwicklung hoffen läßt. Sie möchte schwerlich ihres Gleichen finden.
from textblob_de import TextBlobDE as TextBlob
from textblob_de import PatternParser
doc = TextBlob(text)
print("Number of sentences: ", len(doc.sentences))
print("Length of sentences in characters: ")
for s in doc.sentences:
print(len(s), end=" - ")
type(s)
# This also applies to our document object doc:
type(doc)
doc.words[:20]
w = doc.words[0]
type(w)
text_2 = "Johann Wolfgang Goethe wurde, glaube ich, am 28.8.1749 geboren. Es könnte auch am 20.8. sein. Ich muss zugeben: Genau weiß ich das nicht."
text_3 = "Die heutige Agenda ist kurz. 1. Die Frage nach dem Anfang. 2. Ende. Viel Spaß!"
doc = TextBlob(text_2)
list(doc.sentences)
doc = TextBlob(text_3)
list(doc.sentences)
blob = TextBlob("Das ist ein schönes Auto.", parser=PatternParser(pprint=True, lemmata=True))
blob.parse()
doc.sentences[0].words
import spacy
nlp = spacy.load('de')
doc = nlp(text_2)
for s in doc.sents:
print(s)
doc = nlp(text_3)
for s in doc.sents:
print(s)
print(spacy.__version__)
import spacy
doc = nlp(text_2)
for token in doc:
print(token.text, end="< | >")
doc = nlp(text_3)
a = [print(token.text, end="< | >") for token in doc]
doc = nlp(text_2)
print("{:<15}{:<15}{:<15}".format("TOKEN", "LEMMA", "POS-Tag"))
for token in doc:
print("{:15}{:15}{:15}".format(token.text, token.lemma_, token.pos_ ))
doc = nlp("Diese Auskünfte muss ich dir nicht geben.")
[token.lemma_ for token in doc]
from spacy_iwnlp import spaCyIWNLP
iwnlp = spaCyIWNLP(lemmatizer_path=r'\mydata\Dropbox\uni\progrs\spacy-iwnlp\IWNLP.Lemmatizer_20170501.json')
nlp.add_pipe(iwnlp)
import spacy
from spacy_iwnlp import spaCyIWNLP
nlp = spacy.load('de')
iwnlp = spaCyIWNLP(lemmatizer_path=r'\mydata\Dropbox\uni\progrs\spacy-iwnlp\IWNLP.Lemmatizer_20170501.json')
nlp.add_pipe(iwnlp)
doc = nlp('Wir mögen Fußballspiele mit ausgedehnten Verlängerungen.')
for token in doc:
print('POS: {}\tIWNLP:{}'.format(token.pos_, token._.iwnlp_lemmas))
l
doc = nlp('Wir mögen Fußballspiele mit ausgedehnten Verlängerungen.')
for token in doc:
print('POS: {}\tIWNLP:{}'.format(token.pos_, token._.iwnlp_lemmas))
from spacy import displacy
text_4 = "Am Anfang war das Wort, das aber bald durch blutige Taten ersetzt wurde."
doc = nlp(text_4)
displacy.render(doc, style='dep', jupyter=True)
text_5 = """Früher hat man über Johann Wolfang von Goethe gesprochen, weil er den 'Faust' geschrieben hat, oder über Mozart,
weil der die Zauberflöte komponiert hat. Heute dagegen redet man über Samsung, weil das neue Samsung Note4 erschienen ist,
oder über den neuen BMW. Gut, über Steve Jobs hat man noch so geredet, als wäre er ein neuer Mozart der Technologie.
In den USA weiß man kaum noch wer Shakespeare ist, und in Berlin benimmt man sich schon so, also könnte man mit
1 Mio. € einen Goethe kaufen."""
#text_5 = text_5.replace("\n", "") #new lines irritate the parser
doc = nlp(text_5)
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of Contents
Step2: Installing TextBlob
Step3: Caution
Step4: The nice thing about this is that we can iterate over this object, just as we did above
Step7: Perhaps we should first explain why it is not entirely trivial to split a text into sentences. At first one might think that a few very simple rules would do the job, but as a look at the next example shows, it is not that simple (a small illustrative sketch follows this step list)
Step8: 2. Spacy
Step9: From here on we will work only with spaCy. In its favour: it is fairly new, supports a whole range of languages, has a modern Python interface with a well-thought-out API, covers comparatively recent aspects of language technology such as word embeddings, and German is among the well-supported languages. Against it: it is developed by a private company, though this is mitigated by the fact that spaCy itself is available on GitHub under a very permissive MIT license.
Step10: Tokenization
Step12: Part-of-Speech-Tagging
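A minimal sketch (not part of the original notebook) of why the naive rule mentioned in Step7 fails: splitting at every period followed by whitespace over-segments the abbreviated date in text_2.

import re

text_2 = ("Johann Wolfgang Goethe wurde, glaube ich, am 28.8.1749 geboren. "
          "Es könnte auch am 20.8. sein. Ich muss zugeben: Genau weiß ich das nicht.")

# Naive rule: a sentence ends at any period that is followed by whitespace.
naive_sentences = re.split(r"(?<=\.)\s+", text_2)

# Prints 4 segments instead of the expected 3, because the abbreviated date
# "20.8." is wrongly treated as a sentence boundary.
for s in naive_sentences:
    print("-", s)
print(len(naive_sentences))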
|
1,967
|
<ASSISTANT_TASK:>
Python Code:
x = 1
y = 2
z = x + y
z * 3
from math import sin
sin(2)
my_result = sin(2)
my_result = sin(2)
print(my_result)
print('hello')
hello = 'Hello, world!'
print(hello)
print('The man said:', hello, 'How are you?')
name = input('What is your name?')
age = input('What is your age?')
color = input('What is your favorite color?')
pet = input('What kind of pet would you like?')
job = input('What do you want to be when you grow up?')
print('')
print('There once was a', pet, 'named', name)
print('Every day,', name, 'bought', age, 'bottles of lemonade.')
print('Until one day it turned', color, 'from drinking too much.')
print('It was rushed to the hospital by a', job)
from mne.io import read_raw_bdf
print(raw)
%matplotlib notebook
print('From now on, all graphics will send to your browser.')
from mne.viz import plot_raw
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In good mathematical tradition, I've named various things x, y and z and re-used them in various lines of the program. You are free to choose whatever names you like, but there are some rules. For example, you cannot name a result +, since that would be very confusing. Another important restriction is that names cannot have spaces. For example, the answer is not a valid name, but the_answer is (notice I used an underscore "_" character instead of a space, which is a common thing that programmers do). A short illustration of these rules follows this step list.
Step2: Your turn. Write a program that computes the cosine of 5. The function to compute the cosine is called cos and can also be found inside the math module (remember to import it first!). You can use the cell below to write your program in
Step3: Try running the above code cell. It doesn't seem to work??!! Nothing happened!
Step4: Ok, your turn. Write a program that assigns the cosine of 2 to the variable my_result. Use the print function to display the variable to check that it has changed.
Step5: In the Python programming language, literal text always needs to be surrounded by quotation marks ('). Without quotation marks, the program above would try to display the contents of a variable named hello. Try running this example and you'll see what I mean
Step6: The print function can write multiple things at the same time by giving it more than one argument. To give multiple arguments, put a comma between them, like this
Step7: As a child, I would write little programs like this
Step8: To familiarize yourself with manipulating text in a programming language, try modifying the program above to tell a story of your own.
Step9: Now, everything you've learned so far needs to come together. I'm going to leave it up to you to use the function correctly.
Step10: Visualizing the EEG data
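A small illustration (not from the original lesson; the variable names are made up) of the naming rules from Step1 and the quoting rule from Step5:

# Valid names: letters, digits and underscores; no spaces, no operator symbols.
the_answer = 42
answer_2 = the_answer + 1

# These would be syntax errors if uncommented:
# the answer = 42   # names cannot contain spaces
# + = 42            # an operator symbol cannot be used as a name

# Quotation marks distinguish literal text from variable names:
hello = 'Hello, world!'
print('hello')   # prints the literal text: hello
print(hello)     # prints the variable's contents: Hello, world!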
|
1,968
|
<ASSISTANT_TASK:>
Python Code:
from pytadbit.mapping.full_mapper import full_mapping
r_enz = 'MboI'
! mkdir -p results/iterativ/$r_enz
! mkdir -p results/iterativ/$r_enz/01_mapping
# for the first side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz),
r_enz='hindIII', frag_map=False, clean=True, nthreads=20,
windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)),
temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz))
# for the second side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz),
r_enz=r_enz, frag_map=False, clean=True, nthreads=20,
windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)),
temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz))
! mkdir -p results/fragment/$r_enz
! mkdir -p results/fragment/$r_enz/01_mapping
# for the first side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz),
r_enz=r_enz, frag_map=True, clean=True, nthreads=20,
temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz))
# for the second side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz),
r_enz=r_enz, frag_map=True, clean=True, nthreads=20,
temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both.
Step2: And for the second side of the read
Step3: Fragment-based mapping
|
1,969
|
<ASSISTANT_TASK:>
Python Code:
from pymldb import Connection
mldb = Connection()
inceptionUrl = 'file://mldb/mldb_test_data/models/inception_dec_2015.zip'
print mldb.put('/v1/functions/fetch', {
"type": 'fetcher',
"params": {}
})
print mldb.put('/v1/functions/inception', {
"type": 'tensorflow.graph',
"params": {
"modelFileUrl": 'archive+' + inceptionUrl + '#tensorflow_inception_graph.pb',
"inputs": 'fetch({url})[content] AS "DecodeJpeg/contents"',
"outputs": "softmax"
}
})
amazingGrace = "https://www.tensorflow.org/versions/r0.7/images/grace_hopper.jpg"
mldb.query("SELECT inception({url: '%s'}) as *" % amazingGrace)
result = mldb.get('/v1/functions/inception/application', input={"url": amazingGrace})
print result.url + '\n\n' + repr(result) + '\n'
import numpy as np
print "Shape:"
print np.array(result.json()["output"]["softmax"]["val"]).shape
print mldb.put("/v1/procedures/imagenet_labels_importer", {
"type": "import.text",
"params": {
"dataFileUrl": 'archive+' + inceptionUrl + '#imagenet_comp_graph_label_strings.txt',
"outputDataset": {"id": "imagenet_labels", "type": "sparse.mutable"},
"headers": ["label"],
"named": "lineNumber() -1",
"offset": 1,
"runOnCreation": True
}
})
mldb.query("SELECT * FROM imagenet_labels LIMIT 5")
mldb.query("""
SELECT scores.pred as score
NAMED imagenet_labels.label
FROM transpose(
(
SELECT flatten(inception({url: '%s'})[softmax]) as *
NAMED 'pred'
)
) AS scores
LEFT JOIN imagenet_labels ON
imagenet_labels.rowName() = scores.rowName()
ORDER BY score DESC
LIMIT 10
""" % amazingGrace)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading a TensorFlow graph
Step2: Scoring an image
Step3: This is great! With only 3 REST calls we were able to run a deep neural network on an arbitrary image off the internet.
Step4: Interpreting the prediction
Step5: The contents of the dataset look like this
Step7: The labels line up with the softmax layer that we extract from the network. By joining the output of the network with the imagenet_labels dataset, we can essentially label the output of the network.
|
1,970
|
<ASSISTANT_TASK:>
Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# Instala o TensorFlow
#!pip install -q tensorflow==2.5
# Imports
import sklearn
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from pathlib import Path
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers.experimental.preprocessing import RandomFlip
from tensorflow.keras.layers.experimental.preprocessing import RandomRotation
from tensorflow.keras.layers.experimental.preprocessing import RandomZoom
from tensorflow.keras.applications import EfficientNetB3
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Precision
from tensorflow.keras.metrics import Recall
# Seed para reprodutibilidade
tf.random.set_seed(4)
# Diretório atual
diretorio_atual = Path.cwd()
print(diretorio_atual)
# Caminho para os dados de treino
caminho_dados_treino = Path("fruits-360/Training")
# Caminho para os dados de teste
caminho_dados_teste = Path("fruits-360/Test")
# Listando o conteúdo da pasta
imagens_treino = list(caminho_dados_treino.glob("*/*"))
# Visualiza uma amostra da lista
imagens_treino[925:936]
# Expressão lambda que extrai apenas o valor com o caminho de cada imagem
imagens_treino = list(map(lambda x: str(x), imagens_treino))
# Visualiza uma amostra da lista
imagens_treino[925:936]
# Total de imagens de treino
len(imagens_treino)
# Função que obtém o label de cada imagem
def extrai_label(caminho_imagem):
return caminho_imagem.split("/")[-2]
# Aplica a função
imagens_treino_labels = list(map(lambda x: extrai_label(x), imagens_treino))
# Visualiza uma amostra
imagens_treino_labels[840:846]
# Cria o objeto
encoder = LabelEncoder()
# Aplica o fit_transform
imagens_treino_labels = encoder.fit_transform(imagens_treino_labels)
# Visualiza uma amostra
imagens_treino_labels[840:846]
# Aplicamos One-Hot-Encoding nos labels
imagens_treino_labels = tf.keras.utils.to_categorical(imagens_treino_labels)
# Visualiza uma amostra
imagens_treino_labels[840:846]
# Dividimos os dados de treino em duas amostras, treino e validação
X_treino, X_valid, y_treino, y_valid = train_test_split(imagens_treino, imagens_treino_labels)
X_treino[15:18]
y_treino[15:18]
# Redimensionamento de todas as imagens para 224 x 224
img_size = 224
resize = tf.keras.Sequential([tf.keras.layers.experimental.preprocessing.Resizing(img_size, img_size)])
# Cria o objeto para dataset augmentation
data_augmentation = tf.keras.Sequential([RandomFlip("horizontal"),
RandomRotation(0.2),
RandomZoom(height_factor = (-0.3,-0.2)) ])
# Hiperparâmnetros
batch_size = 32
autotune = tf.data.experimental.AUTOTUNE
# Função para carregar e transformar as imagens
def carrega_transforma(image, label):
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels = 3)
return image, label
# Função para preparar os dados noo formato do TensorFlow
def prepara_dataset(path, labels, train = True):
# Prepara os dados
image_paths = tf.convert_to_tensor(path)
labels = tf.convert_to_tensor(labels)
image_dataset = tf.data.Dataset.from_tensor_slices(image_paths)
label_dataset = tf.data.Dataset.from_tensor_slices(labels)
dataset = tf.data.Dataset.zip((image_dataset, label_dataset))
dataset = dataset.map(lambda image, label: carrega_transforma(image, label))
dataset = dataset.map(lambda image, label: (resize(image), label), num_parallel_calls = autotune)
dataset = dataset.shuffle(1000)
dataset = dataset.batch(batch_size)
# Se train = True aplica dataset augmentation
if train:
dataset = dataset.map(lambda image, label: (data_augmentation(image), label), num_parallel_calls = autotune)
# Se train = False repete sobre o dataset e retorna
dataset = dataset.repeat()
return dataset
# Cria o dataset de treino
dataset_treino = prepara_dataset(X_treino, y_treino)
# Shape
imagem, label = next(iter(dataset_treino))
print(imagem.shape)
print(label.shape)
# Vamos visualizar uma imagem e um label
print(encoder.inverse_transform(np.argmax(label, axis = 1))[0])
plt.imshow((imagem[0].numpy()/255).reshape(224,224,3))
# Cria o dataset de validação
dataset_valid = prepara_dataset(X_valid, y_valid, train = False)
# Shape
imagem, label = next(iter(dataset_valid))
print(imagem.shape)
print(label.shape)
# Carregando um modelo pré-treinado
modelo_pre = EfficientNetB3(input_shape = (224,224,3), include_top = False)
# Adicionando nossas próprias camadas ao modelo_pre
modelo = tf.keras.Sequential([modelo_pre,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(131, activation = 'softmax')])
# Sumário do modelo
modelo.summary()
# Hiperparâmetros
lr = 0.001
beta1 = 0.9
beta2 = 0.999
ep = 1e-07
# Compilação do modelo
modelo.compile(optimizer = Adam(learning_rate = lr,
beta_1 = beta1,
beta_2 = beta2,
epsilon = ep),
loss = 'categorical_crossentropy',
metrics = ['accuracy', Precision(name = 'precision'), Recall(name = 'recall')])
%%time
history = modelo.fit(dataset_treino,
steps_per_epoch = len(X_treino)//batch_size,
epochs = 1,
validation_data = dataset_valid,
validation_steps = len(y_treino)//batch_size)
# Não precisamos mais do modelo_pre
modelo.layers[0].trainable = False
# Checkpoint
checkpoint = tf.keras.callbacks.ModelCheckpoint("modelo/melhor_modelo.h5",
verbose = 1,
                                                 save_best_only = True,
save_weights_only = True)
# Early stop
early_stop = tf.keras.callbacks.EarlyStopping(patience = 4)
# Sumário
modelo.summary()
%%time
history = modelo.fit(dataset_treino,
steps_per_epoch = len(X_treino)//batch_size,
epochs = 6,
validation_data = dataset_valid,
validation_steps = len(y_treino)//batch_size,
callbacks = [checkpoint, early_stop])
# Para carregar os pesos, precisamos descongelar as camadas
modelo.layers[0].trainable = True
# Carrega os pesos do ponto de verificação e reavalie
modelo.load_weights("modelo/melhor_modelo.h5")
# Carregando e preparando os dados de teste
camninho_imagens_teste = list(caminho_dados_teste.glob("*/*"))
imagens_teste = list(map(lambda x: str(x), camninho_imagens_teste))
imagens_teste_labels = list(map(lambda x: extrai_label(x), imagens_teste))
imagens_teste_labels = encoder.fit_transform(imagens_teste_labels)
imagens_teste_labels = tf.keras.utils.to_categorical(imagens_teste_labels)
test_image_paths = tf.convert_to_tensor(imagens_teste)
test_image_labels = tf.convert_to_tensor(imagens_teste_labels)
# Função para decode das imagens
def decode_imagens(image, label):
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels = 3)
image = tf.image.resize(image, [224,224], method = "bilinear")
return image, label
# Cria o dataset de teste
dataset_teste = (tf.data.Dataset
.from_tensor_slices((imagens_teste, imagens_teste_labels))
.map(decode_imagens)
.batch(batch_size))
# Shape
imagem, label = next(iter(dataset_teste))
print(imagem.shape)
print(label.shape)
# Visualiza uma imagem de teste
print(encoder.inverse_transform(np.argmax(label, axis = 1))[0])
plt.imshow((imagem[0].numpy()/255).reshape(224,224,3))
# Avalia o modelo
loss, acc, prec, rec = modelo.evaluate(dataset_teste)
print("Acurácia: ", acc)
print("Precision: ", prec)
print("Recall: ", rec)
# Função para carregar uma nova imagem
def carrega_nova_imagem(image_path):
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels = 3)
image = tf.image.resize(image, [224,224], method = "bilinear")
plt.imshow(image.numpy()/255)
image = tf.expand_dims(image, 0)
return image
# Função para fazer previsões
def faz_previsao(image_path, model, enc):
image = carrega_nova_imagem(image_path)
prediction = model.predict(image)
pred = np.argmax(prediction, axis = 1)
return enc.inverse_transform(pred)[0]
# Previsão
faz_previsao("imagens/imagem1.jpg", modelo, encoder)
# Previsão
faz_previsao("imagens/imagem2.jpg", modelo, encoder)
# Previsão
faz_previsao("imagens/imagem3.jpg", modelo, encoder)
# Previsão
faz_previsao("imagens/imagem4.jpg", modelo, encoder)
# Previsão
faz_previsao("imagens/imagem5.jpg", modelo, encoder)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the Data (Images)
Step2: Data Preprocessing
Step3: Label encoding (converting strings to numeric values)
Step4: Dataset Augmentation
Step5: Preparing the Data
Step6: Building the Model
Step7: Let's train the model for just one epoch and check the metrics.
Step8: Let's train the model for 6 more epochs to improve performance and apply some techniques to avoid overfitting.
Step9: Model Evaluation
Step10: Load the test data.
Step11: Predictions with the Trained Model
|
1,971
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pyfolio as pf
stock_rets = pf.utils.get_symbol_rets('FB')
out_of_sample = stock_rets.index[-40]
pf.create_bayesian_tear_sheet(stock_rets, live_start_date=out_of_sample)
help(pf.bayesian.run_model)
# Run model that assumes returns to be T-distributed
trace = pf.bayesian.run_model('t', stock_rets)
# Check what frequency of samples from the sharpe posterior are above 0.
print('Probability of Sharpe ratio > 0 = {:3}%'.format((trace['sharpe'] > 0).mean() * 100))
import pymc3 as pm
pm.traceplot(trace);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch the daily returns for a stock
Step2: Create Bayesian tear sheet
Step3: Let's go through these row by row
Step4: For example, to run a model that assumes returns to be normally distributed, you can call run_model with the corresponding model name (a hedged sketch follows this step list)
Step5: The returned trace object can be queried directly. For example, we can ask for the probability that the Sharpe ratio is larger than 0 by checking what percentage of posterior samples of the Sharpe ratio are > 0
Step6: But we can also interact with it like with any other pymc3 trace
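A hedged sketch of the call referenced in Step4. It assumes that pf.bayesian.run_model accepts 'normal' as a model name, mirroring the 't' call shown in the code above; check the pyfolio documentation for the exact set of supported model names.

# Run a model that assumes returns to be normally distributed ('normal' is an assumed model name).
trace_normal = pf.bayesian.run_model('normal', stock_rets)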
|
1,972
|
<ASSISTANT_TASK:>
Python Code:
# Import external libraries
import matplotlib.pyplot as plt
# Settings
%matplotlib inline
pvarray_parameters = {
'n_pvrows': 4, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'surface_tilt': 20., # tilt of the pv rows
'surface_azimuth': 90., # azimuth of the pv rows front surface
'solar_zenith': 40., # solar zenith angle
'solar_azimuth': 150., # solar azimuth angle
'gcr': 0.5, # ground coverage ratio
}
from pvfactors.geometry import OrderedPVArray
pvarray = OrderedPVArray.fit_from_dict_of_scalars(pvarray_parameters)
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray.plot_at_idx(0, ax)
plt.show()
# New configuration with direct shading
pvarray_parameters.update({'surface_tilt': 80., 'solar_zenith': 75., 'solar_azimuth': 90.})
pvarray_parameters
# Create new PV array
pvarray_w_direct_shading = OrderedPVArray.fit_from_dict_of_scalars(pvarray_parameters)
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray_w_direct_shading.plot_at_idx(0, ax)
plt.show()
# Shaded length on first pv row (leftmost)
l = pvarray_w_direct_shading.ts_pvrows[0].front.shaded_length
print("Shaded length on front surface of leftmost PV row: %.2f m" % l)
# Shaded length on last pv row (rightmost)
l = pvarray_w_direct_shading.ts_pvrows[-1].front.shaded_length
print("Shaded length on front surface of rightmost PV row: %.2f m" %l)
front_illum_ts_surface = pvarray_w_direct_shading.ts_pvrows[0].front.list_segments[0].illum.list_ts_surfaces[0]
coords = front_illum_ts_surface.coords
print("Coords: {}".format(coords))
b1 = coords.b1
b2 = coords.b2
print("b1 coords: {}".format(b1))
print("x coords of b1: {}".format(b1.x))
print("y coords of b1: {}".format(b1.y))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare PV array parameters
Step2: Create a PV array and its shadows
Step3: Plot the PV array.
Step4: As we can see in the plot above
Step5: We can now see on the plot above that some inter-row shading is happening in the PV array.
Step6: As we can see, the rightmost PV row is not shaded at all.
Step7: These are the timeseries line coordinates of the surface, and it is made out of two timeseries point coordinates, b1 and b2 ("b" for boundary).
Step8: Each timeseries point is also made of x and y timeseries coordinates, which are just numpy arrays.
|
1,973
|
<ASSISTANT_TASK:>
Python Code:
import pathlib
import os
from typing import Dict, List, Mapping, Optional, Sequence, Tuple, Union
import uuid
import zlib
from IPython.display import HTML
import matplotlib.animation as animation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_graphics.image.transformer as tfg_transformer
from google.protobuf import text_format
from waymo_open_dataset.protos import occupancy_flow_metrics_pb2
from waymo_open_dataset.protos import occupancy_flow_submission_pb2
from waymo_open_dataset.protos import scenario_pb2
from waymo_open_dataset.utils import occupancy_flow_data
from waymo_open_dataset.utils import occupancy_flow_grids
from waymo_open_dataset.utils import occupancy_flow_metrics
from waymo_open_dataset.utils import occupancy_flow_renderer
from waymo_open_dataset.utils import occupancy_flow_vis
# PLEASE EDIT.
# A tfrecord containing tf.Example protos as downloaded from the Waymo Open
# Dataset (motion) webpage.
# Replace this path with your own tfrecords.
DATASET_FOLDER = '/path/to/waymo_open_dataset_motion_v_1_1_0/uncompressed'
# TFRecord dataset.
TRAIN_FILES = f'{DATASET_FOLDER}/tf_example/training/training_tfexample.tfrecord*'
VAL_FILES = f'{DATASET_FOLDER}/tf_example/validation/validation_tfexample.tfrecord*'
TEST_FILES = f'{DATASET_FOLDER}/tf_example/testing/testing_tfexample.tfrecord*'
# Text files containing validation and test scenario IDs for this challenge.
VAL_SCENARIO_IDS_FILE = f'{DATASET_FOLDER}/occupancy_flow_challenge/validation_scenario_ids.txt'
TEST_SCENARIO_IDS_FILE = f'{DATASET_FOLDER}/occupancy_flow_challenge/testing_scenario_ids.txt'
filenames = tf.io.matching_files(TRAIN_FILES)
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.repeat()
dataset = dataset.map(occupancy_flow_data.parse_tf_example)
dataset = dataset.batch(16)
it = iter(dataset)
inputs = next(it)
def create_figure_and_axes(size_pixels):
Initializes a unique figure and axes for plotting.
fig, ax = plt.subplots(1, 1, num=uuid.uuid4())
# Sets output image to pixel resolution.
dpi = 100
size_inches = size_pixels / dpi
fig.set_size_inches([size_inches, size_inches])
fig.set_dpi(dpi)
fig.set_facecolor('white')
ax.set_facecolor('white')
ax.xaxis.label.set_color('black')
ax.tick_params(axis='x', colors='black')
ax.yaxis.label.set_color('black')
ax.tick_params(axis='y', colors='black')
fig.set_tight_layout(True)
ax.grid(False)
return fig, ax
def fig_canvas_image(fig):
Returns a [H, W, 3] uint8 np.array image from fig.canvas.tostring_rgb().
# Just enough margin in the figure to display xticks and yticks.
fig.subplots_adjust(
left=0.08, bottom=0.08, right=0.98, top=0.98, wspace=0.0, hspace=0.0)
fig.canvas.draw()
data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
return data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
def get_colormap(num_agents):
Compute a color map array of shape [num_agents, 4].
colors = plt.cm.get_cmap('jet', num_agents)
colors = colors(range(num_agents))
np.random.shuffle(colors)
return colors
def get_viewport(all_states, all_states_mask):
Gets the region containing the data.
Args:
all_states: states of agents as an array of shape [num_agents, num_steps,
2].
all_states_mask: binary mask of shape [num_agents, num_steps] for
`all_states`.
Returns:
center_y: float. y coordinate for center of data.
center_x: float. x coordinate for center of data.
width: float. Width of data.
valid_states = all_states[all_states_mask]
all_y = valid_states[..., 1]
all_x = valid_states[..., 0]
center_y = (np.max(all_y) + np.min(all_y)) / 2
center_x = (np.max(all_x) + np.min(all_x)) / 2
range_y = np.ptp(all_y)
range_x = np.ptp(all_x)
width = max(range_y, range_x)
return center_y, center_x, width
def visualize_one_step(
states,
mask,
roadgraph,
title,
center_y,
center_x,
width,
color_map,
size_pixels=1000,
):
Generate visualization for a single step.
# Create figure and axes.
fig, ax = create_figure_and_axes(size_pixels=size_pixels)
# Plot roadgraph.
rg_pts = roadgraph[:, :2].T
ax.plot(rg_pts[0, :], rg_pts[1, :], 'k.', alpha=1, ms=2)
masked_x = states[:, 0][mask]
masked_y = states[:, 1][mask]
colors = color_map[mask]
# Plot agent current position.
ax.scatter(
masked_x,
masked_y,
marker='o',
linewidths=3,
color=colors,
)
# Title.
ax.set_title(title)
# Set axes. Should be at least 10m on a side.
size = max(10, width * 1.0)
ax.axis([
-size / 2 + center_x, size / 2 + center_x, -size / 2 + center_y,
size / 2 + center_y
])
ax.set_aspect('equal')
image = fig_canvas_image(fig)
plt.close(fig)
return image
def visualize_all_agents_smooth(
decoded_example,
size_pixels=1000,
):
Visualizes all agent predicted trajectories in a serie of images.
Args:
decoded_example: Dictionary containing agent info about all modeled agents.
size_pixels: The size in pixels of the output image.
Returns:
T of [H, W, 3] uint8 np.arrays of the drawn matplotlib's figure canvas.
# [num_agents, num_past_steps, 2] float32.
past_states = tf.stack(
[decoded_example['state/past/x'], decoded_example['state/past/y']],
-1).numpy()
past_states_mask = decoded_example['state/past/valid'].numpy() > 0.0
# [num_agents, 1, 2] float32.
current_states = tf.stack(
[decoded_example['state/current/x'], decoded_example['state/current/y']],
-1).numpy()
current_states_mask = decoded_example['state/current/valid'].numpy() > 0.0
# [num_agents, num_future_steps, 2] float32.
future_states = tf.stack(
[decoded_example['state/future/x'], decoded_example['state/future/y']],
-1).numpy()
future_states_mask = decoded_example['state/future/valid'].numpy() > 0.0
# [num_points, 3] float32.
roadgraph_xyz = decoded_example['roadgraph_samples/xyz'].numpy()
num_agents, num_past_steps, _ = past_states.shape
num_future_steps = future_states.shape[1]
color_map = get_colormap(num_agents)
# [num_agents, num_past_steps + 1 + num_future_steps, depth] float32.
all_states = np.concatenate([past_states, current_states, future_states], 1)
# [num_agents, num_past_steps + 1 + num_future_steps] float32.
all_states_mask = np.concatenate(
[past_states_mask, current_states_mask, future_states_mask], 1)
center_y, center_x, width = get_viewport(all_states, all_states_mask)
images = []
# Generate images from past time steps.
for i, (s, m) in enumerate(
zip(
np.split(past_states, num_past_steps, 1),
np.split(past_states_mask, num_past_steps, 1))):
im = visualize_one_step(s[:, 0], m[:, 0], roadgraph_xyz,
'past: %d' % (num_past_steps - i), center_y,
center_x, width, color_map, size_pixels)
images.append(im)
# Generate one image for the current time step.
s = current_states
m = current_states_mask
im = visualize_one_step(s[:, 0], m[:, 0], roadgraph_xyz, 'current', center_y,
center_x, width, color_map, size_pixels)
images.append(im)
# Generate images from future time steps.
for i, (s, m) in enumerate(
zip(
np.split(future_states, num_future_steps, 1),
np.split(future_states_mask, num_future_steps, 1))):
im = visualize_one_step(s[:, 0], m[:, 0], roadgraph_xyz,
'future: %d' % (i + 1), center_y, center_x, width,
color_map, size_pixels)
images.append(im)
return images
inputs_no_batch = {k: v[0] for k, v in inputs.items()}
images = visualize_all_agents_smooth(inputs_no_batch)
def create_animation(images, interval=100):
Creates a Matplotlib animation of the given images.
Args:
images: A list of numpy arrays representing the images.
interval: Delay between frames in milliseconds.
Returns:
A matplotlib.animation.Animation.
Usage:
anim = create_animation(images)
anim.save('/tmp/animation.avi')
HTML(anim.to_html5_video())
plt.ioff()
fig, ax = plt.subplots()
dpi = 100
size_inches = 1000 / dpi
fig.set_size_inches([size_inches, size_inches])
plt.ion()
def animate_func(i):
ax.imshow(images[i])
ax.set_xticks([])
ax.set_yticks([])
ax.grid('off')
anim = animation.FuncAnimation(
fig, animate_func, frames=len(images), interval=interval)
plt.close(fig)
return anim
anim = create_animation(images[::5])
HTML(anim.to_html5_video())
config = occupancy_flow_metrics_pb2.OccupancyFlowTaskConfig()
config_text = """
num_past_steps: 10
num_future_steps: 80
num_waypoints: 8
cumulative_waypoints: false
normalize_sdc_yaw: true
grid_height_cells: 256
grid_width_cells: 256
sdc_y_in_grid: 192
sdc_x_in_grid: 128
pixels_per_meter: 3.2
agent_points_per_side_length: 48
agent_points_per_side_width: 16
"""
text_format.Parse(config_text, config)
config
inputs = occupancy_flow_data.add_sdc_fields(inputs)
timestep_grids = occupancy_flow_grids.create_ground_truth_timestep_grids(
inputs=inputs, config=config)
print(timestep_grids.vehicles.future_observed_occupancy.shape)
true_waypoints = occupancy_flow_grids.create_ground_truth_waypoint_grids(
timestep_grids=timestep_grids, config=config)
print(true_waypoints.vehicles.observed_occupancy[0].shape)
print(true_waypoints.vehicles.occluded_occupancy[0].shape)
print(true_waypoints.vehicles.flow[0].shape)
vis_grids = occupancy_flow_grids.create_ground_truth_vis_grids(
inputs=inputs, timestep_grids=timestep_grids, config=config)
print(vis_grids.roadgraph.shape)
print(vis_grids.agent_trails.shape)
# Visualize waypoint 4 out of 8.
k = 3
observed_occupancy_grids = true_waypoints.get_observed_occupancy_at_waypoint(k)
observed_occupancy_rgb = occupancy_flow_vis.occupancy_rgb_image(
agent_grids=observed_occupancy_grids,
roadgraph_image=vis_grids.roadgraph,
gamma=1.6,
)
plt.imshow(observed_occupancy_rgb[0])
occluded_occupancy_grids = true_waypoints.get_occluded_occupancy_at_waypoint(k)
occluded_occupancy_rgb = occupancy_flow_vis.occupancy_rgb_image(
agent_grids=occluded_occupancy_grids,
roadgraph_image=vis_grids.roadgraph,
gamma=1.6,
)
plt.imshow(occluded_occupancy_rgb[0])
flow_rgb = occupancy_flow_vis.flow_rgb_image(
flow=true_waypoints.vehicles.flow[k],
roadgraph_image=vis_grids.roadgraph,
agent_trails=vis_grids.agent_trails,
)
plt.imshow(flow_rgb[0])
images = []
for k in range(config.num_waypoints):
observed_occupancy_grids = true_waypoints.get_observed_occupancy_at_waypoint(
k)
observed_occupancy_rgb = occupancy_flow_vis.occupancy_rgb_image(
agent_grids=observed_occupancy_grids,
roadgraph_image=vis_grids.roadgraph,
gamma=1.6,
)
images.append(observed_occupancy_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
images = []
for k in range(config.num_waypoints):
occluded_occupancy_grids = true_waypoints.get_occluded_occupancy_at_waypoint(
k)
occluded_occupancy_rgb = occupancy_flow_vis.occupancy_rgb_image(
agent_grids=occluded_occupancy_grids,
roadgraph_image=vis_grids.roadgraph,
gamma=1.6,
)
images.append(occluded_occupancy_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
images = []
for k in range(config.num_waypoints):
flow_rgb = occupancy_flow_vis.flow_rgb_image(
flow=true_waypoints.vehicles.flow[k],
roadgraph_image=vis_grids.roadgraph,
agent_trails=vis_grids.agent_trails,
)
images.append(flow_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
# Number of channels output by the model.
# Occupancy of currently-observed vehicles: 1 channel.
# Occupancy of currently-occluded vehicles: 1 channel.
# Flow of all vehicles: 2 channels.
NUM_PRED_CHANNELS = 4
def _make_model_inputs(
timestep_grids: occupancy_flow_grids.TimestepGrids,
vis_grids: occupancy_flow_grids.VisGrids,
) -> tf.Tensor:
Concatenates all occupancy grids over past, current to a single tensor.
model_inputs = tf.concat(
[
vis_grids.roadgraph,
timestep_grids.vehicles.past_occupancy,
timestep_grids.vehicles.current_occupancy,
tf.clip_by_value(
timestep_grids.pedestrians.past_occupancy +
timestep_grids.cyclists.past_occupancy, 0, 1),
tf.clip_by_value(
timestep_grids.pedestrians.current_occupancy +
timestep_grids.cyclists.current_occupancy, 0, 1),
],
axis=-1,
)
return model_inputs
def _make_model(
model_inputs: tf.Tensor,
config: occupancy_flow_metrics_pb2.OccupancyFlowTaskConfig,
) -> tf.keras.Model:
Simple convolutional model.
inputs = tf.keras.Input(tensor=model_inputs)
encoder = tf.keras.applications.ResNet50V2(
include_top=False, weights=None, input_tensor=inputs)
num_output_channels = NUM_PRED_CHANNELS * config.num_waypoints
decoder_channels = [32, 64, 128, 256, 512]
conv2d_kwargs = {
'kernel_size': 3,
'strides': 1,
'padding': 'same',
}
x = encoder(inputs)
for i in [4, 3, 2, 1, 0]:
x = tf.keras.layers.Conv2D(
filters=decoder_channels[i],
activation='relu',
name=f'upconv_{i}_0',
**conv2d_kwargs)(
x)
x = tf.keras.layers.UpSampling2D(name=f'upsample_{i}')(x)
x = tf.keras.layers.Conv2D(
filters=decoder_channels[i],
activation='relu',
name=f'upconv_{i}_1',
**conv2d_kwargs)(
x)
outputs = tf.keras.layers.Conv2D(
filters=num_output_channels,
activation=None,
name=f'outconv',
**conv2d_kwargs)(
x)
return tf.keras.Model(
inputs=inputs, outputs=outputs, name='occupancy_flow_model')
model_inputs = _make_model_inputs(timestep_grids, vis_grids)
model = _make_model(model_inputs=model_inputs, config=config)
model.summary()
{v.name: v.shape for v in model.variables}
model_outputs = model(model_inputs)
model_outputs.shape
def _get_pred_waypoint_logits(
model_outputs: tf.Tensor) -> occupancy_flow_grids.WaypointGrids:
Slices model predictions into occupancy and flow grids.
pred_waypoint_logits = occupancy_flow_grids.WaypointGrids()
# Slice channels into output predictions.
for k in range(config.num_waypoints):
index = k * NUM_PRED_CHANNELS
waypoint_channels = model_outputs[:, :, :, index:index + NUM_PRED_CHANNELS]
pred_observed_occupancy = waypoint_channels[:, :, :, :1]
pred_occluded_occupancy = waypoint_channels[:, :, :, 1:2]
pred_flow = waypoint_channels[:, :, :, 2:]
pred_waypoint_logits.vehicles.observed_occupancy.append(
pred_observed_occupancy)
pred_waypoint_logits.vehicles.occluded_occupancy.append(
pred_occluded_occupancy)
pred_waypoint_logits.vehicles.flow.append(pred_flow)
return pred_waypoint_logits
pred_waypoint_logits = _get_pred_waypoint_logits(model_outputs)
vehicle_grids = pred_waypoint_logits.vehicles
print(len(vehicle_grids.observed_occupancy), 'observed occupancy grids.')
print(len(vehicle_grids.occluded_occupancy), 'occluded occupancy grids.')
print(len(vehicle_grids.flow), 'flow fields.')
def _occupancy_flow_loss(
config: occupancy_flow_metrics_pb2.OccupancyFlowTaskConfig,
true_waypoints: occupancy_flow_grids.WaypointGrids,
pred_waypoint_logits: occupancy_flow_grids.WaypointGrids,
) -> Dict[str, tf.Tensor]:
Loss function.
Args:
config: OccupancyFlowTaskConfig proto message.
true_waypoints: Ground truth labels.
pred_waypoint_logits: Predicted occupancy logits and flows.
Returns:
A dict containing different loss tensors:
observed_xe: Observed occupancy cross-entropy loss.
occluded_xe: Occluded occupancy cross-entropy loss.
flow: Flow loss.
loss_dict = {}
# Store loss tensors for each waypoint and average at the end.
loss_dict['observed_xe'] = []
loss_dict['occluded_xe'] = []
loss_dict['flow'] = []
# Iterate over waypoints.
for k in range(config.num_waypoints):
# Occupancy cross-entropy loss.
pred_observed_occupancy_logit = (
pred_waypoint_logits.vehicles.observed_occupancy[k])
pred_occluded_occupancy_logit = (
pred_waypoint_logits.vehicles.occluded_occupancy[k])
true_observed_occupancy = true_waypoints.vehicles.observed_occupancy[k]
true_occluded_occupancy = true_waypoints.vehicles.occluded_occupancy[k]
# Accumulate over waypoints.
loss_dict['observed_xe'].append(
_sigmoid_xe_loss(
true_occupancy=true_observed_occupancy,
pred_occupancy=pred_observed_occupancy_logit))
loss_dict['occluded_xe'].append(
_sigmoid_xe_loss(
true_occupancy=true_occluded_occupancy,
pred_occupancy=pred_occluded_occupancy_logit))
# Flow loss.
pred_flow = pred_waypoint_logits.vehicles.flow[k]
true_flow = true_waypoints.vehicles.flow[k]
loss_dict['flow'].append(_flow_loss(pred_flow, true_flow))
# Mean over waypoints.
loss_dict['observed_xe'] = (
tf.math.add_n(loss_dict['observed_xe']) / config.num_waypoints)
loss_dict['occluded_xe'] = (
tf.math.add_n(loss_dict['occluded_xe']) / config.num_waypoints)
loss_dict['flow'] = tf.math.add_n(loss_dict['flow']) / config.num_waypoints
return loss_dict
def _sigmoid_xe_loss(
true_occupancy: tf.Tensor,
pred_occupancy: tf.Tensor,
loss_weight: float = 1000,
) -> tf.Tensor:
Computes sigmoid cross-entropy loss over all grid cells.
# Since the mean over per-pixel cross-entropy values can get very small,
# we compute the sum and multiply it by the loss weight before computing
# the mean.
xe_sum = tf.reduce_sum(
tf.nn.sigmoid_cross_entropy_with_logits(
labels=_batch_flatten(true_occupancy),
logits=_batch_flatten(pred_occupancy),
))
# Return mean.
return loss_weight * xe_sum / tf.size(pred_occupancy, out_type=tf.float32)
def _flow_loss(
true_flow: tf.Tensor,
pred_flow: tf.Tensor,
loss_weight: float = 1,
) -> tf.Tensor:
Computes L1 flow loss.
diff = true_flow - pred_flow
# Ignore predictions in areas where ground-truth flow is zero.
# [batch_size, height, width, 1], [batch_size, height, width, 1]
true_flow_dx, true_flow_dy = tf.split(true_flow, 2, axis=-1)
# [batch_size, height, width, 1]
flow_exists = tf.logical_or(
tf.not_equal(true_flow_dx, 0.0),
tf.not_equal(true_flow_dy, 0.0),
)
flow_exists = tf.cast(flow_exists, tf.float32)
diff = diff * flow_exists
diff_norm = tf.linalg.norm(diff, ord=1, axis=-1) # L1 norm.
mean_diff = tf.math.divide_no_nan(
tf.reduce_sum(diff_norm),
tf.reduce_sum(flow_exists) / 2) # / 2 since (dx, dy) is counted twice.
return loss_weight * mean_diff
def _batch_flatten(input_tensor: tf.Tensor) -> tf.Tensor:
Flatten tensor to a shape [batch_size, -1].
image_shape = tf.shape(input_tensor)
return tf.reshape(input_tensor, tf.concat([image_shape[0:1], [-1]], 0))
def _run_model_on_inputs(
inputs: Dict[str, tf.Tensor],
training: bool,
) -> occupancy_flow_grids.WaypointGrids:
Preprocesses inputs and runs model on one batch.
inputs = occupancy_flow_data.add_sdc_fields(inputs)
timestep_grids = occupancy_flow_grids.create_ground_truth_timestep_grids(
inputs, config)
true_waypoints = occupancy_flow_grids.create_ground_truth_waypoint_grids(
timestep_grids, config)
vis_grids = occupancy_flow_grids.create_ground_truth_vis_grids(
inputs, timestep_grids, config)
# [batch_size, grid_height_cells, grid_width_cells, 23]
model_inputs = _make_model_inputs(timestep_grids, vis_grids)
# [batch_size, grid_height_cells, grid_width_cells, 32]
model_outputs = model(model_inputs, training=training)
pred_waypoint_logits = _get_pred_waypoint_logits(model_outputs)
return pred_waypoint_logits
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
def train_step(inputs: Dict[str, tf.Tensor]) -> tf.Tensor:
with tf.GradientTape() as tape:
# Run model.
pred_waypoint_logits = _run_model_on_inputs(inputs=inputs, training=True)
# Compute loss.
loss_dict = _occupancy_flow_loss(
config=config,
true_waypoints=true_waypoints,
pred_waypoint_logits=pred_waypoint_logits)
total_loss = tf.math.add_n(loss_dict.values())
grads = tape.gradient(total_loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
return total_loss
num_steps_to_train = 11
step = 0
while step < num_steps_to_train:
# Iterate over batches of the dataset.
inputs = next(it)
loss_value = train_step(inputs)
# Log every 10 batches.
if step % 10 == 0:
float_loss = float(loss_value)
print(f'Training loss after step {step}: {float_loss:.4f}')
step += 1
def _apply_sigmoid_to_occupancy_logits(
pred_waypoint_logits: occupancy_flow_grids.WaypointGrids
) -> occupancy_flow_grids.WaypointGrids:
Converts occupancy logits to probabilities.
pred_waypoints = occupancy_flow_grids.WaypointGrids()
pred_waypoints.vehicles.observed_occupancy = [
tf.sigmoid(x) for x in pred_waypoint_logits.vehicles.observed_occupancy
]
pred_waypoints.vehicles.occluded_occupancy = [
tf.sigmoid(x) for x in pred_waypoint_logits.vehicles.occluded_occupancy
]
pred_waypoints.vehicles.flow = pred_waypoint_logits.vehicles.flow
return pred_waypoints
# [batch_size, grid_height_cells, grid_width_cells, 23]
model_inputs = _make_model_inputs(timestep_grids, vis_grids)
# [batch_size, grid_height_cells, grid_width_cells, 32]
model_outputs = model(model_inputs)
pred_waypoint_logits = _get_pred_waypoint_logits(model_outputs)
pred_waypoints = _apply_sigmoid_to_occupancy_logits(pred_waypoint_logits)
images = []
for k in range(config.num_waypoints):
observed_occupancy_grids = pred_waypoints.get_observed_occupancy_at_waypoint(
k)
observed_occupancy_rgb = occupancy_flow_vis.occupancy_rgb_image(
agent_grids=observed_occupancy_grids,
roadgraph_image=vis_grids.roadgraph,
gamma=1.6,
)
images.append(observed_occupancy_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
images = []
for k in range(config.num_waypoints):
occluded_occupancy_grids = pred_waypoints.get_occluded_occupancy_at_waypoint(
k)
occluded_occupancy_rgb = occupancy_flow_vis.occupancy_rgb_image(
agent_grids=occluded_occupancy_grids,
roadgraph_image=vis_grids.roadgraph,
gamma=1.6,
)
  images.append(occluded_occupancy_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
images = []
for k in range(config.num_waypoints):
flow_rgb = occupancy_flow_vis.flow_rgb_image(
flow=pred_waypoints.vehicles.flow[k],
roadgraph_image=vis_grids.roadgraph,
agent_trails=vis_grids.agent_trails,
)
images.append(flow_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
images = []
for k in range(config.num_waypoints):
observed_occupancy_grids = pred_waypoints.get_observed_occupancy_at_waypoint(
k)
occupancy = observed_occupancy_grids.vehicles
flow = pred_waypoints.vehicles.flow[k]
occupancy_flow = occupancy * flow
flow_rgb = occupancy_flow_vis.flow_rgb_image(
flow=occupancy_flow,
roadgraph_image=vis_grids.roadgraph,
agent_trails=vis_grids.agent_trails,
)
images.append(flow_rgb[0])
anim = create_animation(images, interval=200)
HTML(anim.to_html5_video())
metrics = occupancy_flow_metrics.compute_occupancy_flow_metrics(
config=config,
true_waypoints=true_waypoints,
pred_waypoints=pred_waypoints,
)
print('Metrics:')
print(metrics)
def _make_submission_proto(
) -> occupancy_flow_submission_pb2.ChallengeSubmission:
Makes a submission proto to store predictions for one shard.
submission = occupancy_flow_submission_pb2.ChallengeSubmission()
submission.account_name = 'me@gmail.com'
submission.unique_method_name = 'My method'
submission.authors.extend(['Author 1', 'Author 2', 'Author 3'])
submission.description = 'Description of my method'
submission.method_link = 'http://example.com/'
return submission
test_shard_paths = tf.io.gfile.glob(TEST_FILES)
print('All test shards:')
print('\n'.join(test_shard_paths))
with tf.io.gfile.GFile(TEST_SCENARIO_IDS_FILE) as f:
test_scenario_ids = f.readlines()
test_scenario_ids = [id.rstrip() for id in test_scenario_ids]
print('Got', len(test_scenario_ids), 'test scenario ids.')
def _make_test_dataset(test_shard_path: str) -> tf.data.Dataset:
Makes a dataset for one shard in the test set.
test_dataset = tf.data.TFRecordDataset(test_shard_path)
test_dataset = test_dataset.map(occupancy_flow_data.parse_tf_example)
test_dataset = test_dataset.batch(1)
return test_dataset
def _add_waypoints_to_scenario_prediction(
pred_waypoints: occupancy_flow_grids.WaypointGrids,
scenario_prediction: occupancy_flow_submission_pb2.ScenarioPrediction,
config: occupancy_flow_metrics_pb2.OccupancyFlowTaskConfig,
) -> None:
Add predictions for all waypoints to scenario_prediction message.
for k in range(config.num_waypoints):
waypoint_message = scenario_prediction.waypoints.add()
# Observed occupancy.
obs_occupancy = pred_waypoints.vehicles.observed_occupancy[k].numpy()
obs_occupancy_quantized = np.round(obs_occupancy * 255).astype(np.uint8)
obs_occupancy_bytes = zlib.compress(obs_occupancy_quantized.tobytes())
waypoint_message.observed_vehicles_occupancy = obs_occupancy_bytes
# Occluded occupancy.
occ_occupancy = pred_waypoints.vehicles.occluded_occupancy[k].numpy()
occ_occupancy_quantized = np.round(occ_occupancy * 255).astype(np.uint8)
occ_occupancy_bytes = zlib.compress(occ_occupancy_quantized.tobytes())
waypoint_message.occluded_vehicles_occupancy = occ_occupancy_bytes
# Flow.
flow = pred_waypoints.vehicles.flow[k].numpy()
flow_quantized = np.clip(np.round(flow), -128, 127).astype(np.int8)
flow_bytes = zlib.compress(flow_quantized.tobytes())
waypoint_message.all_vehicles_flow = flow_bytes
def _generate_predictions_for_one_test_shard(
submission: occupancy_flow_submission_pb2.ChallengeSubmission,
test_dataset: tf.data.Dataset,
test_scenario_ids: Sequence[str],
shard_message: str,
) -> None:
Iterate over all test examples in one shard and generate predictions.
for i, inputs in enumerate(test_dataset):
if inputs['scenario/id'] in test_scenario_ids:
print(f'Processing test shard {shard_message}, example {i}...')
# Run inference.
pred_waypoint_logits = _run_model_on_inputs(inputs=inputs, training=False)
pred_waypoints = _apply_sigmoid_to_occupancy_logits(pred_waypoint_logits)
# Make new scenario prediction message.
scenario_prediction = submission.scenario_predictions.add()
scenario_prediction.scenario_id = inputs['scenario/id'].numpy()[0]
# Add all waypoints.
_add_waypoints_to_scenario_prediction(
pred_waypoints=pred_waypoints,
scenario_prediction=scenario_prediction,
config=config)
def _save_submission_to_file(
submission: occupancy_flow_submission_pb2.ChallengeSubmission,
test_shard_path: str,
) -> None:
Save predictions for one test shard as a binary protobuf.
save_folder = os.path.join(pathlib.Path.home(),
'occupancy_flow_challenge/testing')
os.makedirs(save_folder, exist_ok=True)
basename = os.path.basename(test_shard_path)
if 'testing_tfexample.tfrecord' not in basename:
raise ValueError('Cannot determine file path for saving submission.')
submission_basename = basename.replace('testing_tfexample.tfrecord',
'occupancy_flow_submission.binproto')
submission_shard_file_path = os.path.join(save_folder, submission_basename)
num_scenario_predictions = len(submission.scenario_predictions)
print(f'Saving {num_scenario_predictions} scenario predictions to '
f'{submission_shard_file_path}...\n')
f = open(submission_shard_file_path, 'wb')
f.write(submission.SerializeToString())
f.close()
for i, test_shard_path in enumerate(test_shard_paths):
print(f'Creating submission for test shard {test_shard_path}...')
test_dataset = _make_test_dataset(test_shard_path=test_shard_path)
submission = _make_submission_proto()
_generate_predictions_for_one_test_shard(
submission=submission,
test_dataset=test_dataset,
test_scenario_ids=test_scenario_ids,
shard_message=f'{i + 1} of {len(test_shard_paths)}')
_save_submission_to_file(
submission=submission, test_shard_path=test_shard_path)
if i == 0:
print('Sample scenario prediction:\n')
print(submission.scenario_predictions[-1])
!tar czvf ~/occupancy_flow_challenge/submit_testing.tar.gz -C ~/occupancy_flow_challenge/testing .
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data location
Step2: Create dataset
Step3: Load one example
Step10: Visualize TF Example
Step12: Display animation
Step14: Config
Step15: Occupancy flow ground truth
Step16: Visualize
Step20: Baseline model
Step25: Loss
Step27: Sample training loop
Step29: Sample inference
Step30: Visualize
Step31: Occluded occupancy
Step32: Flow
Step33: Joint occupancy-flow
Step34: Metrics
Step36: Generate submission
Step37: Test set shards
Step38: Test scenario IDs
Step40: Test dataset
Step43: Inference for one test set shard
Step45: Save to file
Step46: Run (slow)
Step47: Compress
|
1,974
|
<ASSISTANT_TASK:>
Python Code:
%%writefile ../../user_models/cylinder_Ascan_2D.in
#title: A-scan from a metal cylinder buried in a dielectric half-space
#domain: 0.240 0.210 0.002
#dx_dy_dz: 0.002 0.002 0.002
#time_window: 3e-9
#material: 6 0 1 0 half_space
#waveform: ricker 1 1.5e9 my_ricker
#hertzian_dipole: z 0.100 0.170 0 my_ricker
#rx: 0.140 0.170 0
#box: 0 0 0 0.240 0.170 0.002 half_space
#cylinder: 0.120 0.080 0 0.120 0.080 0.002 0.010 pec
#geometry_view: 0 0 0 0.240 0.210 0.002 0.002 0.002 0.002 cylinder_half_space n
%matplotlib inline
from gprMax.waveforms import Waveform
from tools.plot_source_wave import check_timewindow, mpl_plot
w = Waveform()
w.type = 'ricker'
w.amp = 1
w.freq = 1.5e9
timewindow = 3e-9
dt = 1.926e-12
timewindow, iterations = check_timewindow(timewindow, dt)
plt = mpl_plot(w, timewindow, dt, iterations, fft=True)
from math import sqrt
# Speed of light in vacuum (m/s)
c = 299792458
# Highest relative permittivity present in model
er = 6
# Maximum frequency present in model
fmax = 4e9
# Minimum wavelength
wmin = c / (fmax * sqrt(er))
# Maximum spatial resolution
dmin = wmin / 10
print('Minimum wavelength: {:g} m'.format(wmin))
print('Maximum spatial resolution: {:g} m'.format(dmin))
d = 0.090
t = (2 * d) / (c / sqrt(6))
print('Minimum time window: {:g} s'.format(t))
import os
from gprMax.gprMax import api
filename = os.path.join(os.pardir, os.pardir, 'user_models', 'cylinder_Ascan_2D.in')
api(filename, n=1, geometry_only=False)
%matplotlib inline
import os
from gprMax.receivers import Rx
from tools.plot_Ascan import mpl_plot
filename = os.path.join(os.pardir, os.pardir, 'user_models', 'cylinder_Ascan_2D.out')
outputs = Rx.defaultoutputs
#outputs = ['Ez']
plt = mpl_plot(filename, outputs, fft=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Geometry of a metal cylinder buried in a dielectric half-space
Step2: By examining the spectrum of a Ricker waveform it is evident that much higher frequencies are present, i.e. at a level of -40 dB from the centre frequency, frequencies 2-3 times as high are still present. In this case the highest significant frequency present in the model is likely to be around 4 GHz. To calculate the wavelength at 4 GHz in the half-space (which has the lowest velocity), use the relation shown in the worked calculation after this step list
Step3: This would give a minimum spatial resolution of 3 mm. However, the diameter of the cylinder is 20 mm, so it would be resolved to only 7 cells. Therefore a better choice would be 2 mm, which resolves the diameter of the rebar to 10 cells.
Step4: This is the minimum time required, but the source waveform has a width of 1.2 ns; to allow the entire source waveform to be reflected back to the receiver, an initial time window of 3 ns will be tested.
Step5: View the results
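A worked version of the calculation referenced in Step2, using the same values as the code in this entry (c = 299792458 m/s, f_max = 4 GHz, relative permittivity 6):

$$\lambda_{\min} = \frac{c}{f_{\max}\sqrt{\varepsilon_r}} = \frac{299792458}{4\times 10^{9}\,\sqrt{6}} \approx 0.031\ \text{m}, \qquad \Delta_{\min} = \frac{\lambda_{\min}}{10} \approx 3\ \text{mm}.$$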
|
1,975
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import textwrap
import os
import sys
import warnings
warnings.filterwarnings('ignore')
# special things
from pivottablejs import pivot_ui
from ipywidgets import FloatSlider, interactive, IntSlider
from scipy import interpolate
# sql
%load_ext sql_magic
import sqlalchemy
import sqlite3
from sqlalchemy import create_engine
sqlite_engine = create_engine('sqlite://')
# autoreload
%load_ext autoreload
%autoreload 1
# %aimport module_to_reload
# ehh...
# import bqplot.pyplot as plt
import ipyvolume as ipv
import altair as alt
from vega_datasets import data
import seaborn as sns
sns.set_context('poster', font_scale=1.3)
a = "hi"
b = np.array([1, 2, 4, 6])
print("hello world")
a = 4
report = "FAIL"
weight_categories = [ "vlow_weight", "low_weight",
"mid_weight", "high_weight",
"vhigh_weight",]
players['weightclass'] = pd.qcut(players['weight'],
len(weight_categories), weight_categories)
weight_categories = [
"vlow_weight",
"low_weight",
"mid_weight",
"high_weight",
"vhigh_weight",
]
players["weightclass"] = pd.qcut(
players["weight"], len(weight_categories), weight_categories
)
import time
time.sleep(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notebook Extensions
Step2: Snippets Menu
Step3: Python Markdown -- Maybe doesn't work right now for some reason?
Step4: Collapsible Headings
|
1,976
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
%matplotlib inline
loans = pd.read_csv('lending-club-data.csv')
loans.head(2)
loans.columns
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.drop('bad_loans', axis=1)
target = 'safe_loans'
loans = loans[features + [target]]
print loans.shape
categorical_variables = []
for feat_name, feat_type in zip(loans.columns, loans.dtypes):
if feat_type == object:
categorical_variables.append(feat_name)
for feature in categorical_variables:
loans_one_hot_encoded = pd.get_dummies(loans[feature],prefix=feature)
loans_one_hot_encoded.fillna(0)
#print loans_one_hot_encoded
loans = loans.drop(feature, axis=1)
for col in loans_one_hot_encoded.columns:
loans[col] = loans_one_hot_encoded[col]
print loans.head(2)
print loans.columns
with open('module-8-assignment-2-train-idx.json') as train_data_file:
train_idx = json.load(train_data_file)
with open('module-8-assignment-2-test-idx.json') as test_data_file:
test_idx = json.load(test_data_file)
print train_idx[:3]
print test_idx[:3]
print len(train_idx)
print len(test_idx)
train_data = loans.iloc[train_idx]
test_data = loans.iloc[test_idx]
print len(train_data.dtypes)
print len(loans.dtypes )
features = list(train_data.columns)
features.remove('safe_loans')
print list(train_data.columns)
print features
print len(features)
def intermediate_node_weighted_mistakes(labels_in_node, data_weights):
# Sum the weights of all entries with label +1
print 'labels_in_node: '+ str(labels_in_node)
print 'data_weights: '+str(data_weights)
print data_weights[labels_in_node == +1]
print np.array(data_weights[labels_in_node == +1])
print np.sum(np.array(data_weights[labels_in_node == +1]))
labels_in_node = np.array(labels_in_node)
data_weights = np.array(data_weights)
total_weight_positive = np.sum(data_weights[labels_in_node == +1])
# Weight of mistakes for predicting all -1's is equal to the sum above
### YOUR CODE HERE
# Sum the weights of all entries with label -1
### YOUR CODE HERE
print np.array(data_weights[labels_in_node == -1])
print np.sum(np.array(data_weights[labels_in_node == -1]))
total_weight_negative = np.sum(data_weights[labels_in_node == -1])
# Weight of mistakes for predicting all +1's is equal to the sum above
### YOUR CODE HERE
# Return the tuple (weight, class_label) representing the lower of the two weights
# class_label should be an integer of value +1 or -1.
# If the two weights are identical, return (weighted_mistakes_all_positive,+1)
### YOUR CODE HERE
#print "total_weight_positive: {}, total_weight_negative: {}".format(total_weight_positive, total_weight_negative)
if total_weight_positive >= total_weight_negative:
return (total_weight_negative, +1)
else:
return (total_weight_positive, -1)
example_labels = np.array([-1, -1, 1, 1, 1])
example_data_weights = np.array([1., 2., .5, 1., 1.])
if intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):
print 'Test passed!'
else:
print 'Test failed... try again!'
# If the data is identical in each feature, this function should return None
def best_splitting_feature(data, features, target, data_weights):
# These variables will keep track of the best feature and the corresponding error
best_feature = None
best_error = float('+inf')
num_points = float(len(data))
print "len(data_weights): {}".format(len(data_weights))
data['data_weights'] = data_weights
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
# The right split will have all data points where the feature value is 1
left_split = data[data[feature] == 0]
right_split = data[data[feature] == 1]
# print "len(left_split): {}, len(right_split): {}".format(len(left_split), len(right_split))
# Apply the same filtering to data_weights to create left_data_weights, right_data_weights
## YOUR CODE HERE
left_data_weights = left_split['data_weights']
right_data_weights = right_split['data_weights']
print "left_data_weights: {}, right_data_weights: {}".format(left_data_weights, right_data_weights)
print "len(left_data_weights): {}, len(right_data_weights): {}".format(len(left_data_weights), len(right_data_weights))
print "sum(left_data_weights): {}, sum(right_data_weights): {}".format(sum(left_data_weights), sum(right_data_weights))
# DIFFERENT HERE
# Calculate the weight of mistakes for left and right sides
## YOUR CODE HERE
#print "np.array type: {}".format(np.array(left_split[target]))
left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(np.array(left_split[target]), np.array(left_data_weights))
right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(np.array(right_split[target]), np.array(right_data_weights))
# DIFFERENT HERE
# Compute weighted error by computing
# ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]
## YOUR CODE HERE
error = (left_weighted_mistakes + right_weighted_mistakes) * 1. / sum(data_weights)
print "left_weighted_mistakes: {}, right_weighted_mistakes: {}".format(left_weighted_mistakes, right_weighted_mistakes)
print "left_weighted_mistakes + right_weighted_mistakes: {}, error: {}".format(left_weighted_mistakes + right_weighted_mistakes, error)
print "feature and error: "
print "feature: {}, error: {}".format(feature, error)
# If this is the best error we have found so far, store the feature and the error
if error < best_error:
#print "best_feature: {}, best_error: {}".format(feature, error)
best_feature = feature
best_error = error
#print "best_feature: {}, best_error: {}".format(best_feature, best_error)
# Return the best feature we found
return best_feature
example_data_weights = np.array(len(train_data)* [1.5])
#print "example_data_weights: {}".format(example_data_weights)
#print "train_data: \n {}, features: {}, target: {}, example_data_weights: {}".format(train_data, features, target, example_data_weights)
#print best_splitting_feature(train_data, features, target, example_data_weights)
if best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
def create_leaf(target_values, data_weights):
# Create a leaf node
leaf = {'splitting_feature' : None,
'is_leaf': True}
# Computed weight of mistakes.
weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)
# Store the predicted class (1 or -1) in leaf['prediction']
leaf['prediction'] = best_class
return leaf
def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
data['data_weights'] = data_weights
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1. Error is 0.
if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:
print "Stopping condition 1 reached."
return create_leaf(target_values, data_weights)
# Stopping condition 2. No more features.
if remaining_features == []:
print "Stopping condition 2 reached."
return create_leaf(target_values, data_weights)
# Additional stopping condition (limit tree depth)
if current_depth > max_depth:
print "Reached maximum depth. Stopping for now."
return create_leaf(target_values, data_weights)
# If all the datapoints are the same, splitting_feature will be None. Create a leaf
    splitting_feature = best_splitting_feature(data, features, target, data_weights)
    if splitting_feature is None:
        print "Stopping condition: no feature splits the remaining data. Creating a leaf."
        return create_leaf(target_values, data_weights)
    remaining_features.remove(splitting_feature)
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
left_data_weights = data_weights[data[splitting_feature] == 0]
right_data_weights = data_weights[data[splitting_feature] == 1]
left_data_weights = np.array(left_split['data_weights'])
right_data_weights = np.array(right_split['data_weights'])
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target], data_weights)
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target], data_weights)
# Repeat (recurse) on left and right subtrees
left_tree = weighted_decision_tree_create(
left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)
right_tree = weighted_decision_tree_create(
right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
example_data_weights = np.array([1.0 for i in range(len(train_data))])
small_data_decision_tree = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
if count_nodes(small_data_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found:', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there: 7'
small_data_decision_tree
def classify(tree, x, annotate = False):
# If the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# Split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x), axis=1)
# Once you've made the predictions, calculate the classification error
return (data[target] != np.array(prediction)).values.sum() / float(len(data))
evaluate_classification_error(small_data_decision_tree, test_data)
evaluate_classification_error(small_data_decision_tree, train_data)
# Assign weights
example_data_weights = np.array([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)
# Train a weighted decision tree model.
small_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
subset_20 = train_data.head(10).append(train_data.tail(10))
evaluate_classification_error(small_data_decision_tree_subset_20, subset_20)
evaluate_classification_error(small_data_decision_tree_subset_20, train_data)
# Assign weights
sth_example_data_weights = np.array([1.] * 10 + [1.] * 10)
# Train a weighted decision tree model.
sth_test_model = weighted_decision_tree_create(subset_20, features, target,
sth_example_data_weights, max_depth=2)
small_data_decision_tree_subset_20
sth_test_model
from math import log
from math import exp
def adaboost_with_tree_stumps(data, features, target, num_tree_stumps):
# start with unweighted data
alpha = np.array([1.]*len(data))
weights = []
tree_stumps = []
target_values = data[target]
for t in xrange(num_tree_stumps):
print '====================================================='
print 'Adaboost Iteration %d' % t
print '====================================================='
# Learn a weighted decision tree stump. Use max_depth=1
tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)
tree_stumps.append(tree_stump)
# Make predictions
predictions = data.apply(lambda x: classify(tree_stump, x), axis=1)
# Produce a Boolean array indicating whether
# each data point was correctly classified
is_correct = predictions == target_values
is_wrong = predictions != target_values
# Compute weighted error
# YOUR CODE HERE
weighted_error = np.sum(np.array(is_wrong) * alpha) * 1. / np.sum(alpha)
# Compute model coefficient using weighted error
# YOUR CODE HERE
weight = 1. / 2 * log((1 - weighted_error) * 1. / (weighted_error))
weights.append(weight)
# Adjust weights on data point
adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))
# Scale alpha by multiplying by adjustment
# Then normalize data points weights
## YOUR CODE HERE
alpha = alpha * np.array(adjustment)
alpha = alpha / np.sum(alpha)
return weights, tree_stumps
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)
def print_stump(tree):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('_')
print ' root'
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(tree_stumps[0])
print_stump(tree_stumps[1])
print stump_weights
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features,
target, num_tree_stumps=10)
def predict_adaboost(stump_weights, tree_stumps, data):
scores = np.array([0.]*len(data))
for i, tree_stump in enumerate(tree_stumps):
predictions = data.apply(lambda x: classify(tree_stump, x), axis=1)
# Accumulate predictions on scores array
# YOUR CODE HERE
scores = scores + stump_weights[i] * np.array(predictions)
# return the prediction
return np.array(1 * (scores > 0) + (-1) * (scores <= 0))
traindata_predictions = predict_adaboost(stump_weights, tree_stumps, train_data)
train_accuracy = np.sum(np.array(train_data[target]) == traindata_predictions) / float(len(traindata_predictions))
print 'training data Accuracy of 10-component ensemble = %s' % train_accuracy
predictions = predict_adaboost(stump_weights, tree_stumps, test_data)
accuracy = np.sum(np.array(test_data[target]) == predictions) / float(len(predictions))
print 'test data Accuracy of 10-component ensemble = %s' % accuracy
stump_weights
plt.plot(stump_weights)
plt.show()
# this may take a while...
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data,
features, target, num_tree_stumps=30)
error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)
error = np.sum(np.array(train_data[target]) != predictions) / float(len(predictions))
error_all.append(error)
print "Iteration %s, training error = %s" % (n, error_all[n-1])
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size': 16})
test_error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)
error = np.sum(np.array(test_data[target]) != predictions) / float(len(predictions))
test_error_all.append(error)
print "Iteration %s, test error = %s" % (n, test_error_all[n-1])
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.rcParams.update({'font.size': 16})
plt.legend(loc='best', prop={'size':15})
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data ready
Step2: Extracting the target and the feature columns
Step3: Transform categorical data into binary features
Step4: Let's see what the feature columns look like now
Step7: Weighted decision trees
Step8: Checkpoint
Step11: Recall that the classification error is defined as follows
Step12: Checkpoint
Step13: Note. If you get an exception along the lines of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Step15: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions
Step16: Here is a recursive function to count the nodes in your tree
Step17: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step18: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
Step19: Making predictions with a weighted decision tree
Step20: Evaluating the tree
Step21: Example
Step22: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
Step23: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data
Step24: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
Step25: Implementing your own Adaboost (on decision stumps)
Step26: Checking your Adaboost code
Step27: Here is what the first stump looks like
Step28: Here is what the next stump looks like
Step29: If your Adaboost is correctly implemented, the following things should be true
Step30: Making predictions
Step31: Now, let us take a quick look at what the stump_weights look like at the end of each iteration of the 10-stump ensemble
Step32: Quiz Question
Step33: Computing training error at the end of each iteration
Step34: Visualizing training error vs number of iterations
Step35: Quiz Question
Step36: Visualize both the training and test errors
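For quick reference, the stump coefficient and data-weight update implemented in adaboost_with_tree_stumps above behave as in the following sketch; the numbers are illustrative only:
from math import log, exp
for weighted_error in [0.1, 0.3, 0.5]:
    # model coefficient: 1/2 * ln((1 - err) / err)
    print(0.5 * log((1 - weighted_error) / weighted_error))   # ~1.10, ~0.42, 0.0
# a stump no better than a coin flip (err = 0.5) gets zero weight; data points are then
# rescaled by exp(-weight) if classified correctly and exp(weight) if misclassified,
# and the weights are renormalised to sum to one.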
|
1,977
|
<ASSISTANT_TASK:>
Python Code:
# First import the model. Here we use the SBM version
from wflow.wflow_sbm import *
import IPython
from IPython.display import display, clear_output
%pylab inline
#clear_output = IPython.core.display.clear_output
# Here we define a simple fictious reservoir
reservoirstorage = 15000
def simplereservoir(inputq,storage):
K = 0.087
storage = storage + inputq
outflow = storage * K
storage = storage - outflow
return outflow, storage
# define start and stop time of the run
startTime = 1
stopTime = 200
currentTime = 1
# set runid, cl;onemap and casename. Also define the ini file
runId = "reservoirtest_1"
#configfile="wflow_hbv_mem.ini"
configfile="wflow_sbm.ini"
wflow_cloneMap = 'wflow_subcatch.map'
# the casename points to the complete model setup with both static and dynamic input
caseName="../examples/wflow_rhine_sbm/"
#make a usermodel object
myModel = WflowModel(wflow_cloneMap, caseName,runId,configfile)
# initialise the framework
dynModelFw = wf_DynamicFramework(myModel, stopTime,startTime)
dynModelFw.createRunId(NoOverWrite=False,level=logging.ERROR)
dynModelFw.setQuiet(1)
# Run the initial part of the model (reads parameters and sets initial values)
dynModelFw._runInitial() # Runs initial part
dynModelFw._runResume() # gets the state variables from disk
# Get list of variables supplied by the model
#print dynModelFw.wf_supplyVariableNamesAndRoles()
# A pit can be set in the ldd by specifying the direction 5
# (see pcraster.eu for the ldd direction conventions)
ret = dynModelFw.wf_setValueLdd("TopoLdd",5.0,8.40943,49.6682)
report(myModel.TopoLdd,"n_ldd.map")
f, ax = plt.subplots(1,3,figsize=(14, 4))
plotar = []
plotarstorage = []
plotaroutflow = []
for ts in range(1,45):
# Add inflow to outflow downstream of the pit
# See the API setion of the INI file
# Get Q value at pit, the reservoir inflow
inflowQ = dynModelFw.wf_supplyScalar("SurfaceRunoff",8.40943,49.6682)
# save for plotting
plotar.append(inflowQ)
# Feed to the reservoir model
outflow, reservoirstorage = simplereservoir(inflowQ, reservoirstorage)
# save for plotting
plotarstorage.append(reservoirstorage)
plotaroutflow.append(outflow)
#dynModelFw._userModel().IF = cover(0.0)
dynModelFw.wf_setValue("IF", outflow ,8.40943,49.7085)
# update runoff ONLY NEEDED IF YOU FIDDLE WITH THE KIN_WAVE RESERVOIR
myModel.updateRunOff()
dynModelFw._runDynamic(ts,ts) # runs for this timesteps
# Now get some results for display
run = dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff")
uz = dynModelFw.wf_supplyMapAsNumpy("FirstZoneCapacity")
sm = dynModelFw.wf_supplyMapAsNumpy("UStoreDepth")
sm[sm == -999] = np.nan
uz[uz == -999] = np.nan
run[run == -999] = np.nan
ax[0].imshow(log(run))
ax[1].plot(plotarstorage,'k')
ax[1].set_title("Reservoir storage")
ax[2].plot(plotar,'b')
ax[2].plot(plotaroutflow,'r')
ax[2].set_title("Blue inflow, red outflow:" + str(ts))
clear_output()
display(f)
plt.close()
dynModelFw._runSuspend() # saves the state variables
dynModelFw._wf_shutdown()
imshow(dynModelFw.wf_supplyMapAsNumpy("SurfaceRunoff"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set model run-time parameters
Step2: Here we make a pit in the middle of the main river. This will be the inflow to the reservoir
Step3: Run for a number of timesteps
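For reference, the fictitious reservoir coupled to the model above is a simple linear reservoir; a standalone sketch of the same update rule (the inflow values below are made up):
K = 0.087                        # outflow coefficient, as in simplereservoir
storage = 15000.0
for q_in in [10.0, 12.0, 8.0]:   # made-up inflows
    storage = storage + q_in
    q_out = K * storage
    storage = storage - q_out
    print("outflow %.1f, storage %.1f" % (q_out, storage))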
|
1,978
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
x0 = np.random.choice(3, 10)
x0
encoder.fit(x0[:, np.newaxis])
X = encoder.transform(x0[:, np.newaxis]).toarray()
X
dfX = pd.DataFrame(X, columns=encoder.active_features_)
dfX
from sklearn.datasets import load_boston
boston = load_boston()
dfX0_boston = pd.DataFrame(boston.data, columns=boston.feature_names)
dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"])
import statsmodels.api as sm
dfX_boston = sm.add_constant(dfX0_boston)
df_boston = pd.concat([dfX_boston, dfy_boston], axis=1)
df_boston.tail()
dfX_boston.CHAS.plot()
dfX_boston.CHAS.unique()
model = sm.OLS(dfy_boston, dfX_boston)
result = model.fit()
print(result.summary())
params1 = result.params.drop("CHAS")
params1
params2 = params1.copy()
params2["const"] += result.params["CHAS"]
params2
df_boston.boxplot("MEDV", "CHAS")
plt.show()
sns.stripplot(x="CHAS", y="MEDV", data=df_boston, jitter=True, alpha=.3)
sns.pointplot(x="CHAS", y="MEDV", data=df_boston, dodge=True, color='r')
plt.show()
import statsmodels.api as sm
model = sm.OLS.from_formula("MEDV ~ C(CHAS)", data=df_boston)
result = model.fit()
table = sm.stats.anova_lm(result)
table
model1 = sm.OLS.from_formula("MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT", data=df_boston)
model2 = sm.OLS.from_formula("MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT + C(CHAS)", data=df_boston)
result1 = model1.fit()
result2 = model2.fit()
table = sm.stats.anova_lm(result1, result2)
table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dummy variables and model comparison
Step2: Model comparison using analysis of variance (ANOVA)
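A small sketch of the dummy-variable interpretation behind Step1 (it reuses the result object fitted above): with a 0/1 dummy CHAS the fitted model is MEDV = const + b_CHAS * CHAS + (other terms), so CHAS == 1 simply shifts the intercept, which is what params2["const"] += result.params["CHAS"] does in the code above.
b0 = result.params["const"]
b_chas = result.params["CHAS"]
print("intercept if CHAS == 0:", b0)
print("intercept if CHAS == 1:", b0 + b_chas)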
|
1,979
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('tmdb_5000_movies.csv.gz',
compression='gzip')
df.info()
df.head()
df = df[['title', 'tagline', 'overview', 'genres', 'popularity']]
df.tagline.fillna('', inplace=True)
df['description'] = df['tagline'].map(str) + ' ' + df['overview']
df.dropna(inplace=True)
df.info()
df.head()
import nltk
import re
import numpy as np
stop_words = nltk.corpus.stopwords.words('english')
def normalize_document(doc):
# lower case and remove special characters\whitespaces
doc = re.sub(r'[^a-zA-Z0-9\s]', '', doc, re.I|re.A)
doc = doc.lower()
doc = doc.strip()
# tokenize document
tokens = nltk.word_tokenize(doc)
# filter stopwords out of document
filtered_tokens = [token for token in tokens if token not in stop_words]
# re-create document from filtered tokens
doc = ' '.join(filtered_tokens)
return doc
normalize_corpus = np.vectorize(normalize_document)
norm_corpus = normalize_corpus(list(df['description']))
len(norm_corpus)
from sklearn.feature_extraction.text import CountVectorizer
stop_words = stop_words + ['one', 'two', 'get']
cv = CountVectorizer(ngram_range=(1, 2), min_df=10, max_df=0.8, stop_words=stop_words)
cv_matrix = cv.fit_transform(norm_corpus)
cv_matrix.shape
from sklearn.cluster import KMeans
NUM_CLUSTERS = 6
km = KMeans(n_clusters=NUM_CLUSTERS, max_iter=10000, n_init=50, random_state=42).fit(cv_matrix)
km
from collections import Counter
Counter(km.labels_)
df['kmeans_cluster'] = km.labels_
movie_clusters = (df[['title', 'kmeans_cluster', 'popularity']]
.sort_values(by=['kmeans_cluster', 'popularity'],
ascending=False)
.groupby('kmeans_cluster').head(20))
movie_clusters = movie_clusters.copy(deep=True)
feature_names = cv.get_feature_names()
topn_features = 15
ordered_centroids = km.cluster_centers_.argsort()[:, ::-1]
# get key features for each cluster
# get movies belonging to each cluster
for cluster_num in range(NUM_CLUSTERS):
key_features = [feature_names[index]
for index in ordered_centroids[cluster_num, :topn_features]]
movies = movie_clusters[movie_clusters['kmeans_cluster'] == cluster_num]['title'].values.tolist()
print('CLUSTER #'+str(cluster_num+1))
print('Key Features:', key_features)
print('Popular Movies:', movies)
print('-'*80)
from sklearn.metrics.pairwise import cosine_similarity
cosine_sim_features = cosine_similarity(cv_matrix)
km = KMeans(n_clusters=NUM_CLUSTERS, max_iter=10000, n_init=50, random_state=42).fit(cosine_sim_features)
Counter(km.labels_)
df['kmeans_cluster'] = km.labels_
movie_clusters = (df[['title', 'kmeans_cluster', 'popularity']]
.sort_values(by=['kmeans_cluster', 'popularity'],
ascending=False)
.groupby('kmeans_cluster').head(20))
movie_clusters = movie_clusters.copy(deep=True)
# get movies belonging to each cluster
for cluster_num in range(NUM_CLUSTERS):
movies = movie_clusters[movie_clusters['kmeans_cluster'] == cluster_num]['title'].values.tolist()
print('CLUSTER #'+str(cluster_num+1))
print('Popular Movies:', movies)
print('-'*80)
from sklearn.cluster import AffinityPropagation
ap = AffinityPropagation(max_iter=1000)
ap.fit(cosine_sim_features)
res = Counter(ap.labels_)
res.most_common(10)
df['affprop_cluster'] = ap.labels_
filtered_clusters = [item[0] for item in res.most_common(8)]
filtered_df = df[df['affprop_cluster'].isin(filtered_clusters)]
movie_clusters = (filtered_df[['title', 'affprop_cluster', 'popularity']]
.sort_values(by=['affprop_cluster', 'popularity'],
ascending=False)
.groupby('affprop_cluster').head(20))
movie_clusters = movie_clusters.copy(deep=True)
# get key features for each cluster
# get movies belonging to each cluster
for cluster_num in range(len(filtered_clusters)):
movies = movie_clusters[movie_clusters['affprop_cluster'] == filtered_clusters[cluster_num]]['title'].values.tolist()
print('CLUSTER #'+str(filtered_clusters[cluster_num]))
print('Popular Movies:', movies)
print('-'*80)
from scipy.cluster.hierarchy import ward, dendrogram
from sklearn.metrics.pairwise import cosine_similarity
def ward_hierarchical_clustering(feature_matrix):
cosine_distance = 1 - cosine_similarity(feature_matrix)
linkage_matrix = ward(cosine_distance)
return linkage_matrix
def plot_hierarchical_clusters(linkage_matrix, movie_data, p=100, figure_size=(8,12)):
# set size
fig, ax = plt.subplots(figsize=figure_size)
movie_titles = movie_data['title'].values.tolist()
# plot dendrogram
R = dendrogram(linkage_matrix, orientation="left", labels=movie_titles,
truncate_mode='lastp',
p=p,
no_plot=True)
temp = {R["leaves"][ii]: movie_titles[ii] for ii in range(len(R["leaves"]))}
def llf(xx):
return "{}".format(temp[xx])
ax = dendrogram(
linkage_matrix,
truncate_mode='lastp',
orientation="left",
p=p,
leaf_label_func=llf,
leaf_font_size=10.,
)
plt.tick_params(axis= 'x',
which='both',
bottom='off',
top='off',
labelbottom='off')
plt.tight_layout()
plt.savefig('movie_hierachical_clusters.png', dpi=200)
linkage_matrix = ward_hierarchical_clustering(cv_matrix)
plot_hierarchical_clusters(linkage_matrix,
p=100,
movie_data=df,
figure_size=(12, 14))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your Turn
Step2: Extract TF-IDF Features
Step3: Cluster Movies using K-Means
Step4: Affinity Propagation
Step5: Hierarchical Clustering
Step6: Calculate Linkage Matrix using Cosine Similarity
Step7: Plot Hierarchical Structure as a Dendrogram
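The notebook stops at the dendrogram; as an optional follow-on sketch (it assumes linkage_matrix and df from the code above, and the cluster count of 6 is an arbitrary choice), flat clusters can be cut from the same Ward tree:
from scipy.cluster.hierarchy import fcluster
ward_labels = fcluster(linkage_matrix, t=6, criterion='maxclust')  # cut into at most 6 flat clusters
df['ward_cluster'] = ward_labels
print(df.groupby('ward_cluster')['title'].count())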
|
1,980
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Our data is cleaned by cleaning utility code
df = pd.read_csv('Clean_Data_Adults_1.csv')
# Separate labels and Features
df_labels = df['Depressed']
df_feats = df.drop(['Depressed', 'Unnamed: 0'], axis=1, inplace=False)
X = df_feats.get_values() # features
y = df_labels.get_values() # labels
'''
Get rid of the negative values of the race_id columns
W.L.O.G., subtract the minimum negative from the entire column
'''
def clean_negs(X):
# Get indices of columns that contain negative values
neg_col_inds = np.unique(np.where(X<0)[1])
# Subtract minimum negative for each column
for neg_i in neg_col_inds:
neg_col = X[:, neg_i]
min_neg = np.min(neg_col)
new_col = [c - min_neg for c in neg_col]
X[:, neg_i] = new_col
return X
# Preprocess training features
X = clean_negs(X)
'''
Data Preparation
'''
from sklearn.cross_validation import train_test_split
# Split the simulated data into training set and test set
# Randomly sample 20% data as the test set
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2, random_state=42)
print 'Training set size is', train_X.shape
print 'Testing set size is', test_X.shape
max_n = len(train_X)
# Train the given classifier
def train_clf(clf, train_feats, train_labels):
# Supervised training
clf.fit(train_feats, train_labels)
# Test the given classifier anc calculate accuracy
def test_clf(clf, test_feats, test_labels):
# Predict using test set
predicted = clf.predict(test_feats)
# Compute accuracy
acc = np.mean(predicted == test_labels)
return predicted, acc
# Compute accuracy of a model trained with a specific number of samples
def compute_acc(clf, n):
train_clf(clf, train_X[:n], train_y[:n])
predict_y, acc = test_clf(clf, test_X, test_y)
return acc
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
[acc_NB, acc_LG, acc_KNN, acc_SVM, acc_RF] = [[] for i in xrange(5)]
for n in xrange(100, max_n, 100):
# Multinomial Naive Bayes
multiNB = MultinomialNB()
acc_NB.append(compute_acc(multiNB, n))
# Logistic Regression
lg = LogisticRegression(penalty='l1')
acc_LG.append(compute_acc(lg, n))
# K Nearest Neighbors
knn = KNeighborsClassifier(n_neighbors=10)
acc_KNN.append(compute_acc(knn, n))
# Support Vector Machine
svc = LinearSVC()
acc_SVM.append(compute_acc(svc, n))
# Random Forest
rf = RandomForestClassifier(n_estimators=60)
acc_RF.append(compute_acc(rf, n))
%matplotlib inline
import matplotlib.pyplot as plt
sizes = range(100, max_n, 100)
fig = plt.figure(1)
fig.set_size_inches(9, 6.5)
plt.plot(sizes, acc_NB, label='Multinomial NB')
plt.plot(sizes, acc_LG, label='Logistic Regression')
plt.plot(sizes, acc_KNN, label='K Nearest Neighbors')
plt.plot(sizes, acc_SVM, label='Support Vector Machine')
plt.plot(sizes, acc_RF, label='Random Forest')
plt.legend(loc='best')
plt.xlabel('sample size')
plt.ylabel('Simulation Accuracy')
'''
Train models with all the training data
Evaluate using the test data
'''
# Multinomial Naive Bayes
multiNB = MultinomialNB()
train_clf(multiNB, train_X, train_y)
predict_y, acc_nb = test_clf(multiNB, test_X, test_y)
# Logistic Regression
lg = LogisticRegression(penalty='l1')
train_clf(lg, train_X, train_y)
predict_y, acc_lg = test_clf(lg, test_X, test_y)
# K Nearest Neighbors
knn = KNeighborsClassifier(n_neighbors=10)
train_clf(knn, train_X, train_y)
predict_y, acc_knn = test_clf(knn, test_X, test_y)
# Support Vector Machine
svc = LinearSVC()
train_clf(svc, train_X, train_y)
predict_y, acc_svc = test_clf(svc, test_X, test_y)
# Random Forest
rf = RandomForestClassifier(n_estimators=60)
train_clf(rf, train_X, train_y)
predict_y, acc_rf = test_clf(rf, test_X, test_y)
print 'Multinomial Naive Bayes accuracy is', acc_nb
print 'Logistic Regression accuracy is', acc_lg
print 'K Nearest Neighbors accuracy is', acc_knn
print 'Support Vector Machine (Linear Kernel) accuracy is', acc_svc
print 'Random Forest accuracy is', acc_rf
# Visualize classifier performance
x = range(5)
y = [acc_nb, acc_lg, acc_knn, acc_svc, acc_rf]
clf_names = ['Multinomial Naive Bayes', 'Logistic Regression', \
'K Nearest Neighbors', 'Support Vector Machine', 'Random Forest']
width = 0.6/1.2
plt.bar(x, y, width)
plt.title('Classifier Performance')
plt.xticks(x, clf_names, rotation=25)
plt.ylabel('Accuracy')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. State assumptions about your data
Step2: 5. Compute accuracy
Step3: 6. Plot accuracy vs. sample size in simulation
Step4: 7. Apply method directly on real data
|
1,981
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import time
import fasttext
import numpy as np
import pandas as pd
# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,fasttext,scipy
# download the data and un-tar it under the 'data' folder
# -P or --directory-prefix specifies which directory to download the data to
!wget https://dl.fbaipublicfiles.com/fasttext/data/cooking.stackexchange.tar.gz -P data
# -C specifies the target directory to extract an archive to
!tar xvzf data/cooking.stackexchange.tar.gz -C data
!head -n 3 data/cooking.stackexchange.txt
# train/test split
import os
from fasttext_module.split import train_test_split_file
from fasttext_module.utils import prepend_file_name
data_dir = 'data'
test_size = 0.2
input_path = os.path.join(data_dir, 'cooking.stackexchange.txt')
input_path_train = prepend_file_name(input_path, 'train')
input_path_test = prepend_file_name(input_path, 'test')
random_state = 1234
encoding = 'utf-8'
train_test_split_file(input_path, input_path_train, input_path_test,
test_size, random_state, encoding)
print('train path: ', input_path_train)
print('test path: ', input_path_test)
# train the fasttext model
fasttext_params = {
'input': input_path_train,
'lr': 0.1,
'lrUpdateRate': 1000,
'thread': 8,
'epoch': 15,
'wordNgrams': 1,
'dim': 80,
'loss': 'ova'
}
model = fasttext.train_supervised(**fasttext_params)
print('vocab size: ', len(model.words))
print('label size: ', len(model.labels))
print('example vocab: ', model.words[:5])
print('example label: ', model.labels[:5])
# model.get_input_matrix().shape
print('output matrix shape: ', model.get_output_matrix().shape)
model.get_output_matrix()
# we'll get one of the labels to find its nearest neighbors
label_id = 0
print(model.labels[label_id])
index_factors = model.get_output_matrix()
query_factors = model.get_output_matrix()[label_id]
query_factors.shape
class Node:
Node for a navigable small world graph.
Parameters
----------
idx : int
For uniquely identifying a node.
value : 1d np.ndarray
To access the embedding associated with this node.
neighborhood : set
For storing adjacent nodes.
References
----------
https://book.pythontips.com/en/latest/__slots__magic.html
https://hynek.me/articles/hashes-and-equality/
__slots__ = ['idx', 'value', 'neighborhood']
def __init__(self, idx, value):
self.idx = idx
self.value = value
self.neighborhood = set()
def __hash__(self):
return hash(self.idx)
def __eq__(self, other):
return (
self.__class__ == other.__class__ and
self.idx == other.idx
)
from scipy.spatial import distance
def build_nsw_graph(index_factors, k):
n_nodes = index_factors.shape[0]
graph = []
for i, value in enumerate(index_factors):
node = Node(i, value)
graph.append(node)
for node in graph:
query_factor = node.value.reshape(1, -1)
# note that the following implementation is not the actual procedure that's
# used to find the k closest neighbors, we're just implementing a quick version,
# will come back to this later
# https://codereview.stackexchange.com/questions/55717/efficient-numpy-cosine-distance-calculation
# the smaller the cosine distance the more similar, thus the most
# similar item will be the first element after performing argsort
# since argsort by default sorts in ascending order
dist = distance.cdist(index_factors, query_factor, metric='cosine').ravel()
neighbors_indices = np.argsort(dist)[:k].tolist()
# insert bi-directional connection
node.neighborhood.update(neighbors_indices)
for i in neighbors_indices:
graph[i].neighborhood.add(node.idx)
return graph
k = 10
graph = build_nsw_graph(index_factors, k)
graph[0].neighborhood
import heapq
import random
from typing import List, Tuple
def nsw_knn_search(
graph: List[Node],
query: np.ndarray,
k: int=5,
m: int=50) -> Tuple[List[Tuple[float, int]], float]:
Performs knn search using the navigable small world graph.
Parameters
----------
graph :
Navigable small world graph from build_nsw_graph.
query : 1d np.ndarray
Query embedding that we wish to find the nearest neighbors.
k : int
Number of nearest neighbors returned.
m : int
The recall set will be chosen from m different entry points.
Returns
-------
The list of nearest neighbors (distance, index) tuple.
and the average number of hops that was made during the search.
result_queue = []
visited_set = set()
hops = 0
for _ in range(m):
# random entry point from all possible candidates
entry_node = random.randint(0, len(graph) - 1)
entry_dist = distance.cosine(query, graph[entry_node].value)
candidate_queue = []
heapq.heappush(candidate_queue, (entry_dist, entry_node))
temp_result_queue = []
while candidate_queue:
candidate_dist, candidate_idx = heapq.heappop(candidate_queue)
if len(result_queue) >= k:
# if candidate is further than the k-th element from the result,
# then we would break the repeat loop
current_k_dist, current_k_idx = heapq.nsmallest(k, result_queue)[-1]
if candidate_dist > current_k_dist:
break
for friend_node in graph[candidate_idx].neighborhood:
if friend_node not in visited_set:
visited_set.add(friend_node)
friend_dist = distance.cosine(query, graph[friend_node].value)
heapq.heappush(candidate_queue, (friend_dist, friend_node))
heapq.heappush(temp_result_queue, (friend_dist, friend_node))
hops += 1
result_queue = list(heapq.merge(result_queue, temp_result_queue))
return heapq.nsmallest(k, result_queue), hops / m
results = nsw_knn_search(graph, query_factors, k=5)
results
def build_nsw_graph(index_factors: np.ndarray, k: int) -> List[Node]:
n_nodes = index_factors.shape[0]
graph = []
for i, value in enumerate(index_factors):
node = Node(i, value)
if i > k:
neighbors, hops = nsw_knn_search(graph, node.value, k)
neighbors_indices = [node_idx for _, node_idx in neighbors]
else:
neighbors_indices = list(range(i))
# insert bi-directional connection
node.neighborhood.update(neighbors_indices)
for i in neighbors_indices:
graph[i].neighborhood.add(node.idx)
graph.append(node)
return graph
k = 10
index_factors = model.get_output_matrix()
graph = build_nsw_graph(index_factors, k)
graph[0].neighborhood
results = nsw_knn_search(graph, query_factors, k=5)
results
import hnswlib
def build_hnsw(factors, space, ef_construction, M):
# Declaring index
max_elements, dim = factors.shape
hnsw = hnswlib.Index(space, dim) # possible options for space are l2, cosine or ip
# Initing index - the maximum number of elements should be known beforehand
hnsw.init_index(max_elements, M, ef_construction)
# Element insertion (can be called several times)
hnsw.add_items(factors)
return hnsw
space = 'cosine'
ef_construction = 200
M = 24
start = time.time()
hnsw = build_hnsw(index_factors, space, ef_construction, M)
build_time = time.time() - start
build_time
k = 5
# Controlling the recall by setting ef, should always be > k
hnsw.set_ef(70)
# retrieve the top-n search neighbors
labels, distances = hnsw.knn_query(query_factors, k=k)
print(labels)
# find the nearest neighbors and "translate" it to the original labels
[model.labels[label] for label in labels[0]]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Approximate Nearest Neighborhood Search with Navigable Small World
Step2: Given the output matrix, we would like to compute each of its nearest neighbors using the compressed vectors.
Step4: Navigable Small World
Step6: In the original paper, the author used the term "friends" for vertices that share an edge, and "friend list" of vertex $v_i$ for the list of vertices that share a connection with the vertex $v_i$.
Step7: Now that we've implemented the knn search algorithm, we can go back and modify the graph building function and use it to implement the actual way of building the navigable small world graph.
Step8: Hnswlib
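Because the graph search is approximate, a quick sanity check is to measure its recall against an exact brute-force cosine search; a small sketch assuming graph, index_factors and nsw_knn_search from the code above:
import numpy as np
from scipy.spatial import distance

def recall_at_k(graph, index_factors, k=5, n_queries=20):
    hits = 0
    for idx in range(n_queries):
        query = index_factors[idx]
        # exact k nearest neighbors by brute force
        dist = distance.cdist(index_factors, query.reshape(1, -1), metric='cosine').ravel()
        exact = set(np.argsort(dist)[:k].tolist())
        # approximate neighbors returned by the navigable small world search
        approx, _ = nsw_knn_search(graph, query, k=k)
        hits += len({node_idx for _, node_idx in approx} & exact)
    return hits / (k * n_queries)

print(recall_at_k(graph, index_factors))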
|
1,982
|
<ASSISTANT_TASK:>
Python Code:
import math
a = math.sqrt(16.0)
b = math.ceil(111.3)
c = math.floor(89.9)
print(a, b, c)
import math
print(math.pi)
PI = math.pi
a = math.sqrt(PI)
b = math.ceil(PI)
c = math.floor(PI)
print(a, b, c)
import csv
f = open("nfl.csv")
csvreader = csv.reader(f)
nfl = list(csvreader)
print(nfl)
import csv
f = open("nfl.csv")
reader = csv.reader(f)
data = list(reader)
patriots_wins = 0
for row in data:
if row[2] == "New England Patriots":
patriots_wins += 1
print(patriots_wins)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Answer
Step2: 4
Step3: Answer
Step4: 5
Step5: 6
|
1,983
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
epochs = 20
batch_size = 128
display_progress = 40 # after this many batches, output progress to screen
wt_init = tf.contrib.layers.xavier_initializer() # weight initializer
# input layer:
n_input = 784
# first convolutional layer:
n_conv_1 = 32
k_conv_1 = 3 # k_size
# second convolutional layer:
n_conv_2 = 64
k_conv_2 = 3
# max pooling layer:
pool_size = 2
mp_layer_dropout = 0.25
# dense layer:
n_dense = 128
dense_layer_dropout = 0.5
# output layer:
n_classes = 10
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
# dense layer with ReLU activation:
def dense(x, W, b):
z = tf.add(tf.matmul(x, W), b)
a = tf.nn.relu(z)
return a
# convolutional layer with ReLU activation:
def conv2d(x, W, b, stride_length=1):
xW = tf.nn.conv2d(x, W, strides=[1, stride_length, stride_length, 1], padding='SAME')
z = tf.nn.bias_add(xW, b)
a = tf.nn.relu(z)
return a
# max-pooling layer:
def maxpooling2d(x, p_size):
return tf.nn.max_pool(x,
ksize=[1, p_size, p_size, 1],
strides=[1, p_size, p_size, 1],
padding='SAME')
def network(x, weights, biases, n_in, mp_psize, mp_dropout, dense_dropout):
# reshape linear MNIST pixel input into square image:
square_dimensions = int(np.sqrt(n_in))
square_x = tf.reshape(x, shape=[-1, square_dimensions, square_dimensions, 1])
# convolutional and max-pooling layers:
conv_1 = conv2d(square_x, weights['W_c1'], biases['b_c1'])
conv_2 = conv2d(conv_1, weights['W_c2'], biases['b_c2'])
pool_1 = maxpooling2d(conv_2, mp_psize)
pool_1 = tf.nn.dropout(pool_1, 1-mp_dropout)
# dense layer:
flat = tf.reshape(pool_1, [-1, weights['W_d1'].get_shape().as_list()[0]])
dense_1 = dense(flat, weights['W_d1'], biases['b_d1'])
dense_1 = tf.nn.dropout(dense_1, 1-dense_dropout)
# output layer:
out_layer_z = tf.add(tf.matmul(dense_1, weights['W_out']), biases['b_out'])
return out_layer_z
bias_dict = {
'b_c1': tf.Variable(tf.zeros([n_conv_1])),
'b_c2': tf.Variable(tf.zeros([n_conv_2])),
'b_d1': tf.Variable(tf.zeros([n_dense])),
'b_out': tf.Variable(tf.zeros([n_classes]))
}
# calculate number of inputs to dense layer:
full_square_length = np.sqrt(n_input)
pooled_square_length = int(full_square_length / pool_size)
dense_inputs = pooled_square_length**2 * n_conv_2
weight_dict = {
'W_c1': tf.get_variable('W_c1',
[k_conv_1, k_conv_1, 1, n_conv_1], initializer=wt_init),
'W_c2': tf.get_variable('W_c2',
[k_conv_2, k_conv_2, n_conv_1, n_conv_2], initializer=wt_init),
'W_d1': tf.get_variable('W_d1',
[dense_inputs, n_dense], initializer=wt_init),
'W_out': tf.get_variable('W_out',
[n_dense, n_classes], initializer=wt_init)
}
predictions = network(x, weight_dict, bias_dict, n_input,
pool_size, mp_layer_dropout, dense_layer_dropout)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy_pct = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100
initializer_op = tf.global_variables_initializer()
with tf.Session() as session:
session.run(initializer_op)
print("Training for", epochs, "epochs.")
# loop over epochs:
for epoch in range(epochs):
avg_cost = 0.0 # track cost to monitor performance during training
avg_accuracy_pct = 0.0
# loop over all batches of the epoch:
n_batches = int(mnist.train.num_examples / batch_size)
for i in range(n_batches):
# to reassure you something's happening!
if i % display_progress == 0:
print("Step ", i+1, " of ", n_batches, " in epoch ", epoch+1, ".", sep='')
batch_x, batch_y = mnist.train.next_batch(batch_size)
# feed batch data to run optimization and fetching cost and accuracy:
_, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct],
feed_dict={x: batch_x, y: batch_y})
# accumulate mean loss and accuracy over epoch:
avg_cost += batch_cost / n_batches
avg_accuracy_pct += batch_acc / n_batches
# output logs at end of each epoch of training:
print("Epoch ", '%03d' % (epoch+1),
": cost = ", '{:.3f}'.format(avg_cost),
", accuracy = ", '{:.2f}'.format(avg_accuracy_pct), "%",
sep='')
print("Training Complete. Testing Model.\n")
test_cost = cost.eval({x: mnist.test.images, y: mnist.test.labels})
test_accuracy_pct = accuracy_pct.eval({x: mnist.test.images, y: mnist.test.labels})
print("Test Cost:", '{:.3f}'.format(test_cost))
print("Test Accuracy: ", '{:.2f}'.format(test_accuracy_pct), "%", sep='')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Set neural network hyperparameters
Step3: Set parameters for each layer
Step4: Define placeholder Tensors for inputs and labels
Step5: Define types of layers
Step6: Design neural network architecture
Step7: Define dictionaries for storing weights and biases for each layer -- and initialize
Step8: Build model
Step9: Define model's loss and its optimizer
Step10: Define evaluation metrics
Step11: Create op for variable initialization
Step12: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress)
|
1,984
|
<ASSISTANT_TASK:>
Python Code:
# Setup a target configuration
conf = {
# Platform and board to target
"platform" : "linux",
"board" : "juno",
# Login credentials
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
# Local installation path
"tftp" : {
"folder" : "/var/lib/tftpboot",
"kernel" : "kern.bin",
"dtb" : "dtb.bin",
},
# Tools to deploy
"tools" : [ "rt-app", "taskset" ],
# RTApp calibration values (comment to let LISA do a calibration run)
"rtapp-calib" : {
"0": 358, "1": 138, "2": 138, "3": 357, "4": 359, "5": 355
},
# FTrace configuration
"ftrace" : {
"events" : [
"cpu_idle",
"sched_switch",
],
"buffsize" : 10240,
},
# Where results are collected
"results_dir" : "TestEnvExample",
# Tune which devlib module are required
#"exclude_modules" : [ "hwmon" ],
}
from env import TestEnv
# Initialize a test environment using the provided configuration
te = TestEnv(conf)
# The complete configuration of the target we have configured
print json.dumps(te.conf, indent=4)
# Last configured kernel and DTB image
print te.kernel
print te.dtb
# The IP and MAC address of the target
print te.ip
print te.mac
# A full platform descriptor
print json.dumps(te.platform, indent=4)
# A pre-created folder to host the tests results generated using this
# test environment, notice that the suite could add additional information
# in this folder, like for example a copy of the target configuration
# and other target specific collected information
te.res_dir
# The working directory on the target
te.workdir
# The target topology, which can be used to build BART assertions
te.topology
# Calibrate RT-App (if required) and get the most updated calibration value
te.calibration()
# Generate a JSON file with the complete platform description
te.platform_dump(dest_dir='/tmp')
# Force a reboot of the target (and wait specified [s] before reconnect)
te.reboot(reboot_time=60, ping_time=15)
# Resolve a MAC address into an IP address
te.resolv_host(host='00:02:F7:00:5A:5B')
# Copy the specified file into the TFTP server folder defined by configuration
te.tftp_deploy('/etc/group')
# Run a command on the target
te.target.execute("echo -n 'Hello Test Environment'", as_root=False)
# Spawn a command in background on the target
te.target.kick_off("sleep 10", as_root=True)
# Acces to many target specific information
print "ABI : ", te.target.abi
print "big Core Family : ", te.target.big_core
print "LITTLE Core Family : ", te.target.little_core
print "CPU's Clusters IDs : ", te.target.core_clusters
print "CPUs type : ", te.target.core_names
# Access to big.LITTLE specific information
print "big CPUs IDs : ", te.target.bl.bigs
print "LITTLE CPUs IDs : ", te.target.bl.littles
print "big CPUs freqs : {}".format(te.target.bl.get_bigs_frequency())
print "big CPUs governor : {}".format(te.target.bl.get_bigs_governor())
# Reset and sample energy counters
te.emeter.reset()
nrg = te.emeter.sample()
nrg = json.dumps(te.emeter.sample(), indent=4)
print "First read: ", nrg
time.sleep(2)
nrg = te.emeter.sample()
nrg = json.dumps(te.emeter.sample(), indent=4)
print "Second read: ", nrg
# Configure a specific set of events to trace
te.ftrace_conf(
{
"events" : [
"cpu_idle",
"cpu_capacity",
"cpu_frequency",
"sched_switch",
],
"buffsize" : 10240
}
)
# Start/Stop a FTrace session
te.ftrace.start()
te.target.execute("uname -a")
te.ftrace.stop()
# Collect and visualize the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
output = os.popen("DISPLAY=:0.0 kernelshark {}".format(trace_file))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attributes
Step2: Functions
Step3: A special TestEnv attribute is target, which represents a devlib instance.
Step4: Sample energy from the target
Step5: Configure FTrace for a specific experiment
|
1,985
|
<ASSISTANT_TASK:>
Python Code:
import quandl
data = quandl.get('NIKKEI/INDEX')
data[:5]
data_normal = (((data['Close Price']).to_frame())[-10000:-1])['Close Price']
data_normal[-10:-1] # show the 10 most recent data points
data_normal = data_normal.fillna(method='pad').resample('W-MON').fillna(method='pad')
data_normal[:5]
type(data_normal.index[0])
data_normal.index
import numpy as np
import pandas as pd
from scipy import stats
from pandas.core import datetools
# grapgh plotting
from matplotlib import pylab as plt
import seaborn as sns
%matplotlib inline
# settings graph size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15,6
# model
import statsmodels.api as sm
plt.plot(data_normal)
# ARIMA model prediction ... (This is self thought (not automatically))
diff = data_normal - data_normal.shift()
diff = diff.dropna()
diff.head()
# difference plot
plt.plot(diff)
# automatically ARIMA prediction function (using AIC)
resDiff = sm.tsa.arma_order_select_ic(diff, ic='aic', trend='nc')
# few Times ...(orz...)
resDiff
# search min
resDiff['aic_min_order']
# the best AR and MA orders were found automatically
from statsmodels.tsa.arima_model import ARIMA
ARIMAx_1_y = ARIMA(data_normal,
order=(resDiff['aic_min_order'][0], 1,
resDiff['aic_min_order'][1])).fit(dist=False)
# AR = resDiff[...][0] / I = 1 / MA = resDiff[...][1]
ARIMAx_1_y.params
# check Residual error (... I think this is "White noise")
# this is not Arima ... (Periodicity remained)
resid = ARIMAx_1_y.resid
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
# ok?
# We test SARIMA_model
# predict SARIMA model by myself (not automatically)
import statsmodels.api as sm
SARIMAx_1_y_111 = sm.tsa.SARIMAX(data_normal,
order=(2,1,2),seasonal_order=(1,1,1,12))
SARIMAx_1_y_111 = SARIMAx_1_y_111.fit()
# order ... from ARIMA model // seasonal_order ... 1 1 1 ... ?
print(SARIMAx_1_y_111.summary())
# maybe use "Box-Jenkins method" ...
# https://github.com/statsmodels/statsmodels/issues/3620 for error
# check Residual error
residSARIMA = SARIMAx_1_y_111.resid
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(residSARIMA.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(residSARIMA, lags=40, ax=ax2)
# prediction
pred = SARIMAx_1_y_111.predict(start = 1, end = '2018-01-15')
# (print(SARIMAx_1_y_111.__doc__))
# Ideally the prediction could extend beyond the index (i.e. into the future),
# but for some reason that raises an error, so we only predict over the existing data range.
# TODO: identify the cause of the error
# plot real data and predict data
plt.plot(data_normal[:-150:-1])
plt.plot(pred[:-150:-1], "r")
data_extra = pd.concat( [data_normal, pred[data_normal.index[-1] + 1:]] )
plt.plot(data_extra[:-150:-1])
# require
import quandl
import numpy as np
import pandas as pd
from scipy import stats
from pandas.core import datetools
import statsmodels.api as sm
def get_data(quandl_name):
data = quandl.get(quandl_name)
return data
def set_data(data):
data_normal = (((data['Close Price']).to_frame())[-10000:-1])['Close Price']
data_normal = data_normal.fillna(method='pad').resample('W-MON').fillna(method='pad')
return data_normal
def sarima(quandl_name):
data_normal = set_data(get_data(quandl_name))
diff = (data_normal - (data_normal.shift())).dropna()
resDiff = aic(diff)['aic_min_order']
ar = resDiff[0]
ma = resDiff[1]
SARIMAx_1_y_111 = \
sm.tsa.SARIMAX(data_normal, order=(int(ar),1, int(ma)),seasonal_order=(1,1,1,12))
return SARIMAx_1_y_111
def pred_data(SARIMAx_1_y_111, predict_date):
SARIMAx_1_y_111 = SARIMAx_1_y_111.fit()
print(SARIMAx_1_y_111.summary())
pred = SARIMAx_1_y_111.predict(start = 1, end = predict_date)
return pd.concat( [data_normal, pred[data_normal.index[-1] + 1:]] )
def aic (diff):
return (sm.tsa.arma_order_select_ic(diff, ic='aic', trend='nc'))
# Putting the functions above together
def predict_data(quandl_name, predict_date):
sarima_model = sarima(quandl_name)
    return pred_data(sarima_model, predict_date)
predict_res = predict_data('NIKKEI/INDEX','2018-01-15')
plt.plot(predict_res[:-150:-1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Because missing values were conspicuous, we obtain the data at a weekly frequency
Step2: How the data is used
Step3: The plot below suggests that forecasts based only on data from around the year 2000 onwards may already be sufficient.
Step4: As preparation for fitting an ARIMA model, we compute the first differences of the stock price.
Step5: We compute the AIC to evaluate model quality, although this takes a little time (about three minutes).
Step6: From the previous result, AR=2 and MA=2 give the best model.
Step7: For comparison, we first fit an ARIMA model rather than a SARIMA model.
Step8: The predictions show little fluctuation.
Step9: Now we fit a SARIMA model.
Step10: At a glance there is little difference from the ARIMA model, but judging from other papers this approach tends to forecast better.
Step11: Below, the predictions are concatenated with the observed series.
Step12: Blue is the observed data and red is the prediction; the predicted values appear to track the observations closely.
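To make Step12's closeness concrete, one way to quantify the in-sample fit is a simple MAPE; a small sketch assuming pred and data_normal from the cells above:
import numpy as np
common = pred.index.intersection(data_normal.index)
mape = np.mean(np.abs((data_normal[common] - pred[common]) / data_normal[common])) * 100
print('in-sample MAPE: {:.2f}%'.format(mape))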
|
1,986
|
<ASSISTANT_TASK:>
Python Code:
from examples.cfd import plot_field, init_hat
import numpy as np
%matplotlib inline
# Some variable declarations
nx = 50
ny = 50
nt = 100
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.
dx = (xmax - xmin) / (nx - 1)
dy = (ymax - ymin) / (ny - 1)
# Initialization
p = np.zeros((nx, ny))
pd = np.zeros((nx, ny))
b = np.zeros((nx, ny))
# Source
b[int(nx / 4), int(ny / 4)] = 100
b[int(3 * nx / 4), int(3 * ny / 4)] = -100
%%time
#NBVAL_IGNORE_OUTPUT
for it in range(nt):
pd = p.copy()
p[1:-1,1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
(pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
b[1:-1, 1:-1] * dx**2 * dy**2) /
(2 * (dx**2 + dy**2)))
p[0, :] = 0
p[nx-1, :] = 0
p[:, 0] = 0
p[:, ny-1] = 0
#NBVAL_IGNORE_OUTPUT
plot_field(p, xmax=xmax, ymax=ymax, view=(30, 225))
from devito import Grid, Function, TimeFunction, Operator, configuration, Eq, solve
# Silence the runtime performance logging
configuration['log-level'] = 'ERROR'
grid = Grid(shape=(nx, ny), extent=(xmax, ymax))
p = Function(name='p', grid=grid, space_order=2)
pd = Function(name='pd', grid=grid, space_order=2)
p.data[:] = 0.
pd.data[:] = 0.
# Initialise the source term `b`
b = Function(name='b', grid=grid)
b.data[:] = 0.
b.data[int(nx / 4), int(ny / 4)] = 100
b.data[int(3 * nx / 4), int(3 * ny / 4)] = -100
# Create Laplace equation base on `pd`
eq = Eq(pd.laplace, b, subdomain=grid.interior)
# Let SymPy solve for the central stencil point
stencil = solve(eq, pd)
# Now we let our stencil populate our second buffer `p`
eq_stencil = Eq(p, stencil)
# Create boundary condition expressions
x, y = grid.dimensions
t = grid.stepping_dim
bc = [Eq(p[x, 0], 0.)]
bc += [Eq(p[x, ny-1], 0.)]
bc += [Eq(p[0, y], 0.)]
bc += [Eq(p[nx-1, y], 0.)]
# Now we can build the operator that we need
op = Operator([eq_stencil] + bc)
%%time
#NBVAL_IGNORE_OUTPUT
# Run the outer loop explicitly in Python
for i in range(nt):
# Determine buffer order
if i % 2 == 0:
_p = p
_pd = pd
else:
_p = pd
_pd = p
# Apply operator
op(p=_p, pd=_pd)
#NBVAL_IGNORE_OUTPUT
# Plot result
plot_field(p.data, xmax=xmax, ymax=ymax, view=(30, 225))
# Now with Devito we will turn `p` into `TimeFunction` object
# to make all the buffer switching implicit
p = TimeFunction(name='p', grid=grid, space_order=2)
p.data[:] = 0.
# Initialise the source term `b`
b = Function(name='b', grid=grid)
b.data[:] = 0.
b.data[int(nx / 4), int(ny / 4)] = 100
b.data[int(3 * nx / 4), int(3 * ny / 4)] = -100
# Create Laplace equation base on `p`
eq = Eq(p.laplace, b)
# Let SymPy solve for the central stencil point
stencil = solve(eq, p)
# Let our stencil populate the buffer `p.forward`
eq_stencil = Eq(p.forward, stencil)
# Create boundary condition expressions
# Note that we now add an explicit "t + 1" for the time dimension.
bc = [Eq(p[t + 1, x, 0], 0.)]
bc += [Eq(p[t + 1, x, ny-1], 0.)]
bc += [Eq(p[t + 1, 0, y], 0.)]
bc += [Eq(p[t + 1, nx-1, y], 0.)]
#NBVAL_IGNORE_OUTPUT
configuration['log-level'] = 'ERROR'
# Create and execute the operator for a number of timesteps
op = Operator([eq_stencil] + bc)
%time op(time=nt)
#NBVAL_IGNORE_OUTPUT
plot_field(p.data[0], xmax=xmax, ymax=ymax, view=(30, 225))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can now pretty much use our previous implementation, although we will use pd instead of pn for consistency with the original. Our boundary conditions are even simpler since they are $0$ everywhere. Our source term is neatly wrapped in the symbol b and we can again use a Python-driven timestepping loop with switching buffers to repeatedly execute the operator we create.
Step2: Nice, we get the same spikes as the pure NumPy implementation. But we still drive the time loop through Python, which is slow. So let's try and utilise Devito's time-stepping mechanism that we saw earlier for this "pseudo-timestepping".
|
1,987
|
<ASSISTANT_TASK:>
Python Code:
# Imports.
from typing import List
import astropy.io.ascii
import astropy.table
import h5py
import numpy
import sklearn.linear_model
import sklearn.cross_validation
# Globals.
# This file stores the ATLAS-CDFS and SWIRE-CDFS catalogues.
CROWDASTRO_PATH = '../data/crowdastro_swire.h5'
# This file stores the training features and labels.
TRAINING_PATH = '../data/training_swire.h5'
# ATLAS catalogue.
ATLAS_CATALOGUE_PATH = '../data/ATLASDR3_cmpcat_23July2015.csv'
# Path to output catalogue to.
OUTPUT_PATH = '../data/crowdastro_catalogue.dat'
# Radius we should consider an object "nearby".
NEARBY = 1 / 60 # 1 arcmin in degrees.
# Size of an ATLAS image vector.
IMAGE_SIZE = 200 * 200
# Number of numeric features before the distance features.
ATLAS_DIST_IDX = 2 + IMAGE_SIZE
def find_host(probabilities: numpy.ndarray, atlas_id: int) -> int:
Finds the host galaxy associated with an ATLAS object.
Arguments
---------
probabilities
(N,) array of predicted probabilities of SWIRE objects.
atlas_id
ID of the ATLAS object to find the host of.
Returns
-------
int
ID of predicted host galaxy.
with h5py.File(CROWDASTRO_PATH, 'r') as cr, h5py.File(TRAINING_PATH, 'r') as tr:
# Get all nearby objects.
ir_distances = cr['/atlas/cdfs/numeric'][atlas_id, ATLAS_DIST_IDX:]
assert ir_distances.shape[0] == tr['features'].shape[0]
# Make a list of IDs of nearby objects.
nearby = sorted((ir_distances <= NEARBY).nonzero()[0])
# Find the best nearby candidate.
nearby_probabilities = probabilities[nearby]
# Select the highest probability object.
best_index = nearby_probabilities.argmax()
best_index = nearby[best_index] # Convert back into an IR index.
return best_index
def train_classifier(indices: List[int]) -> sklearn.linear_model.LogisticRegression:
Trains a classifier.
Arguments
---------
indices
List of infrared training indices.
Returns
-------
sklearn.linear_model.LogisticRegression
Trained logistic regression classifier.
with h5py.File(TRAINING_PATH, 'r') as tr:
features = numpy.nan_to_num(tr['features'].value[indices])
labels = tr['labels'].value[indices]
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced', penalty='l1')
lr.fit(features, labels)
return lr
def predict(classifier: sklearn.linear_model.LogisticRegression, indices: List[int]) -> numpy.ndarray:
Predicts probabilities for a set of IR objects.
Arguments
---------
classifier
Trained classifier.
indices
List of IR indices to predict probability of.
Returns
-------
numpy.ndarray
(N,) NumPy array of predicted probabilities.
with h5py.File(TRAINING_PATH, 'r') as tr:
features = numpy.nan_to_num(tr['features'].value[indices])
return classifier.predict_proba(features)[:, 1]
def train_and_predict(n_splits: int=10) -> numpy.ndarray:
Generates probabilities for IR objects.
Notes
-----
Instances will be split according to ATLAS index, not IR index. This is
because there is overlap in IR objects' features, so we need to make sure
that this overlap is not present in the testing data.
Arguments
---------
n_splits
Number of splits in cross-validation.
Returns
-------
numpy.ndarray
(N,) NumPy array of predictions.
with h5py.File(CROWDASTRO_PATH, 'r') as cr:
# Get the number of ATLAS IDs.
n_atlas = cr['/atlas/cdfs/numeric'].shape[0]
# Get the number of SWIRE IDs.
n_swire = cr['/swire/cdfs/numeric'].shape[0]
# Allocate the array of predicted probabilities.
probabilities = numpy.zeros((n_swire,))
# Split into training/testing sets.
kf = sklearn.cross_validation.KFold(n_atlas, n_folds=n_splits)
# Train and predict.
for train_indices, test_indices in kf:
nearby_train = (cr['/atlas/cdfs/numeric'].value[train_indices, ATLAS_DIST_IDX:]
<= NEARBY).nonzero()[0]
nearby_test = (cr['/atlas/cdfs/numeric'].value[test_indices, ATLAS_DIST_IDX:]
<= NEARBY).nonzero()[0]
classifier = train_classifier(nearby_train)
fold_probs = predict(classifier, nearby_test)
probabilities[nearby_test] = fold_probs
return probabilities
probabilities = train_and_predict()
with h5py.File(CROWDASTRO_PATH, 'r') as cr:
n_atlas = cr['/atlas/cdfs/numeric'].shape[0]
hosts = [find_host(probabilities, i) for i in range(n_atlas)]
# First, we need to get a list of the ATLAS and SWIRE object names.
with h5py.File(CROWDASTRO_PATH, 'r') as cr:
atlas_ids = cr['/atlas/cdfs/string'].value
atlas_locs = cr['/atlas/cdfs/numeric'][:, :2]
# Convert ATLAS IDs into names.
atlas_catalogue = astropy.io.ascii.read(ATLAS_CATALOGUE_PATH)
id_to_name = {r['ID']: r['name'] for r in atlas_catalogue}
atlas_names = [id_to_name[id_.decode('ascii')] for zooniverse_id, id_ in atlas_ids]
swire_names = [n.decode('ascii') for n in cr['/swire/cdfs/string']]
swire_locs = cr['/swire/cdfs/numeric'][:, :2]
# Now we can generate the catalogue.
names = ('radio_object', 'infrared_host', 'ra', 'dec')
table = astropy.table.Table(names=names, dtype=('S50', 'S50', 'float', 'float'))
for atlas_index, atlas_name in enumerate(atlas_names):
host = hosts[atlas_index]
swire_name = swire_names[host]
ra, dec = swire_locs[host]
table.add_row((atlas_name, swire_name, ra, dec))
astropy.io.ascii.write(table=table, output=OUTPUT_PATH)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Crowdastro ATLAS-CDFS Catalogue
Step5: Making predictions
Step6: Generating the catalogue
|
1,988
|
<ASSISTANT_TASK:>
Python Code:
%run "../src/start_session.py"
%run "../src/recurrences.py"
c = IndexedBase('c')
checks_recurrence_spec=recurrence_spec(recurrence_eq=Eq(c[n]/(n+1), 2/(n+1) + c[n-1]/n),
recurrence_symbol=c,
variables=[n])
checks_recurrence_spec
unfolded = checks_recurrence_spec.unfold(depth=4)
unfolded
unfolded.factor(c[n-5])
num = (10*n**5 - 80*n**4 + 190*n**3 - 100*n**2 -92*n +48)
den = expand(n*(n-4)*(n-3)*(n-2)*(n-1))
one = simplify(num/den)
one
num = n**5 -5*n**4 + 5*n**3 + 5*n**2 -6*n
another = simplify(num/den)
another
Eq(unfolded.recurrence_eq.lhs, one + another*c[n-5])
unfolded.instantiate(strategy=raw(substitutions={n:8}))
instantiated_spec = unfolded.instantiate(strategy=based(arity=unary_indexed()))
instantiated_spec
instantiated_spec.subsume(additional_terms={c[0]:Integer(0)})
ipython_latex_description(checks_recurrence_spec, depths=range(10), arity=unary_indexed())
n = symbols('n')
denom, first, second, third = n*(n-1), (n*(n+1)**2)/2, (n*(n+1)*(2*n+1))/6, n**2
expr_to_prove = (first-second-third)/denom
Eq(expr_to_prove, factor(expr_to_prove))
s = IndexedBase('s')
swaps_recurrence_spec=recurrence_spec(recurrence_eq=Eq(s[n]/(n+1),s[n-1]/n + (2*n-3)/(6*n*(n+1))),
recurrence_symbol=s,
variables=[n])
swaps_recurrence_spec
unfolded = swaps_recurrence_spec.unfold(depth=4)
unfolded
instantiated = unfolded.instantiate(strategy=based(arity=unary_indexed()))
instantiated
instantiated.subsume(additional_terms={})
ipython_latex_description(swaps_recurrence_spec, depths=range(10), arity=unary_indexed())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: The function do_unfolding_steps allows us to perform unfolding or unrolling on occurrences of the inductively defined symbol. Doing $n$ steps of unfolding is the same as using the main recurrence defined above as a rewriting rule for occurrences, on the rhs, that pattern-match the one on the lhs.
Step3: Description and factorization
Step4: Playground
Step5: If desired, we can base the previous equation such that a new equation is produced containing the very first term of the sequence, in our case $c_{0}$.
Step6: Using the previous idea, we find a value for $n$ that causes the very first item to appear, and then we can substitute it in the entire specification.
Step7: If we attach a value to $c_{0}$ and we put it in the term cache, then it is possible to fully instantiate the spec.
Step8: Moreover, if we want to skip all the little steps and perform a batch study for a variable number of steps, we provide a higher-order operator ipython_latex, which produces a nice representation for a set of unfoldings
Step9: On the number of swaps, the average case
Step10: $\blacksquare$
|
1,989
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import tensorflow as tf
CONTENT_FILE = '/home/ishiyama/image_style_transfer/image/input/test_input_01.JPG'
STYLE_FILE = '/home/ishiyama/image_style_transfer/image/style/test_style_01.jpg'
class Image(np.ndarray):
    A numpy.ndarray subclass for handling images.
    XXX: the implementation is involved, so it is on hold for now.
    Goal: expose the image shape information as attributes.
DATA_FORMAT_CHAR = {
'BATCH': 'N',
'HEIGHT': 'H',
'WIDTH': 'W',
'CHANNEL': 'C'
}
def __new__(subtype,
shape,
dtype=float,
buffer=None,
offset=0,
strides=None,
order=None,
data_format='NHWC'):
super(__class__, self).__new__(subtype, shape, dtype, buffer, offset, strides, order)
self.data_format = data_format
num_batch, num_height, num_width, num_channel = self._get_image_shape(data_format=data_format)
self.num_batch = num_batch
self.num_height = num_height
self.num_width = num_width
self.num_channel = num_channel
def _get_image_shape(self, data_format):
_image_shape = self.shape
idx_batch = self._get_index_data_format(data_format=data_format, data_type='BATCH'),
idx_height = self._get_index_data_format(data_format=data_format, data_type='HEIGHT')
idx_width = self._get_index_data_format(data_format=data_format, data_type='WIDTH')
idx_channel = self._get_index_data_format(data_format=data_format, data_type='CHANNEL')
reordered_image_shape = (_image_shape[idx_batch],
_image_shape[idx_height],
_image_shape[idx_width],
_image_shape[idx_channel])
return reordered_image_shape
def _get_index_data_format(self, data_format, data_type):
idx = data_format.find(__class__.DATA_FORMAT_CHAR[data_type])
if idx == -1:
raise ValueError('data type "{}" is not available.'.format(data_type))
return idx
@classmethod
def reshape(self, *args):
self = __class__(super(__class__, self).reshape(args))
def read_images_as_jpeg(file):
JPEG Image Reader
This function reads the content and style images as JPEG format.
    These image data must be square for now; support for different
    heights and widths may be added in the future.
Args:
file : str. A path of the image file.
Returns:
        numpy.ndarray. The decoded image reshaped to (1, height, width, channels).
filename_queue = tf.train.string_input_producer([file])
reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)
image = tf.image.decode_jpeg(value)
# Read image is a Tensor object which tf.nn.conv2d cannot handle,
# so convert it to numpy.ndarray.
sess = tf.Session()
sess.run(tf.global_variables_initializer())
tf.train.start_queue_runners(sess)
# Returned array's shape will be (height, width, channel).
image_array_hwc = sess.run(image)
new_shape = [1]
new_shape.extend(image_array_hwc.shape)
# return image_array_chw
return image_array_hwc.reshape(new_shape)
image = read_images_as_jpeg(file=CONTENT_FILE)
image.shape
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
def calculate_convolutional_layer(x, filter_height, filter_width, output_channels):
    Executing a convolutional layer task.
Args:
x : An image data.
filter_height (int) : A height of each filters.
filter_width (int) : A width of each filters.
output_channels (int) : A number of channels which must be outputed.
Returns:
An activation of an convolutional layer.
if ((not isinstance(filter_height, int))
or (not isinstance(filter_width, int))
or (not isinstance(output_channels, int))):
raise TypeError('"filter_height" and "filter_width" and "output_channels" '
'must be integer.')
    # TODO: make the input image's height, width, and channel count retrievable as attributes
    # e.g. input_channels = x.num_channels
input_channels = int(x.shape[-1])
W = tf.Variable(
tf.truncated_normal(
shape=[filter_height,
filter_width,
input_channels,
output_channels],
stddev=0.1
)
)
h = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
b = tf.Variable(tf.constant(0.1, shape=[output_channels]))
convoluted_image = tf.nn.relu(h + b)
return convoluted_image
x = tf.placeholder(tf.float32, [1, 1200, 1600, 3])
test_model = calculate_convolutional_layer(
x=x,
filter_height=3,
filter_width=3,
output_channels=1
)
sess = tf.InteractiveSession()
# With tf.Session(), every element of the matrix returned by sess.run was 0.
# TODO: investigate the difference between the Session and InteractiveSession classes
# sess = tf.Session()
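# Added hedged note (assumption, not from the original notebook): the documented
# difference is that tf.InteractiveSession installs itself as the default session,
# so Tensor.eval()/Operation.run() work without passing a session explicitly; for
# explicit sess.run() calls the two classes should behave the same, so the all-zero
# result mentioned above was more likely an initialization or graph-ordering issue.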
sess.run(tf.global_variables_initializer())
test_result = sess.run(test_model, feed_dict={x: image})
test_result.shape
test_result
def calculate_max_pooling_layer(x, ksize, strides):
Wrapper function of tf.nn.max_pool.
Args:
x : A Tensor produced by calculate_convolutional_layer.
ksize : A list of ints that has length >= 4. The size of
the window for each dimension of the input tensor.
strides : A list of ints that has length >= 4. The stride
of the sliding window for each dimension of the
input tensor.
Returns:
A pooled image.
pooled_image = tf.nn.max_pool(x, ksize=ksize, strides=strides, padding='SAME')
return pooled_image
class ConvNetProgressHolder(object):
Holder of convoluted images and pooled image.
This class is used like the struct of C language.
This has no methods.
Attributes:
input_data (Tensor) : An image that is applied to convolution and pooling.
conv (list) : The list of convoluted images, each images are Tensor objects.
pool (Tensor) : A image that is pooled after convolutional layer.
def __init__(self):
self.input_data = None
self.conv = []
self.pool = None
# FILTER_CONF = {
# 'height': 3,
# 'width': 3,
# 'channels': 1,
# 'num': 1
# }
FILTER_CONF = {
'height': 3,
'width': 3
}
def apply_vgg_network_unit(x, channels, num_conv):
Apply VGG Network From a Convolutional Layer to Max Pooling Layer.
    Table 1 of Simonyan and Zisserman (2015) is separated into 5 parts;
    each part runs from an input (or the pooled output of the previous part)
    up to a max-pooling layer.
    This function applies one such part, and it can be applied repeatedly.
Args:
x (Tensor) : An input data or A Max pooled data returned by this function.
channels (int) : A number of channels described at Table 1 of
Simonyan and Zisserman(2015).
        num_conv (int) : The number of convolutional layers to apply.
See Simonyan and Zisserman(2015) for detail.
Returns:
A ConvNetProgressHolder object.
if num_conv < 2:
raise ValueError('num_conv must be >= 2.')
conv_holder = ConvNetProgressHolder()
conv_holder.input_data = x
conv = calculate_convolutional_layer(
x=conv_holder.input_data,
filter_height=FILTER_CONF['height'],
filter_width=FILTER_CONF['width'],
output_channels=channels
)
conv_holder.conv.append(conv)
for i in range(1, num_conv):
conv = calculate_convolutional_layer(
x=conv_holder.conv[i - 1],
filter_height=FILTER_CONF['height'],
filter_width=FILTER_CONF['width'],
output_channels=channels
)
conv_holder.conv.append(conv)
    conv_holder.pool = calculate_max_pooling_layer(
        x=conv_holder.conv[-1],
        ksize=[1, 2, 2, 1],
        strides=[1, 2, 2, 1]
    )
return conv_holder
x = tf.placeholder(tf.float32, [1, 1200, 1600, 3])
unit1 = apply_vgg_network_unit(x=x, channels=2, num_conv=2)
unit2 = apply_vgg_network_unit(x=unit1.pool, channels=4, num_conv=2)
unit3 = apply_vgg_network_unit(x=unit2.pool, channels=8, num_conv=4)
unit4 = apply_vgg_network_unit(x=unit3.pool, channels=16, num_conv=4)
unit5 = apply_vgg_network_unit(x=unit4.pool, channels=32, num_conv=4)
sess = tf.InteractiveSession()
# sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Passing the stages we want as a list to sess.run lets us extract
# intermediate results of the feature extraction
result_unit2_conv, result_unit5_conv, result_unit5_pool = sess.run(
[unit2.conv[1], unit5.conv[2], unit5.pool],
feed_dict={x: image}
)
#result_list = sess.run(
# [unit2.conv[1], unit5.conv[2], unit5.pool],
# feed_dict={x: image}
#)
# NOTE: the two lines below depend on the commented-out result_list above and on
# flatten(), which is only defined further down, so they are left commented out.
#flattened_result_list = [flatten(res) for res in result_list]
#flattened_result_list
result_unit2_conv.shape
result_unit5_conv.shape
result_unit5_conv
result_unit5_conv.shape
def convert_nhwc_to_nchw(image):
return image.transpose([0, 3, 1, 2])
converted = convert_nhwc_to_nchw(result_unit5_conv)
converted.shape
def flatten(image):
num_batch, num_channel, num_height, num_width = image.shape
if num_batch != 1:
        raise ValueError('Unexpected batch size: expected 1.')
return image.reshape([num_channel, num_height * num_width])
test_data1 = np.arange(1 * 3 * 4 * 2).reshape([1, 3, 4, 2])
test_data1
flatten(test_data1)
test_data2 = np.arange(2 * 3 * 4 * 2).reshape([2, 3, 4, 2])
test_data2
flatten(test_data2)
# Compute the Gram matrix
def calculate_gram_matrix(x):
return np.dot(x, x.T)
flattened = flatten(converted)  # use the NCHW-converted feature map so flatten() unpacks the axes correctly
gram_matrix = calculate_gram_matrix(flattened)
gram_matrix
gram_matrix.shape
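# Hedged sketch (added, not part of the original notebook): in Gatys-style
# transfer the per-layer style loss compares Gram matrices of the style image
# and the generated image; gram_style and gram_generated are placeholder names.
# N, M = flattened.shape  # channels, height*width
# style_loss = np.sum((gram_style - gram_generated) ** 2) / (4.0 * N**2 * M**2)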
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Implementing Image Style Transfer
Step4: Implement VGG
Step6: Implement the max-pooling layer
Step9: Implement a class that holds intermediate results of the convolution and pooling steps
Step10: Build the VGG convolution and pooling layers
Step11: Create the image synthesis process
Step12: Test flatten
|
1,990
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import numexpr as ne
import numba
import math
import random
import matplotlib.pyplot as plt
import scipy as sp
import sys
%load_ext Cython
def primes_python(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
primes_python(20)
tp = %timeit -o primes_python(10000)
%%cython
def primes_cython1(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
tc1 = %timeit -o primes_cython1(10000)
%%cython
def primes_cython2(int n):
# Note the type declarations below
cdef list primes = [False, False] + [True] * (n - 2)
cdef int i = 2
cdef int k = 0
# The rest of the functions is unchanged
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
tc2 = %timeit -o primes_cython2(10000)
print("Cython version 1 speedup: {0}".format(tp.best/tc1.best))
print("Cython version 2 speedup: {0}".format(tp.best/tc2.best))
@numba.jit(nopython=True)
def primes_numba(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
res = []
for i in range(2,n):
if primes[i]: res.append(i)
return res
tn = %timeit -o primes_numba(10000)
%%cython -a
def primes_cython1(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
%%cython -a
def primes_cython2(int n):
# Note the type declarations below
cdef list primes = [False, False] + [True] * (n - 2)
cdef int i = 2
cdef int k = 0
# The rest of the functions is unchanged
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
# Matrices to use
A = np.random.random((1000,3))
B = np.random.random((500,3))
def dist(a, b):
return np.sqrt(np.sum((a-b)**2))
def distance_matrix_python(A, B):
m = A.shape[0]
n = B.shape[0]
D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i],B[j])
return D
%timeit distance_matrix_python(A,B)
%%cython -a
import numpy as np
def dist(a, b):
return np.sqrt(np.sum((a-b)**2))
def distance_matrix_cython0(A, B):
m = A.shape[0]
n = B.shape[0]
D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i],B[j])
return D
%timeit distance_matrix_cython0(A,B)
%%cython -a
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
def dist(cnp.ndarray[float64_t, ndim=1] a, cnp.ndarray[float64_t, ndim=1] b):
return np.sqrt(np.sum((a-b)**2))
def distance_matrix_cython1(cnp.ndarray[float64_t, ndim=2] A, cnp.ndarray[float64_t, ndim=2] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
cnp.ndarray[float64_t, ndim=2] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython1(A,B)
%%cython -a
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(cnp.ndarray[float64_t, ndim=1] a, cnp.ndarray[float64_t, ndim=1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython2(cnp.ndarray[float64_t, ndim=2] A, cnp.ndarray[float64_t, ndim=2] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
cnp.ndarray[float64_t, ndim=2] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython2(A,B)
%%cython -a
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython3(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython3(A,B)
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython4(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython4(A,B)
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython5(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_cython5(A,B)
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
cdef float64_t dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython6(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_cython6(A,B)
%%cython -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
cdef float64_t test1(float64_t a, float64_t b):
return a+b
test1(1.,1.)
%%cython -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
cpdef float64_t test2(float64_t a, float64_t b):
return a+b
test2(1,1)
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
cdef inline float64_t dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython7(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_cython7(A,B)
@numba.jit(nopython=True)
def dist(a, b):
n = a.shape[0]
ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return math.sqrt(ret)
@numba.jit(nopython=True)
def distance_matrix_numba(A, B):
m = A.shape[0]
n = B.shape[0]
D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_numba(A,B)
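# Hedged sketch (added, not in the original notebook): numba can also
# parallelise the outer loop with prange when compiled with parallel=True;
# the function below reuses the jitted dist() defined above.
@numba.jit(nopython=True, parallel=True)
def distance_matrix_numba_parallel(A, B):
    m = A.shape[0]
    n = B.shape[0]
    D = np.empty((m, n))
    for i in numba.prange(m):
        for j in range(n):
            D[i, j] = dist(A[i, :], B[j, :])
    return D
%timeit -n 10 distance_matrix_numba_parallel(A, B)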
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
cdef class A(object):
def d(self): return 0
cdef int c(self): return 0
cpdef int p(self): return 0
def test_def(self, long num):
while num > 0:
self.d()
num -= 1
def test_cdef(self, long num):
while num > 0:
self.c()
num -= 1
def test_cpdef(self, long num):
while num > 0:
self.p()
num -= 1
%%timeit n = 1000000
a1 = A()
a1.test_def(n)
%%timeit n = 1000000
a1 = A()
a1.test_cdef(n)
%%timeit n = 1000000
a1 = A()
a1.test_cpdef(n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of Contents
Step2: Let's evaluate the performance for the first version
Step3: And now we write our first Cython version, by just adding %%cython magic in the first line of the cell
Step4: We achieve a 2x speed improvement doing (practically) nothing!
Step5: Then
Step6: Numba wins this time! but
Step7: Alternative usage of Cython
Step8: Now let's improve this naive Cython implementation by statically defining the types of the variables
Step9: Typed Memory Views
Step10: Compiler optimization
Step11: Compiler directives
Step12: Pure C functions
Step13: Example of cdef and cpdef
Step14: Function inlining
Step15: What about Numba?
Step16: 4.- Other advanced things you can do with Cython
|
1,991
|
<ASSISTANT_TASK:>
Python Code:
data_rang = 9
pr_type = ['a', 'b', 'c', 'd']
p_type = [ np.random.choice(pr_type) for i in range(data_rang) ]
data = {'product_name' : ['x0', 'x1', 'x3', 'x2', 'x4', 'x5', 'x6', 'x7', 'x8'],
'T1': np.random.randint(100, size = [data_rang]),
'T2': np.random.randint(100, size = [data_rang]),
'T3': np.random.randint(100, size = [data_rang]),
'T4': np.random.randint(100, size = [data_rang]),
'T5': np.random.randint(100, size = [data_rang]),
'T6': np.random.randint(100, size = [data_rang]),
'T7': np.random.randint(100, size = [data_rang]),
'product_type': p_type}
test_data = pd.DataFrame(data, columns = ['product_name', 'T1', 'T2', 'T3', 'T4', 'T5', 'T6', 'T7', 'product_type'])
print test_data
def dealing_data(data, start_time, end_time):
product_ty = set(data['product_type'])
result_df = pd.DataFrame()
for item in product_ty:
tmp_data = data[data['product_type'] == item]
slice_data = slicing_data(tmp_data, start_time, end_time)
columns_name = ['product_name', 'product_type']
tmp_data = tmp_data.loc[:, columns_name]
tmp_data['statistic'] = np.zeros(np.array(tmp_data).shape[0])
tmp_data['statistic'] = np.sum(slice_data, axis = 1)
tmp_data = tmp_data.sort_values('statistic', ascending = False)
tmp_data['rank'] = range(len(tmp_data))
tmp_data['rank'] += 1
result_df = result_df.append(tmp_data)
print result_df
return result_df
def slicing_data(data, start_time, end_time):
#select_column = [pd.to_datetime(start_time), pd.to_datetime(end_time)]
#print data
#print "***********",data[select_column]
return data.loc[:, start_time : end_time]
#return data[select_column]
def query_rank(data, query_product_name, start_time, end_time):
re_data = dealing_data(data, start_time, end_time)
result = re_data[re_data['product_name'] == query_product_name]['rank'].values
return result[0]
# Query the sales rank of product x6 among products of its type over the selected period (T1 to T3)
result_rank = query_rank(test_data, 'x6', 'T1', 'T3')
print "query result , the rank is %d"%result_rank
# Return the overall sales ranking table
def dealing_data_b(data, start_time, end_time):
product_ty = set(data['product_type'])
result_df = pd.DataFrame()
for item in product_ty:
tmp_data = data[data['product_type'] == item]
slice_data = slicing_data_b(tmp_data, start_time, end_time)
columns_name = ['product_name', 'product_type']
tmp_data = tmp_data.loc[:, columns_name]
tmp_data['statistic'] = np.zeros(np.array(tmp_data).shape[0])
tmp_data['statistic'] = np.sum(slice_data, axis = 1)
tmp_data = tmp_data.sort_values('statistic', ascending = False)
tmp_data['rank'] = range(len(tmp_data))
tmp_data['rank'] += 1
result_df = result_df.append(tmp_data)
print result_df
return result_df
def slicing_data_b(data, start_time, end_time):
#select_column = [pd.to_datetime(start_time), pd.to_datetime(end_time)]
select_columns = [ it for it in pd.date_range(start_time, end_time)]
#print data
#print "***********",data[select_column]
#return data.loc[:, start_time : end_time]
return data[select_columns]
def query_rank_b(data, query_product_name, start_time, end_time):
re_data = dealing_data_b(data, start_time, end_time)
result = re_data[re_data['product_name'] == query_product_name]['rank'].values
return result[0]
# construct dataframe
sale_value_date = { el: np.random.randint(100, size = [data_rang]) for el in pd.date_range('20100101', '20100109')}
sale_value_date_df = pd.DataFrame(sale_value_date)
print sale_value_date_df
pr_type = ['a', 'b', 'c', 'd']
p_name = [ "x" + str(i) for i in range(data_rang) ]
p_type = [ np.random.choice(pr_type) for i in range(data_rang) ]
product_data = {'product_name': p_name,
'product_type': p_type}
product_data_df = pd.DataFrame(product_data, columns = ['product_name', 'product_type'])
print product_data_df
df = pd.concat([sale_value_date_df, product_data_df], axis = 1)
print df
result_rank_b = query_rank_b(df, 'x4', '2010-01-01', '2010-01-05')
print "query result , the rank is %d"%result_rank_b
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: main function
Step2: Query function: given a specified product name and start/end time parameters, return that product's sales rank within its product type
Step3: another form of dataframe
|
1,992
|
<ASSISTANT_TASK:>
Python Code:
import os
from skimage import io
from skimage.color import rgb2gray
from skimage import transform
from math import ceil
IMGSIZE = (100, 100)
def load_images(folder, scalefactor=(2, 2), labeldict=None):
images = []
labels = []
files = os.listdir(folder)
for file in (fname for fname in files if fname.endswith('.png')):
img = io.imread(folder + file).astype(float)
img = rgb2gray(img)
# Crop since some of the real world pictures are other shape
img = img[:IMGSIZE[0], :IMGSIZE[1]]
# Possibly downscale to speed up processing
img = transform.downscale_local_mean(img, scalefactor)
# normalize image range
img -= np.min(img)
img /= np.max(img)
images.append(img)
if labeldict is not None:
# lookup label for real world data in dict generated from labels.txt
key, _ = os.path.splitext(file)
labels.append(labeldict[key])
else:
            # infer label from filename
if file.find("einstein") > -1 or file.find("curie") > -1:
labels.append(1)
else:
labels.append(0)
return np.asarray(images)[:, None], np.asarray(labels)
x_train, y_train = load_images('data/aps/train/')
# Artificially pad Einstein's and Curie's to have a balanced training set
# ok, since we use data augmentation later anyway
sel = y_train == 1
repeats = len(sel) // sum(sel) - 1
x_train = np.concatenate((x_train[~sel], np.repeat(x_train[sel], repeats, axis=0)),
axis=0)
y_train = np.concatenate((y_train[~sel], np.repeat(y_train[sel], repeats, axis=0)),
axis=0)
x_test, y_test = load_images('data/aps/test/')
rw_labels = {str(key): 0 if label == 0 else 1
for key, label in np.loadtxt('data/aps/real_world/labels.txt', dtype=int)}
x_rw, y_rw = load_images('data/aps/real_world/', labeldict=rw_labels)
from mpl_toolkits.axes_grid import ImageGrid
from math import ceil
def imsshow(images, grid=(5, -1)):
assert any(g > 0 for g in grid)
grid_x = grid[0] if grid[0] > 0 else ceil(len(images) / grid[1])
grid_y = grid[1] if grid[1] > 0 else ceil(len(images) / grid[0])
axes = ImageGrid(pl.gcf(), "111", (grid_y, grid_x), share_all=True)
for ax, img in zip(axes, images):
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
ax.imshow(img[0], cmap='gray')
pl.figure(0, figsize=(16, 10))
imsshow(x_train, grid=(5, 1))
pl.show()
pl.figure(0, figsize=(16, 10))
imsshow(x_train[::-4], grid=(5, 1))
pl.show()
from keras.preprocessing.image import ImageDataGenerator
imggen = ImageDataGenerator(rotation_range=20,
width_shift_range=0.15,
height_shift_range=0.15,
shear_range=0.4,
fill_mode='constant',
cval=1.,
zoom_range=0.3,
channel_shift_range=0.1)
imggen.fit(x_train)
for batch in it.islice(imggen.flow(x_train, batch_size=5), 2):
pl.figure(0, figsize=(16, 5))
imsshow(batch, grid=(5, 1))
pl.show()
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from keras.models import Sequential
from keras.backend import image_data_format
def generate(figsize, nr_classes, cunits=[20, 50], fcunits=[500]):
model = Sequential()
cunits = list(cunits)
    input_shape = figsize + (1,) if image_data_format() == 'channels_last' \
        else (1,) + figsize
model.add(Conv2D(cunits[0], (5, 5), padding='same',
activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Convolutional layers
for nr_units in cunits[1:]:
model.add(Conv2D(nr_units, (5, 5), padding='same',
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Fully connected layers
model.add(Flatten())
for nr_units in fcunits:
model.add(Dense(nr_units, activation='relu'))
# Output layer
activation = 'softmax' if nr_classes > 1 else 'sigmoid'
model.add(Dense(nr_classes, activation=activation))
return model
from keras.optimizers import Adam
from keras.models import load_model
try:
model = load_model('aps_lenet.h5')
print("Model succesfully loaded...")
except OSError:
print("Saved model not found, traing...")
model = generate(figsize=x_train.shape[-2:], nr_classes=1,
cunits=[24, 48], fcunits=[100])
optimizer = Adam()
model.compile(loss='binary_crossentropy', optimizer=optimizer,
metrics=['accuracy'])
model.fit_generator(imggen.flow(x_train, y_train, batch_size=len(x_train)),
validation_data=imggen.flow(x_test, y_test),
steps_per_epoch=100, epochs=5,
verbose=1, validation_steps=256)
model.save('aps_lenet.h5')
from sklearn.metrics import confusion_matrix
def plot_cm(cm, classes, normalize=False,
title='Confusion matrix', cmap=pl.cm.viridis):
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
pl.imshow(cm, interpolation='nearest', cmap=cmap)
pl.title(title)
pl.colorbar()
tick_marks = np.arange(len(classes))
pl.xticks(tick_marks, classes, rotation=45)
pl.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in it.product(range(cm.shape[0]), range(cm.shape[1])):
pl.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
pl.tight_layout()
pl.ylabel('True label')
pl.xlabel('Predicted label')
y_pred_rw = model.predict_classes(x_rw, verbose=0).ravel()
plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,
classes=["Not Einstein", "Einstein"])
# Same size training set as LeNet
TRAININGSET_SIZE = len(x_train) * 5 * 100
batch_size = len(x_train)
nr_batches = TRAININGSET_SIZE // batch_size + 1
imgit = imggen.flow(x_train, y=y_train, batch_size=batch_size)
x_train_sampled = np.empty((TRAININGSET_SIZE, 1,) + x_train.shape[-2:])
y_train_sampled = np.empty(TRAININGSET_SIZE)
for batch, (x_batch, y_batch) in enumerate(it.islice(imgit, nr_batches)):
buflen = len(x_train_sampled[batch * batch_size:(batch + 1) * batch_size])
x_train_sampled[batch * batch_size:(batch + 1) * batch_size] = x_batch[:buflen]
y_train_sampled[batch * batch_size:(batch + 1) * batch_size] = y_batch[:buflen]
from sklearn.ensemble import RandomForestClassifier
rfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1,
verbose=True)
rfe = rfe.fit(x_train_sampled.reshape((TRAININGSET_SIZE, -1)), y_train_sampled)
y_pred_rw = rfe.predict(x_rw.reshape((len(x_rw), -1)))
plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,
classes=["Not Einstein", "Einstein"])
pl.show()
print("Rightly classified Einsteins:")
imsshow(x_rw[((y_rw - y_pred_rw) == 0) * (y_rw == 1)])
pl.show()
print("Wrongly classified images:")
imsshow(x_rw[(y_rw - y_pred_rw) != 0])
pl.show()
model = load_model('aps_lenet.h5')
enc_layers = it.takewhile(lambda l: not isinstance(l, keras.layers.Flatten),
model.layers)
encoder_model = keras.models.Sequential(enc_layers)
encoder_model.add(keras.layers.Flatten())
x_train_sampled_enc = encoder_model.predict(x_train_sampled, verbose=True)
rfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1,
verbose=True)
rfe = rfe.fit(x_train_sampled_enc, y_train_sampled)
y_pred_rw = rfe.predict(encoder_model.predict(x_rw, verbose=False))
plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,
classes=["Not Einstein", "Einstein"])
pl.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Training LeNet
Step3: Training Random Forests
Step4: So training on raw pixel values might not be a good idea. Let's build a feature extractor based on the trained LeNet (or any other pretrained image classifier)
|
1,993
|
<ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
base = BaseOverlay("base.bit")
hdmiin_frontend = base.video.hdmi_in.frontend
hdmiin_frontend.start()
hdmiin_frontend.mode
hdmiout_frontend = base.video.hdmi_out.frontend
hdmiout_frontend.mode = hdmiin_frontend.mode
hdmiout_frontend.start()
colorspace_in = base.video.hdmi_in.color_convert
colorspace_out = base.video.hdmi_out.color_convert
bgr2rgb = [0, 0, 1,
0, 1, 0,
1, 0, 0,
0, 0, 0]
colorspace_in.colorspace = bgr2rgb
colorspace_out.colorspace = bgr2rgb
colorspace_in.colorspace
pixel_in = base.video.hdmi_in.pixel_pack
pixel_out = base.video.hdmi_out.pixel_unpack
pixel_in.bits_per_pixel = 8
pixel_out.bits_per_pixel = 8
pixel_in.bits_per_pixel
inputmode = hdmiin_frontend.mode
framemode = VideoMode(inputmode.width, inputmode.height, 8)
vdma = base.video.axi_vdma
vdma.readchannel.mode = framemode
vdma.readchannel.start()
vdma.writechannel.mode = framemode
vdma.writechannel.start()
frame = vdma.readchannel.readframe()
vdma.writechannel.writeframe(frame)
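# Added hedged note (assumption based on the PYNQ video docs, not in the original):
# writeframe() passes ownership of the frame to the output DMA, so the frame
# object should not be reused or freed by user code after this call.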
vdma.readchannel.tie(vdma.writechannel)
vdma.readchannel.stop()
vdma.writechannel.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HDMI Frontend
Step2: Creating the device will signal to the computer that a monitor is connected. Starting the frontend will attempt to detect the video mode, blocking until a lock can be achieved. Once the frontend is started the video mode will be available.
Step3: The HDMI output frontend can be accessed in a similar way.
Step4: and the mode must be set prior to starting the output. In this case we are just going to use the same mode as the input.
Step5: Note that nothing will be displayed on the screen as no video data is currently being sent.
Step6: Pixel format conversion
Step7: Video DMA
Step8: In this case, because we are only using 8 bits per pixel, only the red channel is read and displayed.
Step9: Frame Ownership
|
1,994
|
<ASSISTANT_TASK:>
Python Code:
from flexx import app, ui, react
app.init_notebook()
# A bit of boilerplate to import an example app
import sys
#sys.path.insert(0, r'C:\Users\almar\dev\flexx\examples\ui')
sys.path.insert(0, '/home/almar/dev/pylib/flexx/examples/ui')
from twente_temperature import Twente
ui.Button(text='push me')
t = Twente()
t
t.plot.line_width(10)
colors = ['#a00', '#0a0', '#00a', '#990', '#909', '#0990'] * 2 + ['#000']
@react.connect('t.month.value')
def _update_line_color(v):
t.plot.line_color(colors[int(v)])
t.plot.marker_color(colors[int(v)])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Any widget can be shown by using it as a cell output
Step2: Because apps are really just Widgets, we can show our app in the same way
Step3: And we can interact with it, e.g. change input signals, and react to signal changes
|
1,995
|
<ASSISTANT_TASK:>
Python Code:
def fact(n):
    res = 1
    for i in range(2, n + 1):
        res = res * i
    return res
def nCr(n, r):
    return fact(n) // (fact(r) * fact(n - r))
n = 2
print("Number of Non-Decreasing digits: ", nCr(n + 9, 9))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,996
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import re
import sys
import numpy as np
textfile = open('tournamentinfo.txt')
text_table = [line.strip() for line in textfile.readlines()]
text_table
player_state = []
player_number = []
id_data = []
name_data = []
state = '([A-Z]{2})'
number = '([0-9]{1})'
dash = '^-'
for line in text_table:
if not re.search(dash, line):
if re.search(state, line) and not re.search(dash, line):
text = line.replace('/', '').replace('-','').replace('>','').replace(':','')
state_num = text.strip().replace('|', ',')[:3].strip(',')
if re.search(state, state_num):
player_state.append(state_num)
id_data.append(text[5:].strip('').replace('|',','))
elif re.search(number, state_num):
name_data.append(text[4:].strip().replace('|',','))
player_number.append(state_num)
player_names = []
rounds = []
uscf_id = []
ratings = []
total_points = []
for item in name_data:
total_points.append(float(item[33:36]))
player_names.append(item[:28].split())
rounds.append(item[39:].replace(' ', '' ).strip(',').split(','))
pd.options.display.max_rows = 70
games_df = pd.DataFrame(rounds, columns = ['Round1','Round2','Round3','Round4','Round5','Round6','Round7'],
index = [player for player in player_number])
for item in id_data:
uscf_id.append(item.replace('R', '').replace('P', ' ')[:8])
ratings.append(item.replace('R', '').replace('P', ' ')[11:16])
games_df['Ratings'] = [rate for rate in ratings]
games_df['Total_Points'] = [n for n in total_points]
games_df['Player_ID'] = [n for n in player_number]
games_df['State'] = [state for state in player_state]
games_df['Player_Names'] = [player for player in player_names]
round_columns = ['Round1', 'Round2', 'Round3', 'Round4', 'Round5', 'Round6', 'Round7']
result_letters = ['W', 'U', 'D', 'L', 'H', 'B', 'X']
for col in round_columns:
    for letter in result_letters:
        games_df[col] = games_df[col].map(lambda x, l=letter: x.lstrip(l))
    games_df[col].replace('', 0, inplace=True)
games_df
games_df.set_index(['Player_Names'])
games_df.set_index(['Round1', 'Round2', 'Round3', 'Round4', 'Round5', 'Round6', 'Round7'])['Ratings']
def parse_series(series):
player_mapping = {}
average_rating = []
for index, row in series.iterrows():
player_mapping[int(row['Player_ID'])] = int(row['Ratings'])
for index, row in series.iterrows():
if row[0] is not None:
try:
key1 = int(row.Round1)
row.Round1 = player_mapping[key1]
key2 = int(row.Round2)
row.Round2 = player_mapping[key2]
key3 = int(row.Round3)
row.Round3 = player_mapping[key3]
key4 = int(row.Round4)
row.Round4 = player_mapping[key4]
key5 = int(row.Round5)
row.Round5 = player_mapping[key5]
key6 = int(row.Round6)
row.Round6 = player_mapping[key6]
key7 = int(row.Round7)
row.Round7 = player_mapping[key7]
total = row.Round1 + row.Round2 + row.Round3 + row.Round4 + row.Round5 + row.Round6 + row.Round7
games = len([row.Round1, row.Round2, row.Round3, row.Round4, row.Round5, row.Round6, row.Round7])
average_rating.append(total/games)
except KeyError:
total = int(row.Round1) + int(row.Round2) + int(row.Round3) + int(row.Round4) + int(row.Round5) + int(row.Round6) + int(row.Round7)
games = len([row.Round1, row.Round2, row.Round3, row.Round4, row.Round5, row.Round6, row.Round7])
average_rating.append(total/games)
return average_rating
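# Added hedged note (assumption, not in the original): rounds stored as 0
# (byes/unplayed games) are still counted in the 7-game denominator above,
# which pulls the average opponent rating down for players with unplayed
# matches; filtering out zero entries before averaging, e.g.
#   played = [r for r in (row.Round1, row.Round2, row.Round3, row.Round4,
#                         row.Round5, row.Round6, row.Round7) if r]
#   average_rating.append(sum(played) / len(played) if played else 0)
# would avoid that bias.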
average_rate = parse_series(games_df)
games_df['Average_Rating'] = [n for n in average_rate]
games_df.head()
games_df.tail()
games_df.to_csv('Project_2_results', sep=',', encoding='utf-8')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the text file into the Ipython Notebook
Step2: First I want to separate the lines, so I will strip them and store them in a list using a list comprehension.
Step3: Print out the list to inspect how it's formatted and what changes will need to be made. (Cleaning)
Step4: I want to split up what's easy for me to take from the list, so I've created a few lists to store those in.
Step5: Using regex to clean some of the data inside the table before we loop through it, so it is easier to access the desired information from the table list we've created.
Step6: Using a for-loop to parse through the table; I've split the rows separately because they are formatted differently, making it easier to work once on each unique row set.
Step7: Because I still need to process more information, and because some cells are malformed with extra data, I need to run another for-loop for more detailed cleaning. I am also going to create some lists of columns to store these values.
Step8: I want to set the max number of rows to 70 for DataFrames because there are a total of 64 IDs, and if there are more than that amount it can be a sign that I've made a mistake.
Step9: I will be doing the same with the alternate rows, cleaning them in the process.
Step10: Here I want to add the columns that the DataFrame needs, so I use indexing and a list comprehension for each particular one.
Step11: The following uses indexing and mapping of cells containing letters, and removes them using the map function and lambdas.
Step12: I want to replace the empty string values with NaN, but because further along this notebook I was presented with a ValueError, I've gone back and set them to 0. (Not sure if this is going to affect the data much, although some rows contain a number of unplayed games, and it may contribute to some difference for players with unplayed matches.)
Step13: Printing the DataFrame constantly throughout helps to visualize the changes and not make unnecessary changes to the object.
Step14: I have indexed to the Player Names to compare the data better to the original table.
Step15: Below I've created a series of the data I wish to analyze further and run computations on.
Step16: Here I wanted to create a function that would loop through the data and return the desired result.
Step17: Creating an object that will process the parse_series function and return its list values.
Step18: Adding the column -Average_Rating- to the games_df DataFrame object.
Step19: As we can see, the Average Ratings have been computed and applied to the original DataFrame object; here are the top 5 rows.
Step20: Here are the last 5 rows in the DataFrame object.
Step21: Now that all computations are done and the data we want is in the DataFrame, we can write it out to a CSV file using the to_csv method.
|
1,997
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import iris
import iris.plot as iplt
import iris.coord_categorisation
import cf_units
import numpy
%matplotlib inline
infile = '/g/data/ua6/DRSv2/CMIP5/NorESM1-M/rcp85/mon/ocean/r1i1p1/hfbasin/latest/hfbasin_Omon_NorESM1-M_rcp85_r1i1p1_200601-210012.nc'
cube = iris.load_cube(infile)
print(cube)
dim_coord_names = [coord.name() for coord in cube.dim_coords]
print(dim_coord_names)
cube.coord('latitude').points
aux_coord_names = [coord.name() for coord in cube.aux_coords]
print(aux_coord_names)
cube.coord('region')
global_cube = cube.extract(iris.Constraint(region='global_ocean'))
def convert_to_annual(cube):
Convert data to annual timescale.
Args:
cube (iris.cube.Cube)
full_months(bool): only include years with data for all 12 months
iris.coord_categorisation.add_year(cube, 'time')
iris.coord_categorisation.add_month(cube, 'time')
cube = cube.aggregated_by(['year'], iris.analysis.MEAN)
cube.remove_coord('year')
cube.remove_coord('month')
return cube
global_cube_annual = convert_to_annual(global_cube)
print(global_cube_annual)
iplt.plot(global_cube_annual[5, ::])
iplt.plot(global_cube_annual[20, ::])
plt.show()
def convert_to_seconds(time_axis):
Convert time axis units to seconds.
Args:
time_axis(iris.DimCoord)
old_units = str(time_axis.units)
old_timestep = old_units.split(' ')[0]
new_units = old_units.replace(old_timestep, 'seconds')
new_unit = cf_units.Unit(new_units, calendar=time_axis.units.calendar)
time_axis.convert_units(new_unit)
return time_axis
def linear_trend(data, time_axis):
Calculate the linear trend.
polyfit returns [a, b] corresponding to y = a + bx
masked_flag = False
if type(data) == numpy.ma.core.MaskedArray:
if type(data.mask) == numpy.bool_:
if data.mask:
masked_flag = True
elif data.mask[0]:
masked_flag = True
if masked_flag:
return data.fill_value
else:
return numpy.polynomial.polynomial.polyfit(time_axis, data, 1)[-1]
def calc_trend(cube):
    """Calculate the linear trend along the time axis.

    Args:
      cube (iris.cube.Cube)

    """
time_axis = cube.coord('time')
time_axis = convert_to_seconds(time_axis)
trend = numpy.ma.apply_along_axis(linear_trend, 0, cube.data, time_axis.points)
trend = numpy.ma.masked_values(trend, cube.data.fill_value)
return trend
trend_data = calc_trend(global_cube_annual)
trend_cube = global_cube_annual[0, ::].copy()
trend_cube.data = trend_data
trend_cube.remove_coord('time')
#trend_unit = ' yr-1'
#trend_cube.units = str(global_cube_annual.units) + trend_unit
iplt.plot(trend_cube)
plt.show()
print(global_cube_annual)
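# Heat transport convergence: difference between adjacent latitude points,
# evaluated at the midpoints of the original latitude axis.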
diffs_data = numpy.diff(global_cube_annual.data, axis=1)
lats = global_cube_annual.coord('latitude').points
diffs_lats = (lats[1:] + lats[:-1]) / 2.
print(diffs_data.shape)
print(len(diffs_lats))
plt.plot(diffs_lats, diffs_data[0, :])
plt.plot(lats, global_cube_annual[0, ::].data / 10.0)
plt.show()
time_axis = global_cube_annual.coord('time')
time_axis = convert_to_seconds(time_axis)
diffs_trend = numpy.ma.apply_along_axis(linear_trend, 0, diffs_data, time_axis.points)
diffs_trend = numpy.ma.masked_values(diffs_trend, global_cube_annual.data.fill_value)
print(diffs_trend.shape)
plt.plot(diffs_lats, diffs_trend * -1)
plt.axhline(y=0)
plt.show()
plt.plot(diffs_lats, diffs_trend * -1, color='black')
plt.axhline(y=0)
plt.axvline(x=30)
plt.axvline(x=50)
plt.axvline(x=77)
plt.xlim(20, 90)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ocean heat transport in CMIP5 models
Step5: So for any given year, the annual mean shows ocean heat transport away from the tropics.
Step6: So the trends in ocean heat transport suggest reduced transport in the RCP 8.5 simulation (i.e. the trend plot is almost the inverse of the climatology plot).
Step7: Convergence trend
|
1,998
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
endog = DataReader('UNRATE', 'fred', start='1954-01-01')
hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)
mod_ucarima = sm.tsa.UnobservedComponents(endog, 'rwalk', autoregressive=4)
# Here the powell method is used, since it achieves a
# higher loglikelihood than the default L-BFGS method
res_ucarima = mod_ucarima.fit(method='powell', disp=False)
print(res_ucarima.summary())
mod_uc = sm.tsa.UnobservedComponents(
endog, 'rwalk',
cycle=True, stochastic_cycle=True, damped_cycle=True,
)
# Here the powell method gets close to the optimum
res_uc = mod_uc.fit(method='powell', disp=False)
# but to get to the highest loglikelihood we do a
# second round using the L-BFGS method.
res_uc = mod_uc.fit(res_uc.params, disp=False)
print(res_uc.summary())
fig, axes = plt.subplots(2, figsize=(13,5));
axes[0].set(title='Level/trend component')
axes[0].plot(endog.index, res_uc.level.smoothed, label='UC')
axes[0].plot(endog.index, res_ucarima.level.smoothed, label='UC-ARIMA(2,0)')
axes[0].plot(hp_trend, label='HP Filter')
axes[0].legend(loc='upper left')
axes[0].grid()
axes[1].set(title='Cycle component')
axes[1].plot(endog.index, res_uc.cycle.smoothed, label='UC')
axes[1].plot(endog.index, res_ucarima.autoregressive.smoothed, label='UC-ARIMA(2,0)')
axes[1].plot(hp_cycle, label='HP Filter')
axes[1].legend(loc='upper left')
axes[1].grid()
fig.tight_layout();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hodrick-Prescott (HP) filter
Step2: Unobserved components and ARIMA model (UC-ARIMA)
Step3: Unobserved components with stochastic cycle (UC)
Step4: Graphical comparison
|
1,999
|
<ASSISTANT_TASK:>
Python Code:
from enum import Enum
class AccountType(Enum):
SAVINGS = 1
CHECKING = 2
AccountType.SAVINGS
AccountType.SAVINGS == AccountType.SAVINGS
AccountType.SAVINGS == AccountType.CHECKING
AccountType.SAVINGS.name
class BankAccount():
def __init__(self,owner,accountType):
self.owner=owner
self.accountType=accountType
self.balance=0
def withdraw(self,amount):
if amount<0:
raise ValueError("amount<0")
if self.balance<amount:
raise ValueError("withdraw more than balance")
self.balance-=amount
def deposit(self,amount):
if amount<0:
raise ValueError("amount<0")
self.balance+=amount
def __str__(self):
return "owner:{!s} account type:{!s}".format(self.owner,self.accountType.name)
def __len__(self):
return self.balance
myaccount=BankAccount("zhaizhai",AccountType.CHECKING)
print(myaccount.balance)
class BankUser():
def __init__(self,owner):
self.owner=owner
self.SavingAccount=None
self.CheckingAccount=None
def addAccount(self,accountType):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
self.SavingAccount=BankAccount(self.owner,accountType)
else:
print("more than one saving account!")
raise AttributeError("more than one saving account!")
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
self.CheckingAccount=BankAccount(self.owner,accountType)
else:
print("more than one checking account!")
raise AttributeError("more than one checking account!")
else:
print("no such account type!")
raise ValueError("no such account type!")
def getBalance(self,accountType):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
print("saving account not exist")
raise AttributeError("saving account not exist")
else:
return self.SavingAccount.balance
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
print("checking account not exist")
raise AttributeError("checking account not exist")
else:
return self.CheckingAccount.balance
else:
print("no such account type!")
raise AttributeError("no such account type!")
def deposit(self,accountType,amount):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
print("saving account not exist")
raise AttributeError("saving account not exist")
else:
return self.SavingAccount.deposit(amount)
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
print("checking account not exist")
raise AttributeError("checking account not exist")
else:
return self.CheckingAccount.deposit(amount)
else:
print("no such account type!")
raise AttributeError("no such account type!")
def withdraw(self,accountType,amount):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
print("saving account not exist")
raise AttributeError("saving account not exist")
else:
return self.SavingAccount.withdraw(amount)
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
print("checking account not exist")
raise AttributeError("checking account not exist")
else:
return self.CheckingAccount.withdraw(amount)
else:
print("no such account type!")
raise AttributeError("no such account type!")
def __str__(self):
s="owner:{!s}".format(self.owner)
if self.SavingAccount!=None:
s=s+"account type: Saving balance:{:.2f}".format(self.SavingAccount.balance)
if self.CheckingAccount!=None:
s=s+"account type: Checking balance:{:.2f}".format(self.CheckingAccount.balance)
return s
newuser=BankUser("zhaizhai")
print(newuser)
newuser.addAccount(AccountType.SAVINGS)
print(newuser)
newuser.deposit(AccountType.SAVINGS,2)
newuser.withdraw(AccountType.SAVINGS,1)
print(newuser)
newuser.withdraw(AccountType.CHECKING,1)
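# ATMSession is a closure factory: it returns an Interface function bound to the given bankUser.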
def ATMSession(bankUser):
def Interface():
option1=input("Enter Options:\
1)Exit\
            2)Create Account\
3)Check Balance\
4)Deposit\
5)Withdraw")
if option1=="1":
Interface()
return
option2=input("Enter Options:\
1)Checking\
2)Saving")
if option1=="2":
if option2=="1":
bankUser.addAccount(AccountType.CHECKING)
Interface()
return
elif option2=="2":
bankUser.addAccount(AccountType.SAVINGS)
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="3":
if option2=="1":
print(bankUser.getBalance(AccountType.CHECKING))
Interface()
return
elif option2=="2":
print(bankUser.getBalance(AccountType.SAVINGS))
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="4":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.deposit(AccountType.CHECKING,int(option3))
Interface()
return
elif option2=="2":
bankUser.deposit(AccountType.SAVINGS,int(option3))
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="5":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.withdraw(AccountType.CHECKING,int(option3))
Interface()
return
elif option2=="2":
bankUser.withdraw(AccountType.SAVINGS,int(option3))
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
print("no such operation")
raise AttributeError("no such operation")
return Interface
myATM=ATMSession(newuser)
myATM()
print(newuser)
%%file bank.py
from enum import Enum
class AccountType(Enum):
SAVINGS = 1
CHECKING = 2
class BankAccount():
def __init__(self,owner,accountType):
self.owner=owner
self.accountType=accountType
self.balance=0
def withdraw(self,amount):
if type(amount)!=int:
raise ValueError("not integer amount")
if amount<0:
raise ValueError("amount<0")
if self.balance<amount:
raise ValueError("withdraw more than balance")
self.balance-=amount
def deposit(self,amount):
if type(amount)!=int:
raise ValueError("not integer amount")
if amount<0:
raise ValueError("amount<0")
self.balance+=amount
def __str__(self):
return "owner:{!s} account type:{!s}".format(self.owner,self.accountType.name)
def __len__(self):
return self.balance
def ATMSession(bankUser):
def Interface():
option1=input("Enter Options:\
1)Exit\
            2)Create Account\
3)Check Balance\
4)Deposit\
5)Withdraw")
if option1=="1":
return
option2=input("Enter Options:\
1)Checking\
2)Saving")
if option1=="2":
if option2=="1":
bankUser.addAccount(AccountType.CHECKING)
return
elif option2=="2":
bankUser.addAccount(AccountType.SAVINGS)
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="3":
if option2=="1":
print(bankUser.getBalance(AccountType.CHECKING))
return
elif option2=="2":
print(bankUser.getBalance(AccountType.SAVINGS))
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="4":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.deposit(AccountType.CHECKING,int(option3))
return
elif option2=="2":
bankUser.deposit(AccountType.SAVINGS,int(option3))
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="5":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.withdraw(AccountType.CHECKING,int(option3))
return
elif option2=="2":
bankUser.withdraw(AccountType.SAVINGS,int(option3))
return
else:
print("no such account type")
raise AttributeError("no such account type")
print("no such operation")
raise AttributeError("no such operation")
return Interface
class Regression():
def __init__(self,X,y):
self.X=X
self.y=y
self.alpha=0.1
def fit(self,X,y):
return
def get_params(self):
return self.beta
def predict(self,X):
import numpy as np
return np.dot(X,self.beta)
    def score(self,X,y):
        import numpy as np
        return 1-np.sum((y-self.predict(X))**2)/np.sum((y-np.mean(y))**2)
def set_params(self,alpha):
self.alpha=alpha
class OLSRegression(Regression):
def fit(self):
import numpy as np
X=self.X
y=self.y
self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y)
ols1=OLSRegression([[2],[3]],[[1],[2]])
ols1.fit()
ols1.predict([[2],[3]])
import numpy as np
X=[[2],[3]]
y=[[1],[2]]
beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y)
class RidgeRegression(Regression):
def fit(self):
import numpy as np
X=self.X
y=self.y
        # the ridge penalty belongs on the diagonal only: X'X + alpha^2 * I
        self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)+self.alpha**2*np.eye(np.shape(X)[1])),np.transpose(X)),y)
return
ridge1=RidgeRegression([[2],[3]],[[1],[2]])
ridge1.fit()
ridge1.predict([[2],[3]])
ridge1.score([[2],[3]],[[1],[2]])
class LassoRegression(Regression):
def fit(self):
from sklearn.linear_model import Lasso
myLs=Lasso(self.alpha)
myLs.fit(self.X,self.y)
self.beta=myLs.coef_.reshape((-1,1))
self.beta0=myLs.intercept_
return
def predict(self,X):
import numpy as np
return np.dot(X,self.beta)+self.beta0
lasso1=LassoRegression([[2],[3]],[[1],[2]])
lasso1.fit()
lasso1.predict([[2],[3]])
lasso1.score([[2],[3]],[[1],[2]])
from sklearn.linear_model import Lasso
myLs=Lasso(alpha=0.1)
myLs.fit([[2],[3]],[[1],[1]])
beta=np.array(myLs.coef_)
print(beta.reshape((-1,1)))
beta0=myLs.intercept_
print(beta0)
from sklearn.datasets import load_boston
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
import statsmodels.api as sm
import numpy as np
boston=load_boston()
boston_x=boston.data
boston_y=boston.target
kf=KFold(n_splits=2)
kf.get_n_splits(boston)
ols1_m=0
ridge1_m=0
lasso1_m=0
for train_index, test_index in kf.split(boston_x):
X_train, X_test = boston_x[train_index], boston_x[test_index]
y_train, y_test = boston_y[train_index], boston_y[test_index]
y_train=y_train.reshape(-1,1)
y_test=y_test.reshape(-1,1)
ols1=OLSRegression(sm.add_constant(X_train),y_train)
ols1.fit()
ols1_m+=ols1.score(sm.add_constant(X_test),y_test)
print("OLS score:",ols1.score(sm.add_constant(X_test),y_test))
ridge1=RidgeRegression(sm.add_constant(X_train),y_train)
ridge1.fit()
ridge1_m+=ridge1.score(sm.add_constant(X_test),y_test)
print("ridge score:",ridge1.score(sm.add_constant(X_test),y_test))
lasso1=LassoRegression(X_train,y_train)
lasso1.fit()
lasso1_m+=lasso1.score(X_test,y_test)
print("lasso score:",lasso1.score(X_test,y_test))
break
print(ols1_m,ridge1_m,lasso1_m)
ols1.get_params()
ols_r=[]
ridge_r=[]
lasso_r=[]
alpha_l=[]
for alpha_100 in range(5,100,5):
alpha=alpha_100/100
alpha_l.append(alpha)
for train_index, test_index in kf.split(boston_x):
X_train, X_test = boston_x[train_index], boston_x[test_index]
y_train, y_test = boston_y[train_index], boston_y[test_index]
y_train=y_train.reshape(-1,1)
y_test=y_test.reshape(-1,1)
ols1=OLSRegression(sm.add_constant(X_train),y_train)
ols1.set_params(alpha)
ols1.fit()
ols_r.append(ols1.score(sm.add_constant(X_test),y_test))
ridge1=RidgeRegression(sm.add_constant(X_train),y_train)
ridge1.set_params(alpha)
ridge1.fit()
ridge_r.append(ridge1.score(sm.add_constant(X_test),y_test))
lasso1=LassoRegression(X_train,y_train)
lasso1.set_params(alpha)
lasso1.fit()
lasso_r.append(lasso1.score(X_test,y_test))
break
import matplotlib.pyplot as plt
plt.plot(alpha_l,ols_r,label="linear regression")
plt.plot(alpha_l,ridge_r,label="ridge")
plt.plot(alpha_l,lasso_r,label="lasso")
plt.xlabel("alpha")
plt.ylabel("$R^{2}$")
plt.title("the relation of R squared with alpha")
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An Enum stands for an enumeration; it's a convenient way to define a fixed list of things. Typing AccountType.SAVINGS
Step2: returns a Python representation of an enumeration. You can compare these account types
Step3: To get a string representation of an Enum, you can use its .name attribute.
Step4: Part 1
Step5: Part 2
Step6: Write some simple tests to make sure this is working. Think of edge scenarios a user might try to do.
Step7: Part 4
Step8: Problem 2
Step9: Part 2
Step10: Part 3
Step11: Part 3
Step12: Part 4
Step13: Part 5
|